Flash + Cloud = Savings: Let’s Do the Math

Flash Data Center

The hardware and software to make an all-flash data center a reality are on the market and ready to go. IT leaders and data center managers know that flash makes a world of difference in application performance, but the one thing stopping enterprises from adopting this approach is a business case demonstrating the overall price and total cost of ownership of an all-flash environment. What they don't know is whether their enterprises need to, or should, make capital expenditures so that every single application (big or small, mission critical or not) achieves superior levels of performance.

Let’s explore how decreasing costs are changing the equation as well as the key factors—from flash technology to cloud delivery—that businesses should consider as they explore the feasibility of an all-flash data center.

The Cost of Flash Is Falling

If you had asked me last year, I would have said that flash, combined with traditional spinning disks, would be the mainstream data center storage structure for the next 10 years. Why? Because of the lower capacity and higher cost of flash. This year, when I look at the technological possibilities and costs, I am instead betting on a future of data centers that partner flash with cloud.

Let me be clear: I have nothing against traditional spinning disk. It has served the market well for the past 60 years. But there comes a time when innovation needs to put legacy out to pasture. That time is now, and the harbinger of change is rapidly falling costs. This year the cost per terabyte (TB) of flash is about 19% less than that of traditional spinning disk when you examine the following five factors:

  • Packaging
  • Power
  • Cooling
  • Maintenance
  • Space and disk sharing

In 2015, flash was 50% more expensive when considering the factors mentioned above. The factor that has most substantially reduced costs from last year to this year is “space and disk sharing,” which encompasses deduplication, compression, thin provisioning and efficient snapshots. These technologies have been around for years in traditional storage arrays, but the newest players in the market have built arrays from scratch with these technologies at their core instead of bolting them on as add-ons. In addition, the algorithms that storage vendors have developed make space usage more efficient, and companies are also using “space-efficient copies” of their data more often so that they don’t have to purchase a lot of extra disk to keep copies available. Added all together, the result is a big opportunity for space and disk storage cost savings.

Explosive Data Growth

2015 was a year of explosive data growth: over 300 exabytes, according to the ESG Digital Archive Market Forecast! Of that 300, 88% was unstructured data. Why has unstructured data been such a space hog for the past five years, when structured data previously drove growth? One culprit is Generation Y, also known as Millennials, who expect more than plain-text outputs for their communication. Used to incorporating images, sounds, formatting and even video into their communications, these “digital natives” (so dubbed by the Pew Research Center) are sharing and storing data at record-breaking rates. Their sophisticated usage of technology (along with a growing tech dependence across all other generations, industries and organizations) means we are storing more data than ever.

The Math: How Flash Lowers Costs

Knowing that data will continue to grow, and fast, let’s do the math to see where flash is helping to lower costs. We can start with an example of a data center with 500 TBs of used storage (assuming 88%, or 440 TBs, is unstructured data and the remaining 12%, or 60 TBs, is structured data).

When we consider the 440 TBs of unstructured data, let’s assume that 100 TBs is stale data that is never looked at, but is needed for retention reasons. What if we could move that 100 TBs to AWS or Azure in the cloud but allow users to still see it as part of their active file systems? According to Forrester Research, the cost of storing 100 TBs in the cloud is about $250,000 per year, compared to $950,000 per year to keep 100 TBs on internal storage. This $700,000 in savings takes into account acquisition costs of the hardware, RAID considerations, maintenance, administrators, facilities and migrations.
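To make that savings figure concrete, here is a minimal sketch of the tiering arithmetic in Python. The per-100-TB costs are the Forrester estimates cited above; the variable names are illustrative, not from any vendor tool.

```python
# Annual cost figures (Forrester estimates cited in the text), per 100 TB per year.
CLOUD_COST_PER_100TB = 250_000    # keeping 100 TB in AWS or Azure
ONPREM_COST_PER_100TB = 950_000   # fully loaded on-premises cost: hardware,
                                  # RAID, maintenance, admins, facilities, migrations

stale_tb = 100  # stale-but-retained unstructured data moved to the cloud

# Savings scale linearly with the amount of data tiered off internal storage.
savings = (ONPREM_COST_PER_100TB - CLOUD_COST_PER_100TB) * stale_tb / 100
print(f"Annual savings from tiering {stale_tb} TB: ${savings:,.0f}")
# prints: Annual savings from tiering 100 TB: $700,000
```

The same formula also answers the follow-on question a data center manager would ask: at what tiered volume does the savings cover a given flash purchase?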

That $700,000 in savings gained by moving 100 TBs to the cloud is enough to start a purchase of an all-flash array, and it’s only one cost-saving opportunity. Here’s another way to bring costs down further. Take the remaining 340 TBs of unstructured, still-active data and use deduplication and compression to reduce it by 65-75%. Those 340 TBs could fall to only 120 TBs, and that’s still not all we can do to bring costs down.

Perhaps that 60 TBs of structured data is made up of two copies of a database, one for production and one for reporting. Here is a two-step process to save storage costs.

  1. Make the reporting database a space-efficient copy of the production database to cut the storage in half to 30 TBs.
  2. Use compression and deduplication to shrink the database by 65%, which would bring us down to 10.5 TBs of structured data!

With this impressive blend of flash and cloud technology, 500 TBs of costly data storage fell to around 130 TBs. Those numbers are exactly why we are seeing businesses begin to shift away from traditional storage and look to the innovative and cost-saving possibilities of the flash-cloud combination.

Contact IDS to learn more about all-flash data center possibilities and how storage technologies are changing for the better.

Attend Fast Forward 2016 to learn more about what’s next in Cloud, Security and Big Data — July 13 in Chicago