The world that we live in is being disrupted by technology at a more rapid pace than ever. Cloud, big data, and security are just a few topics evolving daily. To be a successful IT leader, you need to do more than just keep pace—you need to move faster than your competition. At IDS Fast Forward 2016, learn how to do this firsthand from industry leaders, hundreds of your peers, and our world-class sponsors. Fast Forward is more than a conference—it’s an opportunity to learn how to drive your career forward.
Last week, IDS hosted a dinner featuring a presentation about current cloud trends and how IDS is helping our customers create their cloud strategy. The topic generated a lot of very interesting conversation, and one of the biggest discussions was about cloud contracts. If you’re going to trust a partner to host your data, it is critical to have a contract in place that protects the interests of both parties. Many large cloud providers have a “take it or leave it” approach to contracts, while others will work to customize to your needs. Regardless of the flexibility, it is imperative that you understand all aspects of your cloud contract and what it means to your business. Since this was a highly discussed item at our dinner, I decided to create a list of the top 5 contract considerations when evaluating Cloud Providers.
Thinking back on it, I can very specifically remember when I started to really care about “The Cloud” and how drastically it has changed my current way of thinking about any services that are provided to me. Personally, the moment of clarity on cloud came shortly after I got both my iPhone and iPad and was becoming engrossed in the plethora of applications available to me. Everything from file sharing and trip planning to Angry Birds and Words with Friends … I was overwhelmed with the amount of things I could accomplish from my new mobile devices and how much less dependent I was becoming on my physical location, or the specific device I was using, while becoming completely dependent on the applications that I used on a day-to-day basis. Now I don’t care if I’m on my iPad at the beach or at home on my computer as long as I can access applications like TripIt or Dropbox, because I know my information will be there regardless of my location.
As I became more used to this concept, I quickly became an application snob and wouldn’t consider any application that wouldn’t allow me cross-platform access to use from many (or all) of my devices. What good is storing my information in an application on my iPhone if I can’t access it from my iPad or home computer? As this concept was ingrained, I became intolerant of any applications that wouldn’t sync without my manual interaction. If I had to sync via a cable or a third party service, it was too inconvenient and would render the application useless to me in most cases. I needed applications that would make all connectivity and access magically happen behind the scenes, while providing me with the most seamless and simplistic user interface possible. Without even knowing it, I had become addicted to the cloud.
Cloud takes the emphasis away from infrastructure and puts it back where it should be: on the application. Do I, as a consumer, have anything to benefit from creating a grand infrastructure at home where my PC, iPhone, iPad, Android phone, and Mac can talk to one another? I could certainly develop some sort of complex scheme with a network of sync cables and custom-written software to interface between all of these different devices …
But how would I manage it? How would I maintain it as the devices and applications change? How would I ensure redundancy in all of the pieces so that a hardware or software failure wouldn’t take down the infrastructure that would become critical to my day-to-day activities? And how would I fund this venture?
I don’t want to worry about all of those things. I want a service … or a utility. I want something I can turn on and off and pay for only when I use it. I want someone else to maintain it for me and provide me SLAs so I don’t have to worry about the logistics on the backend. Very quickly I became a paying customer of Hulu, Netflix, Evernote, Dropbox, TripIt, LinkedIn, and a variety of other service providers. They provide me with the applications I require to solve the needs I have on a day-to-day basis. The beautiful part is that I don’t ever have to worry about anything but the application and the information that I put into it. Everything else is taken care of for me as part of a monthly or annual fee. I’m now free to access my data from anywhere, anytime, from any device and focus on what really matters to me.
If you think about it, this concept isn’t at all foreign to the business world. How many businesses out there really make their money from creating a sophisticated backend infrastructure and mechanisms for accessing that infrastructure? Sure, there are high-frequency trading firms and service providers that actually do make their money based on this. But the majority of businesses today run complex and expensive infrastructures simply because that is what their predecessors have handed down to them and they have no choice but to maintain it.
Why not shift that mindset and start considering a service or utility-based model? Why spend millions of dollars building a new state-of-the-art Data Center when they already exist all over the world and you can leverage them for an annual fee? Why not spend your time developing your applications and intellectual property, which are more likely to be the secret to your company’s success and profitability, and let someone else deal with the logistics of the backend?
This is what the cloud means to business right now. Is it perfect for everyone? Not even close. And unfortunately the industry is full of misleading cloud references because it is the biggest buzzword since “virtualization” and everyone wants to ride the wave. Providing a cloud for businesses is a very complex concept and requires a tremendous amount of strategy, vision, and security to be successful. If I’m TripIt and I lose your travel information while you’re leveraging my free service, do you really have a right to complain? If you’re an insurance company and you pay me thousands of dollars per month to securely house your customer records and I lose some of them, that’s a whole different ballgame. And unfortunately there have been so many instances of downtime, lost data, and leaked personal information that the cloud seems to be moving from a white, fluffy cloud surrounded by sunshine to an ominous gray cloud that brings bad weather and destruction.
The focus of my next few blogs will be on the realities of the cloud concept and how to sort through the myth and get to reality. There is a lot of good and bad out there, and I want to highlight both so that you can make more informed decisions on where to use the cloud concept both personally and professionally to help you achieve more with less…because that’s what the whole concept is about. Do more by spending less money, with less effort, and in less time.
I will be speaking on this topic at an exclusive breakfast seminar this month … to reserve your space please contact Shannon Nelson: firstname.lastname@example.org .
Picture Credit: Shannon Nelson
Two weeks ago, I had the honor of representing IDS at EMC World to receive our Velocity Services Quality Award. I say honor because this is one of those awards that really matters (in my mind) because it is solely based on Customer Feedback—the thing that ultimately drives our business. A little background on the award for anyone who is curious: Several years ago, EMC implemented a program called the Authorized Services Network (ASN). There were hundreds of resellers in North America certified to sell EMC, but only a select handful could qualify to be ASN-certified and actually perform EMC implementations for their customers. This program requires rigorous testing of multiple Pre-Sales and Post-Sales Engineers to prove that the company is dedicated to not just selling EMC equipment, but providing their customers the highest level of service with their engineering expertise.
Back in 2007, EMC decided to recognize the best of the best by creating an ASN Quality Award for the top implementation partner in North America, based completely on customer feedback. After a reseller performs an implementation for a customer, that customer receives a third-party survey asking how the implementation went, would they use the reseller again, would they recommend them to peers in the industry, etc. Based on those responses, the ASN Partners were ranked and IDS finished at the top of the list, receiving EMC’s first ever ASN Quality Award.
In 2008, EMC decided to open up the Award a bit and presented the award to two partners. In subsequent years, a few more Partners made the list as well. Fast forward to 2011. EMC changed the name of the award to the Velocity Services Quality (VSQ) Award but the concept is exactly the same.
This year, 14 partners received the honor at EMC World for their dedication to engineering excellence and customer satisfaction. They started the awards by naming the first-time winners, then two-time winners, etc. At the tail-end was IDS being announced as the only five-time winner of the prestigious award. To be named as the #1 Partner for the largest storage manufacturer in the world based entirely on Customer Satisfaction is a huge honor, and I was proud to be there accepting on behalf of the IDS team.
First off, I would like to say thank you to our customers. Your dedication to IDS and the services that we provide is what makes us great. We appreciate the long-term business Partnerships and look forward to many more years of joint prosperity.
To our Engineers: thank you for making this award possible! You work long hours at customer sites, study technical materials at night to keep your expertise at the highest possible level, and frequently spend time away from your families supporting the customers that ultimately give us these high marks. You are the lifeblood of this organization, and both we and our customers appreciate everything that you do.
And finally, to the other VSQ Award Winners this year, congratulations. It is an elite group to be in and I can appreciate all of the hard work that it takes to achieve this level of accomplishment. I look forward to seeing you at the award ceremony for many years to come … and, of course, always being the last man standing.
After weeks of marketing centered on “record-breaking performance,” EMC unveiled their new VNX and VNXe mid-range storage platforms this morning. Here are a few high-level notes about the release:
- The VNX is positioned as the next-generation of the CLARiiON (CX) and Celerra (NS) lines
- The VNXe is positioned as the next-generation of the AX and NX lines
- Both will still offer only a dual-controller architecture. I personally don’t see this changing any time soon since the VMAX already plays in the scale-out controller architecture arena.
- The backend is moving from 4Gb FC to four-lane 6Gb SAS
- Supported drive types will be EFD, SAS, and near-line SAS
- Software packages have been simplified via bundles and new array-based licensing models
The overall theme of the release is maintaining Enterprise-class high availability and ease of use while dramatically increasing performance along the way. This is being done via the updated SAS backend architecture as well as the more powerful Westmere processors in the Storage Processors and Data Movers. While these are all great advancements, the majority of my customers aren’t making decisions on storage based on scalability of hundreds of thousands of IOPS and 10GB/sec+ of throughput. It is more often that I see customers making decisions on efficiency, ease of use, and application integration.
That being said, what do I find the most exciting about the announcement that EMC made today?
All of the hype from EMC will continue to focus on the larger VNX platform, but everyone should really be paying attention to what is happening on the smaller VNXe platform. While the VNX still uses dedicated Storage Processors for SAN operations and dedicated Data Movers for NAS operations, its little brother VNXe is accomplishing both SAN and NAS via only two controllers. While this is commonplace for some other manufacturers, this is a totally new concept for EMC and gives us a taste of what the future may hold.
So given the fact that EMC runs totally different operating systems for SAN (FLARE) and NAS (DART), how are they finally getting to this single OS model? I haven’t been able to get anyone to specifically answer this yet, but piecing together bits of information, some assumptions can be made as to what they’re up to.
A Google search for EMC and CSX currently returns only about two meaningful articles, but they appear to provide information regarding where EMC is going long-term with their mid-range storage arrays. According to Steve Todd’s blog post here, he considers this little-known CSX technology to be the single most meaningful innovation that has happened inside of EMC over the last decade. That’s an awfully bold statement for something nobody has ever heard of. I highly recommend reading Steve’s post, as it is one of the more insightful posts I have seen from inside of EMC in a while.
So what is CSX?
With the limited amount of information publicly available on the technology, it would appear that EMC has created an execution environment that can invoke kernel services independent of the Operating System running on the hardware. Couple this with the fact that they have been working toward standardizing hardware across all of their platforms over the last five years, and this concept makes a lot of sense. Then, in order to take advantage of this concept, they have developed an API that is being published to all of the internal groups (Celerra, CLARiiON, Atmos, Centera, RecoverPoint, etc.) so that they can develop features that are platform-independent.
So it sounds like EMC has created an execution environment that handles fundamental things such as memory allocation, basic provisioning, drivers, etc. From there, they are layering services that previously depended on Operating Systems such as FLARE and DART on top of it via the published API. It doesn’t appear that EMC has created a full hypervisor, but more of an encapsulation environment for a new code base. If this is true, the idea of CSX does open up a lot of big possibilities, especially if we see it get pushed up the VNX line and above.
With all of the cool software packages and appliances that EMC has in the portfolio, what if you could run all of those as services on a common set of hardware and eliminate the need for disparate platforms? Ultimately this is where we all want to be, right? And with hardware being more and more commoditized each day, isn’t it in EMC’s best interest to focus on software as a true differentiator? Who would have thought…EMC becoming a software-focused company.
Could that be what the future holds? Nobody outside of Hopkinton probably knows the true answer, but on a day when EMC is yelling at the top of their lungs about more performance and bigger, badder, faster hardware, I’d be focusing on what they are doing on the software side—that’s where the really cool things are being very quietly done.
Now that EMC owns both Data Domain and Avamar, I am constantly being asked which technology is better. Before the Data Domain acquisition, it was tough to get a straight answer because the two deduplication giants were constantly slugging it out and slandering each other to try and find an edge and gain more market share. With the two technologies now living under the same umbrella, sometimes it is hard to tell where one technology ends and the other begins.
Is deduplication really all it’s cracked up to be?
With everyone in the industry talking about deduplication, you can’t go two minutes without hearing how great it is or some outlandish claim about deduplication rates. So the question is… is dedupe really all it’s cracked up to be? The answer isn’t really in the deduplication technology itself. It’s actually in the make-up of the data you’re looking to deduplicate. So how do you know if dedupe is the right technology for you?
Do you have a ton of highly-compressed images or multimedia files? These aren’t the ideal data types for deduplication.
Does your environment contain a lot of large databases like SQL Server, Oracle, and Exchange? If that is the case, dedupe can help, but not as much as those crazy marketing numbers say.
Do you have large File Servers? Lots of VMware? Remote offices you need to back up? This is where dedupe really shines and you can add some real efficiencies to your environment. It is also where those numbers like 200:1 or 500:1 come from, and they can actually be beaten in some cases.
Now of course your data doesn’t nicely fit into just one of those categories above. It likely spans two of them, if not all three. So we’re back to the original question…is dedupe the right technology for you? One of the keys to deploying an *effective* deduplication solution is to know where to deploy it, how to deploy it, and what to expect.
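The point that commonality lives in the data rather than in the technology can be sketched with a toy chunk-hashing model. This is a deliberate simplification for illustration only: the `dedupe_ratio` function is hypothetical, it uses fixed 4KB chunks, and real products such as Avamar use more sophisticated variable-length chunking.

```python
import hashlib
import os

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Estimate commonality by splitting data into fixed-size chunks,
    hashing each chunk, and counting how many chunks are duplicates."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha1(c).digest() for c in chunks}
    return len(chunks) / len(unique)

# Repetitive data (think nightly fulls of the same file server) dedupes well.
# The 32-byte pattern aligns exactly with the 4096-byte chunks, so every
# chunk hashes to the same value.
repetitive = b"nightly full backup of same file" * 12_800  # 400KB, 100 chunks
print(dedupe_ratio(repetitive))   # 100.0 -- all 100 chunks are identical

# Highly-compressed images and multimedia look like random bytes, so there
# is essentially no chunk-level commonality to find.
random_like = os.urandom(len(repetitive))
print(dedupe_ratio(random_like))  # ~1.0 -- nearly every chunk is unique
```

The same toy model also shows why marketing numbers vary so wildly: the ratio is entirely a property of the bytes you feed in, not of the hashing machinery.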
Just because you can deduplicate your live VMware or database environment doesn’t necessarily mean you should. There are a lot of implications to trying to deduplicate data that is frequently accessed and performance can severely suffer in some cases. While dedupe is a great technology, it can bring your environment to its knees if implemented incorrectly. I’ll address this in another post because that is a whole different topic. Today I want to focus on how to figure out if, and how, deduplication can benefit your business in the backup process.
Knowing where to deploy deduplication and what rates to expect can really only be determined through an assessment of your current environment. EMC has a great tool called the Commonality Assessment Tool (CAT) that will allow you to look at a subset of your data and see exactly what the commonality is. This tool can be downloaded from the IDS website for free here (click the link for the EMC CAT tool download).
So what is this tool and what can you expect from it? EMC offers a deduplication solution called Avamar, which is a backup software and backup-to-disk appliance all wrapped into one. The CAT Tool is essentially a modified Avamar client that will perform a simulated backup on your server(s) and instead of actually backing the data up, it just tracks what the deduplication rate is and how long the actual backup would have taken. I’ll quickly take you through the process of running this tool and show you how easy it is to figure out exactly how much commonality is in your data.
***One important thing to note before beginning is that the CAT Tool has the same impact on your system as a normal backup client. It is recommended to run it off-hours and not during your regular backup window.
- Download the CAT Tool from the IDS website (click for the deduplication rate test tool download). A link will be emailed to you where you can download a zip file containing the tool.
- Extract the zip file to C:\Avasst. The directory will contain avtar.exe and avasst.exe. *Note: This directory must exist or the tool will not run correctly.
- Open a command prompt by going to Start/Run and running cmd.exe.
- Browse to the CAT directory by typing “cd C:\Avasst”
- Run the CAT tool by typing “avasst”
- You will be prompted to select a folder to scan. In this example, I will scan the D Drive by entering “d:”. If you want to scan multiple folders at once, see the notes at the end.
- The first time you run the tool, you can expect it to take approximately 1 hour per 100GB of data and 1 hour per million files. However, subsequent runs will be much quicker due to deduplication.
- When the tool has completed running, it will display a completion summary.
- Now if you look in the c:\Avasst folder, you will see several files that are tracking the deduplication rates and backup times for your data. They are just raw data and need to be run through a tool in order to interpret the results.
- In order to see the full benefits of deduplication, you will want to run this tool against the same dataset several times (at least 3). You can also run it across several different datasets to see commonality across several servers. Since the commonality tracking is stored locally in the c:\Avasst folder, you will want to mount directories from other servers and scan them from this server across the network.
- When you have scanned your datasets, zip the results and send them to your IDS Engineer to have the results interpreted.
Some other notes on the CAT Tool:
- If you want to scan data from other servers, you can simply mount another server’s drive to a drive letter on the local server and scan that drive.
- In a real deduplication solution, all data will be deduplicated globally against other servers and backup sets. Since the assessment tool tracks deduplication locally, you will need to scan all datasets from the same server to see global deduplication benefits.
- The CAT Tool can easily be scheduled using the built-in Windows Scheduler. Those instructions are included in a Word document included with the CAT Tool download.
- If you want to scan multiple folders at once, you will need to create a silent file that contains the folders you want to scan. Simply create a file named “Silent” with no extension in the c:\Avasst folder. Inside of that file, just put a line for each drive or folder you want to scan.
**Note that you cannot end any entries with a backslash. For example, “C:” and “D:\Users” are valid, but “C:\” and “D:\Users\” are invalid.
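As an illustration, a Silent file that scans an entire drive plus two folders might look like the following (the folder names here are hypothetical examples; note that no entry ends in a backslash):

```
C:
D:\Users
E:\Shares
```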
What is an EMC/VMware Center of Excellence?
For several years, EMC and VMware have been working hand-in-hand to provide customers with solutions to drive costs out of the Data Center and provide the highest levels of performance and availability. Taking the next step to demonstrate their solutions for the virtual Data Center, EMC and VMware have teamed up with strategic Integrators to create eight Centers of Excellence across the United States. Integrated Data Storage is proud to be chosen as one of these Centers of Excellence and the first in the Midwest.
The IDS Center of Excellence is composed of a Demo Center, which showcases EMC and VMware’s leading technology solutions, and a staff of highly trained Engineers with field experience in the integrated solution offerings. The combination of VMware’s virtualization technology, EMC’s information infrastructure solutions, and IDS’s expertise in architecture and implementation creates a unique opportunity for customers to see working implementations of the solutions they can use to increase the return on technology investments.
The Demo Center contains a full lab of EMC and VMware technologies including:
- VMware Virtual Infrastructure with VMotion, DRS, and VM HA
- EMC Celerra Unified Storage platform
- EMC Celerra Virtual Storage Appliance (VSA)
- EMC Replication Manager for application integrated snaps and clones
- Ontrack Power Controls for Exchange single-message restore
- VMware backup with EMC Avamar for source-based de-duplicated backups
- VMware backup with EMC Networker and Disk Library 3D 1500 for target-based de-duplicated backups
- Disaster Recovery using EMC Celerra Replicator and VMware Site Recovery Manager
- Virtual Desktop Infrastructure with VMware View
- EMC Storage Viewer plug-in to view your EMC Storage through vCenter
With all of these technologies on display, customers can come in and witness how the solutions work in-person. Customers evaluating solutions for their virtual Data Center are encouraged to come in and test drive the technologies with the help of an IDS Engineer to see the EMC and VMware integration first-hand.