3 Reasons Why You Need a Cloud Compliance Policy Now


While the debate continues for most as to what the "Cloud" means, one point can't be argued: cloud models are already here and growing.

Whether one is talking about a fully hosted cloud model, with systems, networks and applications hosted at a third-party provider, or a hybrid model to address resource overflow or expansion, there are numerous cloud providers offering a myriad of options to choose from. The questions posed by these solutions span security, access, monitoring, compliance and SLAs.

As more departments within organizations look at the potential of cloud offerings, the time is here for organizations to address how to control these new resources—the reasons are no small matter.

Reason 1: Office Automation

Organizations have long searched for ways to place standard business applications outside the organization. Document collaboration and email seemed like a perfect fit. However, for multi-national organizations, there's a hidden dark side.

Some countries do not allow specific types of data to leave the bounds of the country. For example, if you are a UK-based company, or a US organization with a UK presence, emails and documents containing personal client and employee information may not be allowed to replicate to the US. I would argue that understanding the cloud provider's model and how it moves data is just as important as how it safeguards and offers redundancy within its own infrastructure. If your data is not managed and secured as specified by the law, you could have more to answer for than just the availability of your data.

“Part of our job as a cloud provider is not only to understand our customers’ data needs, but how our model impacts their business and what we can do to align the two,” states Justin Mescher, CTO of IDS.

There is no boilerplate set of questions to ask for every given scenario. The questions should be driven by the organization's business model and how its specific data-protection needs compare with what the cloud provider does with the data. If data is replicated, where is it replicated and how is it restored?

Reason 2: Test Development

One of the biggest drivers for cloud initiatives is development and testing of applications. Some developers have found it easier to develop applications in a hosted environment, rather than proceed through change control or specific documentation requesting testing resources and validation planning of applications on the corporate infrastructure.

Companies I have spoken to cite a lack of resources for their test/dev environments as being the main motivation for moving to the cloud. While this sounds like a reasonable solution to push development off to the cloud, what potentially is lacking is a sound test and validation plan to move an application from design to development to test to production.

John Squeo, Director of Strategic IT Innovation & Solutions Development at Vanguard Health Systems states, “If done properly, with the correct controls, the cloud offers us a real opportunity to quickly develop and test applications. Instead of weeks configuring infrastructure, we have cut that down to days.”

John further commented that, “While legacy Healthcare applications don’t port well to the cloud due to work flow and older hardware and OS requirements, most everything else migrates well.”

If the development group is the only group with access to the development data, the organization potentially loses its biggest asset: the intellectual property that put it in business in the first place. As stated above, "if done properly" includes a detailed life-cycle testing plan defining what the test criteria are, as well as who has access to test applications and data.

Reason 3: Data Security

Most organizations have spent much time developing policies and procedures around information security. When data is moved off site, the controls around data security, confidentiality and integrity become even more critical.

Justin Mescher, CTO of IDS adds, “While we have our own security measures to protect both our assets, as well as our customers, we work hand in hand with our customers to ensure we have the best security footprint for their needs.”

Financial institutions have followed the "know your customer, know your vendor" mentality for some time. Understanding the cloud provider's security model is key to developing a long-lasting relationship. This includes understanding and validating the controls they have in place for hiring support staff, how they manage the infrastructure containing your key systems and data, and whether they can deliver your required reporting. The consequence of not performing appropriate vendor oversight is additional exposure and risk.

Whether or not your senior management is planning on using the cloud, I guarantee you this: there are departments in your organization that are. The challenge now is defining an acceptable usage and governance policy. Don't be left on the outside, surprised one day when someone walks away with data you didn't know had left in the first place.

Photo credit: erdanziehungskraft via Flickr

Why, Oh Why To Do VDI?


I recently became a Twit on Twitter, and have been tweeting about my IT experiences with several new connections. In doing so, I came across a tweet about a contest to win some free training, specifically VMware View 5 Essentials from @TrainSignal – sweet!

A jump over to the link provided in the tweet explains that, to win, one or all of the questions below should be commented on in the blog post. Instead of commenting on that blog, why not address ALL of the questions in my own blog article at IDS?! Without further ado, let's jump right in to the questions:

Why are Virtual Desktop technologies important nowadays, in your opinion?

Are you kidding me?!

If you are using a desktop computer or workstation at work, or a laptop at home or work, you are well aware that technology moves so fast that updated versions are released as soon as you buy a "new" one. Not to mention that laptops usually come configured with what the vendor or manufacturer thinks you should be using, not with what is best, most efficient or fastest. More times than not, you are provided with what someone else thinks is best for the user. The reality is that only you, the user, know what you need, and if no one bothers to ask you, there can be a feeling of being trapped, having no options, or resignation, all of which tend to lead to the dreaded "buyer's remorse."

When you get the chance to use a virtual desktop, you finally get a “tuned-in” desktop experience similar to or better than the user experience that you have on the desktop or laptop from Dell, HP, IBM, Lenovo, Gateway, Fujitsu, Acer and so on.

Virtual desktops offer a "tuned" experience because architects design the infrastructure and solution end to end: from the operating system in the virtual desktop (Windows XP to Windows 7, soon Windows 8) to the right number of virtual CPUs (vCPUs), the capacity of guest memory, disk IOPS, network IOPS and everything else you wouldn't want to dive into the details of. A talented VDI architect will consider every single component when designing a virtual desktop solution, because the user experience matters; there is no selling them on the experience "next time." Chances are, if you have a negative experience the first time, you will never use a virtual desktop again, nor will you have anything good to say when the topic comes up at your neighborhood barbecue or pool party.

The virtual desktop is imperative because it drives the adoption of heads-up displays (HUDs) in vehicles, at home and in the workplace, as well as slimmer-interface tablet devices. Personally, when I think about the future of VDI, I envision expandable OLED flex screens with touch-based (scratch-resistant) interfaces that connect wirelessly to private or public cloud based virtual desktops. The virtual desktop is the next frontier, leaving behind the antiquated desktop experience that has been dictated to the consumer by vendors and manufacturers and simply does not give us what is needed the first time.

What are the most important features of VDI in your opinion?

Wow, the best features of VDI require a VIP membership into the exclusive VDI community. Seriously though, users and IT support staff are often the last to know the most important features, yet they are the first to be impacted when a solution is architected, because those two groups are the most in lock-step with the desktop user experience.

The most effective way for me to leave a lasting impression is to lay out the most important features in a couple of bullet statements:

  • Build a desktop in under 10 minutes – how about 3 minutes?
  • Save personal settings and recover them immediately after rebuilding a desktop.
  • Add more CPU or RAM to a virtual desktop quickly.
  • Recover from malware, spyware, junkware, adware, trojans, viruses, everything-ware – save money by just rebuilding in less than 10 minutes.
  • Access the desktop from anywhere, securely.
  • It just works, like your car's windshield!

That last point brings me to the most important part of VDI, that when architected, implemented and configured properly, it just works. My mantra in technology is “Technology should just work, so you don’t have to think about technology, freeing you up to just do what you do best!”

What should be improved in VDI technologies that are now on the market?

The best architects, solution providers and companies are the best because they understand the current value of a solution, in this case VDI, as well as the caveats and ask themselves this exact question. VDI has very important and incredibly functional features, but there is a ton of room for improvement.

So, let me answer this one question with two different hats on: one as a VDI Architect and the other as a VDI User. My improvement comments are based on the solution provided by VMware, as I am most familiar with VMware View. In my opinion, no other vendor in the current VDI market can match the functionality, ease of management and speed of the VMware View solution.

As a VDI Architect, I am looking for VMware to improve their VMware View product by addressing the below items:

  • Separate VMware View Composer from being on the VMware vCenter Server.
  • Make ALL of the VMware View infrastructure applications, appliances and components 64-bit.
  • Figure out and support Linux-based linked-clones. (The Ubuntu distribution is my preference.)
  • Get rid of the VMware View Client application – this is 2012.
  • Provide a fully functional web-based or even .hta based access to the VMware View virtual desktop that is secure and simple.
  • Build database compatibility with MySQL, so there is a robust FREE alternative to use.
  • Build Ruby-on-Rails access to manage the VMware View solution and database. Flash doesn’t work on my iPad!

As a VDI User, I am looking for VMware to improve:

  • Access to my virtual desktop without installing another application that requires "administrator" rights.
  • Fix ThinPrint and peripheral compatibility or provide a clearer guide for what is supported in USB redirection.
  • Support USB 3.0 – I don’t care that my network or Internet connection cannot handle the speed – I want the sticker that says that the solution is USB 3.0 compatible and that I could get those speeds if I use a private cloud based VDI solution.
  • Tell me that you will be supporting the Thunderbolt interface and follow through within a year.
  • Support web-cams, I don’t want to know about why it is difficult, I just want it to work.
  • Support Ubuntu Linux-based virtual desktops.

In summary, you never know what you will find when using social media. The smallest of tweets or the longest of blog articles can elicit a thought that will provoke either a transformation in process or action in piloting a solution. If you are looking to pilot a VDI solution, look no further… shoot me an email or contact Integrated Data Storage to schedule a time to sit down and talk about how we can make technology “just work” in your datacenter!  Trust me when I say, your users will love you after you implement a VDI solution.

Photo Credit: colinkinner

My Personal Journey To The Cloud: From Angry Birds to Business Critical Applications


Thinking back on it, I can very specifically remember when I started to really care about “The Cloud” and how drastically it has changed my current way of thinking about any services that are provided to me. Personally, the moment of clarity on cloud came shortly after I got both my iPhone and iPad and was becoming engrossed in the plethora of applications available to me. Everything from file sharing and trip planning to Angry Birds and Words with Friends … I was overwhelmed with the amount of things I could accomplish from my new mobile devices and how less dependent I was becoming on my physical location, or the specific device I was using, but completely dependent on the applications that I used on a day-to-day basis. Now I don’t care if I’m on my iPad at the beach or at home on my computer as long as I can access applications like TripIt or Dropbox because I know my information will be there regardless of my location.

As I became more used to this concept, I quickly became an application snob and wouldn’t consider any application that wouldn’t allow me cross-platform access to use from many (or all) of my devices. What good is storing my information in an application on my iPhone if I can’t access it from my iPad or home computer? As this concept was ingrained, I became intolerant of any applications that wouldn’t sync without my manual interaction. If I had to sync via a cable or a third party service, it was too inconvenient and would render the application useless to me in most cases. I needed applications that would make all connectivity and access magically happen behind the scenes, while providing me with the most seamless and simplistic user interface possible. Without even knowing it, I had become addicted to the cloud.

Cloud takes the emphasis away from infrastructure and puts it back where it should be: on the application. Do I, as a consumer, have anything to benefit from creating a grand infrastructure at home where my PC, iPhone, iPad, Android phone, and Mac can talk to one another? I could certainly develop some sort of complex scheme with a network of sync cables and custom-written software to interface between all of these different devices …

But how would I manage it? How would I maintain it as the devices and applications change? How would I ensure redundancy in all of the pieces so that a hardware or software failure wouldn’t take down the infrastructure that would become critical to my day-to-day activities? And how would I fund this venture?

I don’t want to worry about all of those things. I want a service … or a utility. I want something I can turn on and off and pay for only when I use it. I want someone else to maintain it for me and provide me SLAs so I don’t have to worry about the logistics on the backend. Very quickly I became a paying customer of Hulu, Netflix, Evernote, Dropbox, TripIt, LinkedIn, and a variety of other service providers. They provide me with the applications I require to solve the needs I have on a day-to-day basis. The beautiful part is that I don’t ever have to worry about anything but the application and the information that I put into it. Everything else is taken care of for me as part of a monthly or annual fee. I’m now free to access my data from anywhere, anytime, from any device and focus on what really matters to me.

If you think about it, this concept isn’t at all foreign to the business world. How many businesses out there really make their money from creating a sophisticated backend infrastructure and mechanisms for accessing that infrastructure? Sure, there are high-frequency trading firms and service providers that actually do make their money based on this. But the majority of businesses today run complex and expensive infrastructures simply because that is what their predecessors have handed down to them and they have no choice but to maintain it.

Why not shift that mindset and start considering a service or utility-based model? Why spend millions of dollars building a new state-of-the-art data center when they already exist all over the world and you can leverage them for an annual fee? Why not spend your time developing your applications and intellectual property, which are more likely to be the secret to your company's success and profitability, and let someone else deal with the logistics of the backend?

This is what the cloud means to business right now. Is it perfect for everyone? Not even close. And unfortunately the industry is full of misleading cloud references because it is the biggest buzzword since "virtualization" and everyone wants to ride the wave. Providing a cloud for businesses is a very complex concept and requires a tremendous amount of strategy, vision, and security to be successful. If I'm TripIt and I lose your travel information while you're leveraging my free service, do you really have a right to complain? If you're an insurance company and you pay me thousands of dollars per month to securely house your customer records and I lose some of them, that's a whole different ballgame. And unfortunately there have been so many instances of downtime, lost data, and leaked personal information that the cloud seems to be moving from a white fluffy cloud surrounded by sunshine to an ominous gray cloud that brings bad weather and destruction.

The focus of my next few blogs will be on the realities of the cloud concept and how to sort through the myth and get to reality. There is a lot of good and bad out there and I want to highlight both so that you can make more informed decisions on where to use the cloud concept both personally and professionally to help you achieve more with less…because that’s what the whole concept is about. Do more by spending less money, with less effort, and less time.

I will be speaking on this topic at an exclusive breakfast seminar this month … to reserve your space, please contact Shannon Nelson.

Picture Credit: Shannon Nelson

Do You Learn From Data Breaches And Disasters Or Observe Them?


How many articles or blog posts have you read that talked about the “lessons we learned” from 9/11, the Japanese earthquake/tsunami, the Joplin tornado, Hurricane Katrina, or <insert disastrous event here>? I see them all the time, and after reading a very interesting article in the Winter issue of the Disaster Recovery Journal (you may have to register to view the full article), I got to thinking about this concept.

What is the indication that we have learned something? The word learn has several definitions, but my favorite (thanks to Dictionary.com) is this:

to gain (a habit, mannerism, etc.) by experience, exposure to example, or the like; acquire …

If you learn something, you gain a new habit or mannerism; in other words, you change something.

What does it mean to observe? Again, from Dictionary.com:

to regard with attention, especially so as to see or learn something …

Just notice the difference. Learning means to take action; observing means to watch so you can learn. This really hits home with me and how I talk to my customers, because we talk A LOT about all of the lessons we have learned from various disasters. I don't think it's just me, either. Do a Google search on the phrase "lessons learned from hurricane Katrina" and you get 495,000 hits. Do a search on "lessons learned from Japanese tsunami" and you get 2.64 million hits. This gets talked about A LOT.

But how much are we really learning? After Katrina, how many of you proactively, objectively assessed or had someone assess your ability to maintain a revenue stream if a debilitating disaster struck your center of operations, whatever your business is? How many of you looked at what happened in Japan, or in Joplin, MO, and said: if that happened to us, we’d be able to sustain our business and we aren’t just fooling ourselves?

Let’s put this in a less dramatic and more regularly occurring context. How many of you saw the completely insane events surrounding the breach of HBGary and actually DID SOMETHING to change behavior or build new habits to ensure you didn’t suffer a similar fate? Many of us observed the event and were aghast at its simplicity of execution and the thoroughness with which information was exposed, but how many people actually changed the way their security is addressed and learned from the event? Have you looked at the ten-year breach at Nortel, or the data breach at Symantec, and set in motion a course of events in your own organization that will do everything possible to prevent similar issues?

These problems are not going away. They are becoming more and more prevalent, and they are not solely the problem of global Fortune 500 companies. Any organization that does any type of business has data that could be useful for nefarious purposes in the wrong hands. It is our responsibility as stewards of the data to learn the lessons and take action to secure and protect our data as though it were our money — because it is.

Photo Credit: Cherice

To The Cloud! The Reality Behind The Buzzword


I always chuckle when I think back to those Microsoft Windows Live commercials where they exclaim "To the Cloud!" like they're superheroes. In 2006-2007, "Cloud" was an overused buzzword with no official meaning; a lot of people were talking about cloud computing or putting things in the cloud, but no one could actually articulate what that meant in simple terms or how it would work.

A real understanding and documentation of cloud computing probably didn't come together in the technology community until mid-to-late 2008.

Today is a much different story. This year Gartner reported that:

nearly one third of organizations either already use or plan to use cloud or software-as-a-service (SaaS) offerings to augment their core business…

It is truly amazing to see how much this segment has matured in such a short period. We’re well past the buzzword stage and “The Cloud” is a reality. As we change the nature and meaning of the traditional infrastructure, we also need to ensure that the way your organization approaches security changes with it.

Fundamentally, we cannot implement cloud security the same way we implement traditional security. The biggest difference is that some of the infrastructure components and computational resources are owned and operated by an outside third party, and that third party may also host multiple organizations together on a multi-tenant platform.

To break the buzzword down in terms of cloud + security, here are the three best steps to help you develop a cloud strategy while ensuring that security is involved to minimize risk:

Get Involved

Security professionals should be involved early in the process of choosing a cloud vendor, with the focus on the CIA triad of information security: Confidentiality, Integrity and Availability. Concerns about regulatory compliance, controls and service level agreements can be dealt with up front to quickly approve or disqualify vendors.

It's Still Your Data

You know what is best for your company and understand how policies and regulations affect your business. It's not reasonable to expect your provider to fully understand how your business should be governed. You are ultimately responsible for the protection of your data and for ensuring that your provider can implement the best and most necessary security measures.

Continuously Assess Risk

It's important to identify the data that will be migrated. Does it make sense to migrate credit card data, sensitive information or personally identifiable information? If so, what measures will you put in place to ensure that this information continues to be protected once you migrate it to the cloud? How will you manage this data differently? What metrics around security controls will you use to report to audit and compliance?

These questions, plus many more, will help you assess where your risk is. As each question is answered, document the answer in your policies and procedures going forward.

Photo Credit: fifikins

To Snapshot Or Not To Snapshot? That Is The Question When Leveraging VNX Unified File Systems


For those of you who are leveraging VNX Unified File systems, were you aware that you have the ability to checkpoint your file systems?

Checkpoints are point-in-time copies of your file system, and the VNX gives you the ability to automate the checkpoint process. Checkpoints can run every hour, or on any schedule you designate, and be kept for whatever length of time is necessary (assuming, of course, that your data center has enough space available in the file system).

Checkpoints by default are read-only and are used to revert files, directories and/or the entire file system to a single point in time. However, you can create writable checkpoints, which allow you to snap a file system, export it, and test actual production data without affecting front-end production.

VNX checkpoints also leverage Microsoft VSS, allowing users to restore their files to previous points created by the VNX. With this integration, users can restore their own files, avoiding the usual calls from users who have accidentally corrupted or deleted them. There are some concerns as to how big checkpoints can get: the VNX will dynamically grow checkpoint storage based on how long you keep checkpoints and how many you take on a daily basis. Typically, the most a checkpoint will consume is 20% of the file system size, and even that percentage depends on how much data you have and how frequently the data changes.
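
To put that sizing rule in perspective, here is a rough back-of-the-envelope sketch. The change rate, retention period and the 20% ceiling below are illustrative assumptions for a hypothetical file system, not EMC-published guidance:

```python
def checkpoint_space_gb(fs_size_gb, daily_change_rate, retention_days, cap_pct=0.20):
    """Rough estimate of space consumed by checkpoints.

    Checkpoint storage grows with how much data changes per day and how
    long checkpoints are retained, but in practice tends to top out
    around a fraction (here 20%) of the file system size.
    """
    estimated = fs_size_gb * daily_change_rate * retention_days
    cap = fs_size_gb * cap_pct
    return min(estimated, cap)

# A 10 TB file system with 1% daily change and 14 days of retention
print(round(checkpoint_space_gb(10_240, 0.01, 14), 1))  # 1433.6 GB, under the 2,048 GB cap
```

If the change rate were 5% per day instead, the raw estimate (7,168 GB) would blow past the ceiling, which is why retention and change rate are the two knobs worth watching.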

For file systems larger than 16TB, achieving successful backups can be a difficult task. With NDMP (Network Data Management Protocol) integration, you are able to back up the checkpoints and store just the changes instead of the entire file system.
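
The arithmetic behind that is simple but dramatic. The file system size and daily change rate below are hypothetical, just to show the scale of the savings:

```python
def ndmp_transfer_gb(fs_size_gb, daily_change_rate, full_backup=False):
    """Data moved per backup job: a full dump copies the whole file
    system, while a checkpoint-based incremental moves only the data
    that changed since the last backup."""
    return fs_size_gb if full_backup else fs_size_gb * daily_change_rate

fs_gb = 16 * 1024  # a 16 TB file system, in GB
print(ndmp_transfer_gb(fs_gb, 0.02, full_backup=True))  # 16384 GB per full backup
print(ndmp_transfer_gb(fs_gb, 0.02))                    # 327.68 GB per incremental
```

At a 2% daily change rate, each incremental moves roughly 1/50th of what a full dump would, which is what turns an impossible backup window into a manageable one.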

Take note that replicating file systems to other VNX arrays will carry your checkpoints over, giving you an off-site copy of the checkpoints made on the production file system. Backups of larger file systems can become an extremely difficult and time-consuming job; by leveraging VNX Replicator and checkpoints, you gain the ability to manage the availability of your data from any point in time you choose.

Photo Credit: Irargerich

Top 3 Security Resolutions For 2012: Moving Forward From “The Year Of The Breach”

I always feel a sense of renewal with the turn of the calendar. Many people use this time to set new goals for the new year and take the opportunity to get re-grounded and move toward accomplishing their goals. Yet, as I reflect on the security landscape in 2011, aptly named “The Year of the Breach”; I thought it would be a perfect time to make some resolutions for 2012 that everyone with any data to protect could benefit from.


1. Focus More on Security and Not Just on Compliance

On a day-to-day basis I speak to a wide range of companies and often see organizations so concerned about checking the box for compliance that they lose sight of actually minimizing risk and protecting data. Regardless of the regulation in the long list of alphabet soup (SOX, GLBA, PCI, HIPAA), maintaining compliance is a daunting task.

As a security practitioner, limiting every business's exposure has always been my key concern. How can I enable the business while also minimizing risk? With this mindset, compliance helps ensure that I am doing my due diligence, and that all of my documentation is in order to prove it, keeping our customers and stakeholders happy and protected.

2. Ready Yourself for the Mobile Device Explosion

The iPad is a pretty cool device. I'm no Apple Fanboy by any stretch, but this tablet perfectly bridges the gap between my smartphone and my laptop. I am not the only one seeing these devices become more prevalent in the workforce. People are using them to take notes in meetings and give presentations, yet users are not driving the business to support these devices; many organizations instead simply allow their employees to purchase their own devices and use them on corporate networks.

If employees can work remotely and be happier and more efficient with these devices, security admins can't and shouldn't stand in the way. We must focus on protecting these endpoints to ensure they don't get infected with malware. We've also got to protect the data on these devices to ensure that corporate data isn't misused or stolen when spread over so many variations of devices.

3. Play Offense, Not Defense

I've worked in IT Security for a long time, and unfortunately along the way I've seen and heard a lot of things that I wish I hadn't. Yet, I can't afford to have my head in the sand regarding security. I need to have my finger on the pulse of the organization and understand what's happening in the business. It's important that I also understand how data is being used and why. Once this happens, I can put controls in place and be in a better position to recognize when something is abnormal. With the prevalence of botnets and other malware, it is taking organizations 4-16 weeks to even realize they have been compromised. Once a breach surfaces, they have to play catch-up to assess the damage, clean the infection and plug the holes that were found. Breaches can be stopped before they start if the company and/or security admin are adamant about being on the offense.

These are my top three resolutions for 2012 - what is your list? I invite you to share your security resolutions in the comment section below; I'd love to know what your organization is focused on!

Photo Credit: simplyla

The Shifting IT Workforce Paradigm Part II: Why Don’t YOU Know How “IT” Works In Your Organization?


When I write about CIOs taking an increased business-oriented stance in their jobs, I sometimes forget that without a team of people who are both willing and able to do the same, their ability to get out of the data center and into the board room is drastically hampered.

I work with a company from time to time that embodies, for me, the "nirvana state" of IT: they know how to increase revenue for the organization. They do this while still maintaining focus on IT's other two jobs — avoiding risk and reducing cost. How do they accomplish this? They know how their business works, and they know how their business uses its applications. The guys in this IT shop can tell you precisely how many IOPS any type of end-customer business transaction will create. They know that if they can do something with their network, their code, and/or their gear that gives an additional I/O or CPU tick back to the applications, they can serve X more transactions, which translates into Y dollars in revenue, and if they can do that without buying anything, it creates P profit.

The guys I work with aren’t the CIO, although I do have relationships with the COO, VP of IT, etc. To clarify: these aren’t business analysts who crossed over into IT from the business to provide this insight. These are the developers, infrastructure guys, security specialists, etc. At this point, I think if I asked the administrative assistant who greets me at the door every visit, she’d be able to tell me how her job translates into the business process and how it affects revenue.

Some might say that since this particular company is a custom development shop, this should be easy: after all, they have to know the business processes in order to write the code that drives them. Yes and no. I think most people who make that statement haven’t closely examined the developers coming out of college these days. I have a number of nieces, nephews, and children of close friends who are all going into IT, and let me tell you, the material in the development classes these kids are taking isn’t about optimizing code for a business process, and it isn’t about the utility of IT.

It’s taught more like a foreign language than like the “why you do this” of things, so you’re not getting this kind of thinking out of the current generation of graduates automatically. It comes from caring. In my estimation, it comes from those at the top giving people enough latitude to make intelligent decisions and demanding that they understand what the company is doing and, more importantly, where it wants to be.

They set goals, clarify those goals, and make it clear that everyone in the organization can and does play a role in achieving them. These guys don’t go to work every day wondering why they are sitting in the cubicle, behind the desk, with eight different colored lists on their whiteboard around a couple of intricately complicated diagrams depicting a new code release. They aren’t cogs in a machine, and they’re not made to feel as though they are. If you want to be a cog, you don’t fit in this org; pretty simple. That’s the impression I get of them, anyway.

The other important piece of this is that they don’t trust their vendors. That’s probably the wrong way to say it. It’s more about questioning everything from their partners, taking nothing for granted, and demanding that their vendors explain how everything works so they understand how they plug into it and then take advantage of it. They don’t buy technology for the sake of buying technology. If older gear works, they keep the older gear, but they understand the ceiling of that gear, and before they hit it, they acquire new. They don’t always buy the cheapest, but they buy the gear that will drive greater profitability for the business.

That’s how IT should be buying. Not a cost matrix of four different vendors who are all fighting to be the apple the others are compared to. Rather – which solution will help me be more profitable as a business because I can drive more customer transactions through the system? Of course, 99% of organizations I deal with couldn’t tell you what the cost of a business transaction is. Probably 90% of them couldn’t tell you what the business transaction of their organization looks like.

These guys aren’t perfect; they have holes. They are probably staffed too thin to reach peak efficiency, and they could take advantage of some newer technologies to be more effective. They could probably use a little more process in a few areas. But at the end of the day, they get it. They get that IT matters, they get that information is the linchpin to their business, and they get that if the people who work in the organization care, the organization is better. They understand that their business is unique and that they have a limited ability to stay ahead of the much larger companies in their field; thus they must innovate, but never stray too far from their foundation, or the core business will suffer.

It’s refreshing to work with a company like this. I wish there were more stories like this organization and that the trade rags would highlight them more prominently. They deserve a lot of credit for how they operate and what they look to IT to do for them.

Even though I can’t name them here I’ll just say good job guys, keep it up, and thanks for working with us.

Photo Credit: comedy_nose


Following “The Year of the Breach” IT Security Spending Is On The Rise

By | Backup, Data Loss Prevention, Disaster Recovery, RSA, Security, Virtualization | No Comments

In IT circles, 2011 is now known as “The Year of the Breach”. Major companies such as RSA, Sony, Epsilon, PBS, and Citigroup experienced serious high-profile attacks, which raises the question: if huge multi-million-dollar players like these are being breached, what does that mean for my company? How can I take adequate precautions to ensure that I’m protecting my organization’s data?

If you’ve asked yourself these questions, you’re in good company. A recent study released by TheInfoPro states that:
37% of information security professionals are planning to increase their security spending in 2012.
In light of the recent security breaches, as well as the increased prevalence of mobile devices in the workplace, IT security is currently top of mind for many organizations. In fact, at most of the companies IDS is working with, I’m also seeing executives take more of an interest in IT security. CEOs and CIOs are gaining a better understanding of technology and of what is necessary to improve the company’s security position going forward. This is a huge win for security practitioners and administrators because they are now able to get the top-level buy-in needed to make important investments in infrastructure. IT security is fast becoming part of the conversation when making business decisions.
I expect IT infrastructure to continue to change rapidly as virtualization grows and cloud-based infrastructures mature. We’re also dealing with an increasingly mobile workforce, where employees are using their own laptops, smartphones and tablets instead of those issued by the company. Protection of these assets becomes even more important as compliance regulations grow stricter and true enforcement begins.
Some of the technologies that grew in 2011, and which I foresee growing further in 2012, include Data Loss Prevention, application-aware firewalls and enterprise Governance, Risk and Compliance. Each of these technologies focuses on protecting sensitive information and ensuring that authorized individuals use it responsibly. Moving into 2012, my security crystal ball tells me that everyone, from the top level down, will increase not only their security spend but, most importantly, their awareness of IT security and of just how much their organization’s data is worth protecting.
Photo Credit: Don Hankins

What Happens When You Poke A Large Bear (NetApp SnapMirror) And An Aggressive Wolf (EMC RecoverPoint)?

By | Backup, Clariion, Data Loss Prevention, Deduplication, Disaster Recovery, EMC, NetApp, Replication, Security, Storage | No Comments

This month I will take an objective look at two competitive data replication technologies – NetApp SnapMirror and EMC RecoverPoint. My intent is not to create a technology war, but I do realize that I am poking a rather large bear and an aggressive wolf with a sharp stick.

A quick review of both technologies:

SnapMirror:

  • NetApp’s controller-based replication technology.
  • Leverages the snapshot technology that is fundamentally part of the WAFL file system.
  • Establishes a baseline image, copies it to a remote (or partner local) filer and then updates it incrementally in a semi-synchronous or asynchronous (scheduled) fashion.

RecoverPoint:

  • EMC’s heterogeneous, fabric-layer, journaled replication technology.
  • Leverages a splitter driver at the array controller, fabric switch, and/or host layer to split writes from a LUN or group of LUNs to a replication appliance cluster.
  • The split writes are written to a journal and then applied to the target volume(s) while preserving write-order fidelity.
SnapMirror consistency is based on the volume or qtree being replicated. If the volume contains multiple qtrees or LUNs, those will be replicated in a consistent fashion. To get multiple volumes replicated consistently, you will need to quiesce the applications or hosts accessing each volume, take snapshots of all the volumes, and then SnapMirror those snapshots. An effective way to automate this process is to leverage SnapManager.
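For scheduled (asynchronous) SnapMirror, the cadence is traditionally defined in /etc/snapmirror.conf on the destination filer. A minimal 7-Mode-style sketch, with hypothetical filer and volume names:

```
# /etc/snapmirror.conf on the destination filer (7-Mode syntax).
# Fields: source  destination  arguments  schedule (minute hour day-of-month day-of-week)
filer-a:vol_mail  filer-b:vol_mail_dr  -  0,30 * * *
filer-a:vol_db    filer-b:vol_db_dr    -  0 23 * *
```

The “-” accepts default transfer arguments; the first entry updates every half hour, the second nightly at 23:00.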

After the initial synchronization, SnapMirror targets are accessible read-only. This provides an effective source volume for backups to disk (SnapVault) or tape. The targets are not read/write accessible, though, unless the SnapMirror relationship is broken or FlexClone is leveraged to make a read/write copy of the target. The granularity of replication and recovery is based on a schedule (standard SnapMirror) or on continual, semi-synchronous replication.

When failing over, the SnapMirror relationship is simply broken and the volume is brought online. This makes DR failover testing, and even site-to-site migrations, a fairly simple task. I’ve found that many people use this functionality as much for migration as for data protection or Disaster Recovery. Failing back to a production site is simply a matter of off-lining the original source, reversing the replication, and then failing back once complete.
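On 7-Mode controllers, that failover/failback sequence maps to a handful of console commands. A hedged sketch (the filer and volume names here are made up):

```
# On the DR filer: stop incoming updates, then break the mirror so the
# destination volume comes online read/write.
dr-filer> snapmirror quiesce vol_db_dr
dr-filer> snapmirror break vol_db_dr

# To fail back: resynchronize in the reverse direction from the DR copy,
# then break and reverse once more when production is current again.
prod-filer> snapmirror resync -S dr-filer:vol_db_dr prod-filer:vol_db
```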

In terms of interface, SnapMirror is traditionally managed through configuration files and the CLI. However, the latest version of OnCommand System Manager includes an intuitive, easy-to-use interface for setting up and managing SnapMirror connections and relationships.

RecoverPoint is like TiVo® for block storage. It continuously records incoming write changes to individual LUNs or groups of LUNs in a logical container aptly called a consistency group. The writes are tracked by a splitter driver that can live on the source host, in the fabric switch, or on a Clariion (VNX) or Symmetrix (VMAXe only today) array. The host splitter driver enables replication between non-EMC and EMC arrays (check the ESM for the latest support notes).

The split write IO with RecoverPoint is sent to a cluster of appliances that package, compress and de-duplicate the data, then send it over a WAN IP link or a local fibre channel link. The target RecoverPoint appliance then writes the data to the journal. The journaled writes are applied to the target volume as time and system resources permit, and they are retained as long as there is capacity in the journal volume, so the LUN(s) in the consistency group can be rewound to any retained point in time.
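To make the “TiVo for block storage” idea concrete, here is a toy Python sketch of a write journal with point-in-time rewind. This is purely illustrative (not EMC code, and it ignores compression, deduplication and journal distribution): each split write lands in a journal with a timestamp, and materializing an image at time T just replays the journal, in write order, up to T.

```python
class JournaledVolume:
    """Toy model of journaled replication with any-point-in-time recovery."""

    def __init__(self, size):
        self.size = size
        self.journal = []  # list of (timestamp, offset, data) entries

    def record_write(self, ts, offset, data):
        """A split write arrives at the appliance and is recorded in the journal."""
        self.journal.append((ts, offset, data))

    def image_at(self, point_in_time):
        """Materialize the volume as it looked at the given point in time."""
        blocks = bytearray(self.size)
        # Replay in timestamp order to preserve write-order fidelity.
        for ts, offset, data in sorted(self.journal):
            if ts > point_in_time:
                break
            blocks[offset:offset + len(data)] = data
        return bytes(blocks)


vol = JournaledVolume(8)
vol.record_write(1, 0, b"AAAA")
vol.record_write(2, 4, b"BBBB")
vol.record_write(3, 0, b"CCCC")  # later write overwrites the first region

print(vol.image_at(2))  # b'AAAABBBB' -- rewound to before the overwrite
print(vol.image_at(3))  # b'CCCCBBBB' -- after the overwrite
```

The real product keeps the journal on dedicated journal volumes and ages entries out as capacity fills, which is why the rewind window depends on journal sizing.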

In addition to remote replication, RecoverPoint can also replicate to local storage. This option is available as a standalone feature or in conjunction with remote replication.

RecoverPoint has a standalone Java application that can be used to manage all of the configuration and operational features. There is also integration for management of consistency groups by Microsoft Cluster Services and VMware Site Recovery Manager. For application-consistent “snapshots” (RecoverPoint calls them “bookmarks”), EMC Replication Manager or the KVSS command-line utilities can be leveraged. Recently, a “light” version of the management tool has been integrated into the Clariion/VNX Unisphere management suite.

So, sharpening up the stick … NetApp SnapMirror is a simple-to-use tool that leverages the strengths of the WAFL architecture to replicate NetApp volumes (file systems) and update them either continuously or on a scheduled basis using the built-in snapshot technology. Recent enhancements to System Manager have made it much simpler to use, but it is limited to NetApp controllers. It can replicate SAN volumes (iSCSI or FC LUNs) in NetApp environments, as they are essentially single files within a volume or qtree.

RecoverPoint is a block-based SAN replication tool that splits writes and can recover to any point in time that exists in the journal volume. It is not built into the array; it is a separate appliance cluster that sits in the fabric and leverages array-, fabric- or host-based splitters. I would make the case that RecoverPoint is a much more sophisticated block-based replication tool that provides a finer level of recoverable granularity, at the expense of being more complicated.

 Photo Credit: madcowk