
Choosing the Right Enterprise Mobile Device Management Solution


Recently there’s been a lot of buzz in the technology marketplace about Mobile Device Management, or MDM. It is certainly one of the hottest topics in Information Technology management today, and it has become even more relevant with the heavy adoption of Bring Your Own Device (BYOD) models by many organizations. With all the information surrounding MDM to digest, the big question is: what is the right MDM product for your organization?

There are a lot of factors to consider when investigating any new technology solution and it’s easy to get bogged down in the process. Today I’d like to walk you through the right steps to take when determining the best Mobile Device Management solution for your company.

5 Steps to Choosing the Right MDM Solution for Your Organization

  1. Check in with the Gartner Magic Quadrant. I always start with the Gartner Magic Quadrant, as it has long been a trusted resource for evaluating technologies. Gartner’s 2014 leaders in EMM are AirWatch, MobileIron, Citrix, Good Technology and IBM.
  2. Consider existing technologies. Take the time to consider the technologies already in your environment. For example, if you have Citrix XenDesktop or XenApp, you probably use NetScaler devices to access these environments from the Internet. Depending on your existing technologies and your considerations for hardware, licenses and optimization, you can determine which solutions may be a good fit.
  3. Determine requirements. Start asking questions about the requirements you need to meet. For example, do you want to provide access to company resources only while users are on premises? If so, you will need to determine whether the product has geo-fencing capabilities.
  4. Analyze the options. Once you have narrowed your focus to two or three choices, it’s time to do some additional homework. Consult the trusted solution advisors you have worked with in the past, and look at industry postings about real-world experiences with the products.
  5. Complete a proof of concept. When you are ready to make a decision and have your ideal solution in mind, it’s time to do a proof of concept. For this type of project, a POC is highly recommended to ensure that the product will work in your environment, meet all of your requirements and perform to expectation. From there, you can expand into a larger pilot group, and finally roll into production.

It’s important to ensure that you make proper business decisions when it comes to your strategy for managing and controlling devices accessing company resources. In addition to the steps listed above, you can also help prevent mistakes by running business decisions by a focus group. This way you know how receptive the user community will be when you implement products that manage personal and company owned devices. Ultimately this strategy helps you communicate properly with the user group and gives you the opportunity to get them excited about new technologies.


Why VDI Is So Hard and What To Do About It


Rapid consumerization, coupled with the availability of powerful, always-connected mobile devices and the capability for anytime, anywhere access to applications and data, is fundamentally transforming the relationship between IT and the end-user community for most of our customers.

IT departments are now faced with the choice to manage an incredible diversity of new devices and access channels, as well as the traditional desktops in the old way, or get out of the device management business and instead deliver IT services to the end-user in a way that aligns with changing expectations. Increasingly, our customers are turning to server-hosted virtual desktop solutions—which provide secure desktop environments accessible from nearly any device—to help simplify the problem. This strategy, coupled with Mobile Device Management tools, helps to enable BYOD and BYOC initiatives, allowing IT to provide a standardized corporate desktop to nearly any device while maintaining control.

However, virtual desktop infrastructure (VDI) projects are not without risk. This seems to be well understood, because it’s been the “year of the virtual desktop” for about four years now (actually, I’ve lost count). But we’ve seen and heard of too many VDI projects that have failed due to an imperfect understanding of the related design considerations or a lack of data-driven, fact-based decision making.

There is really only one reason VDI projects fail: The provided solution fails to meet or exceed end-user expectations. Everything else can be rationalized – for example as an operational expense reduction, capital expense avoidance, or security improvement. But a CIO who fails to meet end user expectations will either have poor adoption, decreased productivity, or an outright mutiny on his/her hands.

Meeting end-user expectations is intimately related to storage performance. That is to say, end user expectations have already been set by the performance of devices they have access to today. That may be a corporate desktop with a relatively slow SATA hard drive or a MacBook Air with an SSD drive. Both deliver dedicated I/O and consistent application latency. Furthermore, the desktop OS is written with a couple of salient underlying assumptions – that the OS doesn’t have to be a “nice neighbor” in terms of access to CPU, Memory, or Disk, and that the foreground processes should get access to any resources available.

Contrast that with what we’re trying to do in a VDI environment. The goal is to cram as many of these resource-hungry little buggers on a server as you can in order to keep your cost per desktop lower than buying and operating new physical desktops.

Now, in the “traditional” VDI architecture, the physical host must access a shared pool of disk across a storage area network, which adds latency. Furthermore, those VDI sessions are little resource piranhas (credit: Atlantis Computing for the piranha metaphor). VDI workloads will chew up as many IOPS as you throw at them with no regard for their neighbors. This is also why many of our customers choose to purchase a separate array for VDI in order to segregate the workload. This way, VDI workloads don’t impact the performance of critical server workloads!

But the real trouble is that most VDI environments we’ve evaluated average a whopping 80% random write at an average block size of 4-8K.

So why is this important? In order to meet end-user expectations, we must provide sufficient IO bandwidth at sufficiently low latency. But most shared storage arrays should not be sized based on front-end IOPS requirements. They must be sized based on backend IOPS and it’s the write portion of the workload which suffers a penalty.

If you’re not a storage administrator, that’s ok. I’ll explain. Due to the way that traditional RAID works, a block of data can be read from any disk on which it resides, whereas for a write to happen, the block of data must be written to one or more disks in order to ensure protection of the data. RAID1, or disk mirroring, suffers a write penalty factor of 2x because the writes have to happen on two disks. RAID5 suffers a write penalty of 4x because for each change to the disk, we must read the data, read the parity information, then write the data and write the parity to complete one operation.

Well, mathematically this all adds up. Let’s say we have a 400 desktop environment, with a relatively low 10 IOPS per desktop at 20% read. So the front-end IOPS at steady state would be:

10 IOPS per desktop x 400 Desktops = 4000 IOPS

 If I was using 10k SAS drives at an estimated 125 IOPS per drive, I could get that done with an array of 32 SAS drives. Right?

Wrong. Because the workload is heavy write, the backend IOPS calculation for a RAID5 array would look like this:

(2 read IOPS x 400 desktops) + (8 write IOPS x 400 desktops x 4 RAID5 write penalty) = 800 + 12,800 = 13,600 IOPS

This is because 20% of the 10 IOPS are read and 80% of the IOPS are write. So the backend IOPS required here is 13,600. On those 125 IOPS drives, we’re now at 110 drives (before hot-spares) instead of 32.

But all of the above is still based on this rather silly concept that our users’ average IOPS is all we need to size for. Hopefully we’ve at least assessed the average IOPS per user rather than taking any of the numerous sizing assumptions in vendor whitepapers, e.g. Power Users all consume 12-18 IOPS “steady state”. (In fairness, most vendors will tell you that your mileage will vary.)

Most of our users are used to at least 75 IOPS (a single SATA drive) dedicated to their desktop workload. Our users essentially expect to have far more than 10 IOPS available to them should they need it, such as when they’re launching Outlook. If our goal is a user experience on par with physical, sizing to the averages is just not going to cut it. So if we use this simple sizing methodology, we need to include at least 30% headroom. So we’re up to 140 disks on our array for 400 users assuming traditional RAID5. This is far more than we would need based on raw capacity.
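
If you’d like to replay this arithmetic with your own assessment numbers, here’s a minimal sizing sketch in Python. It simply encodes the worked example above (400 desktops, 10 IOPS each, 20% read, a 4x RAID5 write penalty, 125 IOPS 10k SAS drives, 30% headroom); the function names and structure are just for illustration, not a vendor sizing tool.

```python
import math

# Rough VDI storage sizing sketch using the example figures from this post.
# These are illustrative numbers, not universal constants.

def backend_iops(desktops, iops_per_desktop, read_pct, write_penalty):
    """Convert front-end desktop IOPS into backend array IOPS for traditional RAID."""
    frontend = desktops * iops_per_desktop
    reads = frontend * read_pct
    writes = frontend * (1.0 - read_pct)
    return reads + writes * write_penalty

def drives_required(total_iops, iops_per_drive, headroom=0.30):
    """Add headroom for peaks, then divide by the per-drive IOPS rating."""
    return math.ceil(total_iops * (1.0 + headroom) / iops_per_drive)

raid5_backend = backend_iops(desktops=400, iops_per_desktop=10, read_pct=0.20, write_penalty=4)
print(raid5_backend)                                                 # 13600.0 backend IOPS (800 read + 12,800 write)
print(drives_required(raid5_backend, iops_per_drive=125))            # 142 drives with 30% headroom (~140 as rounded in the text)
print(drives_required(400 * 10, iops_per_drive=125, headroom=0.0))   # 32 drives if sized naively on front-end IOPS alone
```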

The fact is that VDI workloads are very “peaky.” A single user may average 12-18 IOPS once all applications are open, but opening a single application can consume hundreds or even thousands of IOPS if it’s available. So what happens when a user comes in to the office, logs in, and starts any application that generates a significant write workload—at the same time everyone else is doing the same? There’s a storm of random reads and writes on your backend, your application latency increases as the storage tries to keep up, and bad things start to happen in the world of IT.

So What Do We Do About It?

I hope the preceding discussion gives the reader a sense of respect for the problem we’re trying to solve. Now, let’s get to some ways it might be solved cost-effectively.

There are really two ways to succeed here:

1)    Throw a lot of money at the storage problem, sacrifice a goat, and dance in a circle in the pale light of the next full moon [editor’s notes: a) IDS does not condone animal sacrifice and b) IDS recommends updating your resume and LinkedIn profile in this case];

2)    Assess, Design, and Deliver Results in a disciplined fashion.

Assess, Don’t Assume

The first step is to Assess. The good news is that we can understand all of the technical factors for VDI success as long as we pay attention to end user as well as administrator experience. And once we have all the data we need, VDI is mostly a math problem.

Making data-driven fact-based decisions is critical to success. Do not make assumptions if you can avoid doing so. Sizing guidelines outlined in whitepapers, even from the most reputable vendors, are still assumptions if you adopt them without data.

You should always perform an assessment of the current state environment. When we assess the current state from a storage perspective, we are generally looking for at least a few metrics, categorized by a user persona or use case.

  • I/O Requirements (I/O per Second or IOPS)
  • I/O Patterns (Block Size and Read-to-Write Ratio)
  • Throughput
  • Storage Latency
  • Capacity Requirements (GB)
  • Application Usage Profiles

Ideally, this assessment phase involves a large statistical set and runs over a complete business cycle (we recommend at least 30 days). This is important to develop meaningful average and peak numbers.
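
As a rough illustration of what we do with that assessment data, the sketch below reduces per-desktop IOPS samples to the average and peak figures used for sizing. The sample values are hypothetical, and the simple 95th percentile standing in for “peak” is an assumption for the sake of the example, not the output of any particular assessment tool.

```python
import statistics

def summarize(samples):
    """Return (average, 95th-percentile "peak") for a list of IOPS samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))  # simple percentile pick
    return statistics.mean(ordered), ordered[idx]

# Hypothetical samples for one "task worker" persona over an assessment window.
task_worker = [6, 8, 7, 9, 45, 8, 7, 60, 9, 8]
avg, peak = summarize(task_worker)
print(f"avg={avg:.1f} IOPS, p95 peak={peak} IOPS")  # averages hide the spikes you must size for
```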

Design for Success

There’s much more to this than just storage choices and these steps will depend upon your choice of hypervisor and virtual desktop management software, but as I put our fearless VDI implementers up a pretty big tree earlier with the IOPS and latency discussion, let’s resolve some of that.

Given the metrics we’ve gathered above, we can begin to plan our storage environment. As I pointed out above this is not as simple as multiplying the number of users times the average I/O. We also cannot size based only on averages – we need at least 30% headroom.

Of course, while we calculated above the number of disks we’d need to service the backend IOPS requirement with RAID5, in practice we would look to improved storage capabilities and approaches to reduce the impact of this random write workload.

Solid State Disks

Obviously, Solid State Disks offer more than 10 times the IOPS per disk of spinning disks, at greatly reduced access times, because there are no moving parts. If we took the 400-desktop calculation above and used a 5,000 IOPS SSD as the basis for our array, we’d need very few drives to service the IOPS.
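
To put a number on “very few,” here is the same back-of-the-envelope check using the article’s example figures; keep in mind the 5,000 IOPS-per-SSD rating is a planning assumption, and real drives vary widely, especially for sustained random writes.

```python
import math

backend_iops = 13_600   # from the 400-desktop RAID5 example above
ssd_iops = 5_000        # the per-SSD planning figure used in this post
print(math.ceil(backend_iops / ssd_iops))  # 3 SSDs to cover raw IOPS (capacity and RAID overhead aside)
```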

Promising. But there are both cost and reliability concerns here. The cost per GB on SSDs is much higher and write endurance on an SSD drive is finite. (There have been many discussions of MLC, eMLC, and SLC write endurance, so we won’t cover that here).

Auto-Tiering and Caching

Caching technologies can certainly provide many benefits, including reducing the number of spindles needed to service the IOPS requirements and latency reduction.

With read caching, certain “hot” blocks get loaded into an in-memory cache or, more recently, a flash-based tier. When the data is requested, instead of having to seek it on spindles, which can incur tens of milliseconds of latency, the data is available in memory or on a faster tier of storage. So long as the cache is intelligent enough to cache the right blocks, there can be a large benefit for the read portion of the workload. Read caching is a no-brainer. Most storage vendors have options here, and VMware offers a host-based Read Cache.

But VDI workloads are more write intensive. This is where write buffering comes in.

Most storage vendors have write buffers serviced by DRAM or NVRAM. Basically, the storage system acknowledges the write before the write is sent to disk. If the buffer fills up, though, latency increases as the cache attempts to flush data out to the relatively slow spinning disk.

Enter the current champion in this space, EMC’s FAST Cache, which alleviates some concerns around both read I/O and write I/O.  In this model Enterprise Flash is used to extend a DRAM Cache, so if the spindles are too busy to deal with all the I/O, the extended cache is used. Benefits to us: more content in the read cache and more writes in the buffer waiting to be coalesced and sent to disk. Of course, it’s rather more complex than that, but you get the idea.

EMC FAST Cache is ideal in applications with a lot of small block random I/O – like VDI environments – and a high degree of access to the same data. Without FAST Cache, the benefit of the DRAM cache alone is about 20%, so 4 out of every 5 I/Os have to be serviced by a slow spinning disk. With FAST Cache enabled, it’s possible to reduce the impact of read and write I/O by as much as 90%. That best case assumes the FAST Cache is dedicated to VDI and all of the workloads are largely the same. Don’t assume this means you can leverage your existing mixed-workload array without significant planning.

Ok, so if we’re using an EMC VNX2 with FAST Cache and this is dedicated only to VDI, we hope to obtain a 90% reduction of back-end write IO. Call me conservative, but I think we’ll dial that back a bit for planning purposes and then test it during our pilot phase to see where we land. We calculated 12,800 in backend write IO earlier for 400 desktops. Let’s say we can halve that. We’re now at 7200 total IOPS for 400 VDI desktops. Not bad.
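
As a quick sanity check on that planning number, the arithmetic is just the earlier backend figures with a write-reduction factor applied; the 50% factor is the deliberately conservative assumption described above, to be validated during the pilot.

```python
backend_reads = 800        # 2 read IOPS x 400 desktops (no RAID read penalty)
backend_writes = 12_800    # 8 write IOPS x 400 desktops x 4 RAID5 write penalty
write_reduction = 0.50     # conservative planning assumption for the cache, verified in the pilot

total = backend_reads + backend_writes * (1 - write_reduction)
print(total)               # 7200.0 backend IOPS for 400 desktops
```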

Hybrid and All-Flash Arrays

IDS has been closely monitoring the hybrid-flash and all-flash array space and has selected solutions from established enterprise vendors like EMC and NetApp as well as best-of-breed newer players like Nimble Storage and Pure Storage.

The truly interesting designs recognize that SSDs should not be used as if they are traditional spinning disks. Instead these designs optimize the data layout for write. As such, even though they utilize RAID technology, they do not incur a meaningful write penalty, meaning that it’s generally pretty simple to size the array based on front-end IOPS. This also reduces some of the concern about write endurance on the SSDs. When combined with techniques which both coalesce writes and compress and de-duplicate data in-line, these options can be attractive on a cost-per-workload basis even though the cost of Flash remains high.
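
Under that assumption (no meaningful write penalty), the earlier 400-desktop example can be sized on front-end IOPS plus headroom, which is a quick check to run before the capacity question takes over.

```python
frontend = 400 * 10              # 4,000 front-end IOPS from the earlier example
with_headroom = frontend * 1.3   # keep the same 30% peak headroom
print(with_headroom)             # 5200.0 IOPS - only a handful of SSDs at the ~5,000 IOPS planning figure
```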

Using a dedicated hybrid or flash-based array would get us to something like a single shelf needed for 400 users. At this point, we’re more sizing for capacity than I/O and latency, a situation that’s more familiar to most datacenter virtualization specialists. But we’re still talking about an approach with a dedicated array at scale.

Host-Based Approaches

A variety of other approaches to solving this problem have sprung up, including the use of host-based SSDs to offload portions of the I/O, expensive Flash memory cards providing hundreds of thousands of I/Os per card, and software approaches such as Atlantis Computing’s ILIO virtual appliances, which leverage relatively inexpensive system RAM as a low-latency, de-duplicated data store and functionally reduce VDI’s impact on existing storage. (Note: IDS is currently testing the Atlantis Computing solution in our Integration Lab.)

Design Conclusion

Using a combination of technology approaches, it is now possible to provide a VDI user experience that exceeds current user expectations at a cost per workload less than the acquisition cost of a standard laptop. The server-hosted VDI approach has many benefits in terms of operational expense reduction as well as data security.

Delivering Results

In this article, we’ve covered one design dimension that influences the success of VDI projects, but there’s much more to this than IOPS and latency. A disciplined engineering and delivery methodology is the only way to deliver results reliably for your VDI project. At minimum, IDS recommends testing your VDI environment at scale using tools such as LoginVSI or View Planner as well as piloting your solution with end user champions.

Whether you’re just getting started with your VDI initiative, or you’ve tried and failed before, IDS can help you achieve the outcomes you want to see. Using our vendor-agnostic approach and disciplined methodology, we will help you reduce cost, avoid business risk, and achieve results.

We look forward to helping you.


Photo credit: linademartinez via Flickr

Choosing the Best Replication with VMware vCenter Site Recovery Manager: vSphere vs. Array-based


I recently had the opportunity to implement VMware vCenter Site Recovery Manager (SRM) in three different environments using two different replication technologies (vSphere Replication and Array-based Replication). The setup and configuration of the SRM software is fairly straightforward. The differences come into play when deciding which replication option is best for your business needs.

vSphere Replication

vSphere Replication is built into SRM 5.0 and is included no matter what replication technology you decide to use. With vSphere Replication, you do not need costly identical storage arrays at both sites, because the replication is managed through vCenter. Managing through vCenter also gives you more flexibility in which VMs are protected: VMs can be protected individually, as opposed to at the VMFS datastore level. vSphere Replication is deployed and managed by virtual appliances installed at both sites. Replication is then handled by the ESXi hosts, with the assistance of the virtual appliances. vSphere Replication supports RPOs as low as 15 minutes.

vSphere Replication Benefits:

  • No need for costly storage arrays at both sites
  • More flexibility in choosing which VMs are protected (can do so individually)

Array-based Replication

The two Array-based Replication technologies that I implemented were EMC MirrorView and SRDF on EMC Symmetrix. Both tie into SRM using a storage replication adapter (SRA), a program provided by the array vendor that gives SRM access to the array. Configuration of replication is done outside of vCenter, at the array level. Unlike vSphere Replication, Array-based Replication requires you to protect an entire VMFS datastore or LUN, as opposed to individual VMs. Two of the biggest benefits of Array-based Replication are automated re-protection of VMs and near-zero RPOs.

Array-based Replication Benefits:

  • Automated re-protection of VMs
  • Near-zero RPOs

Final Thoughts

VMware vCenter Site Recovery Manager gives you the disaster recovery management that is highly sought after in today’s market, allowing you to perform planned migrations, failover and failback, automated failback and non-disruptive testing.

Photo credit: adamhenning via Flickr


Your Go-To Guide For IT Optimization & Cloud Readiness, Part II


[Note: This post is the second in a series about the maturity curve of IT as it moves toward cloud readiness. Read the first post here about standardizing and virtualizing.]

I’ve met with many clients over the last several months who have reaped the rewards of standardizing and virtualizing their data center infrastructure. Footprints have shrunk from rows to racks. Power and cooling costs have been significantly reduced, while capacity, uptime and availability have increased.

Organizations that made these improvements made concerted efforts to standardize, as this is the first step toward IT optimization. It’s far easier to provision VMs and manage storage and networking from a single platform, and the hypervisor is an awesome tool for doing more with less hardware.

So now that you are standardized and highly virtualized, what’s next?

My thought on the topic is that after you’ve virtualized your Tier 1 applications like e-mail, ERP, and databases, the next step is to work toward building out a converged infrastructure. Much like cloud, convergence is a hyped technology term that means something different to every person who talks about it.

So to me, a converged infrastructure is defined as a technology system where compute, storage, and network resources are provisioned and managed as a single entity.


Sounds obvious and easy, right?! Well, there are real benefits that can be gained; yet, there are also some issues to be aware of. The benefits I see companies achieving include:

→ Reducing time to market to deploy new applications

  • Improves business unit satisfaction with IT, with the department now proactively serving the business’s leaders, instead of reacting to their needs
  • IT is seen as improving organizational profitability

→ Increased agility to handle mergers, acquisitions, and divestitures

  • Adding capacity for growth can be done in a scalable, modular fashion within the framework of a converged infrastructure
  • When workloads are no longer required (as in a divestiture), the previously required capacity is easily repurposed into a general pool that can be re-provisioned for a new workload

→ Better ability to perform ongoing capacity planning


  • With trending and analytics to understand resource consumption, it’s possible to get ahead of capacity shortfalls by understanding when they will occur several months in advance
  • Modular upgrades (no forklift required) afford the ability to add capacity on demand, with little to no downtime

Those are strong advantages when considering convergence as the next step beyond standardizing and virtualizing. However, there are definite issues that can quickly derail a convergence project. Watch out for the following:

→ New thinking is required about traditional roles of server, storage and network systems admins

  • If you’re managing your infrastructure as a holistic system, it’s overly redundant to have admins with a singular focus on a particular infrastructure silo
  • Typically means cross training of sys admins to understand additional technologies beyond their current scope

→ Managing compute, storage, and network together adds new complexities

  • Firmware and patches/updates must be tested for inter-operability across the stack
  • Investment required either in a true converged infrastructure platform (like Vblock or Exadata) or a tool to provide software defined Data Center functionality (vCloud Director)

In part three of IT Optimization and Cloud Readiness, we will examine the OEM and software players in the infrastructure space and explore the benefits and shortcomings of converged infrastructure products, reference architectures, and build-your-own type solutions.

Photo credit: loxea on Flickr

Faster and Easier: Cloud-based Disaster Recovery Using Zerto


Is your Disaster Recovery/Business Continuity plan ready for the cloud? Remember the days when implementing DR/BC meant having identical storage infrastructure at the remote site? The capital costs were outrageous! Plus, the products could be complex and time consuming to setup.

Virtualization has changed the way we view DR/BC. Today, it’s faster and easier than ever to setup. Zerto allows us to implement replication at the hypervisor layer. It is purpose built for virtual environments. The best part: it’s a software-only solution that is array agnostic and enterprise class. What does that mean? Gone are the days of having an identical storage infrastructure at the DR site. Instead, you replicate to your favorite storage—it doesn’t matter what you have. It allows you to reduce hardware costs by leveraging existing or lower-cost storage at the replication site.


How does it work? You install the Zerto Virtual Manager on a Windows server at the primary and remote sites. Once installed, the rest of the configuration is completed through the Zerto tab in VMware vCenter. Simply select the Virtual Machines you want to protect and that’s about it. It supports fully automated failover and failback and the ability to test failover, while still protecting the production environment. Customers are able to achieve RTOs of minutes and RPOs of seconds through continuous replication and journal-based, point-in-time recovery.

Not only does Zerto protect your data, it also provides complete application protection and recovery through virtual protection groups.

Application protection:

  • Fully supports VMware vMotion, Storage vMotion, DRS, and HA
  • Journal-based point-in-time protection
  • Group policy and configuration
  • VSS Support

Don’t have a replication site? No problem. You can easily replicate your VMs to a cloud provider and spin them up in the event of a disaster.

Photo credit: josephacote on Flickr


Your Go-To Guide For IT Optimization & Cloud Readiness, Part I


As a Senior IT Engineer, I spend a lot of time in the field talking with current and potential clients. Over the last two years I’ve seen a trend in the questions that company decision makers are asking, and it revolves around developing and executing the right cloud strategy for their organization.

Across all the companies I’ve worked with, there are three major areas that C-level folks routinely ask about: reducing cost, improving operations and reducing risk. Over the years I’ve learned that an accurate assessment of the organization is imperative, because it’s the key to understanding the current state of the company’s IT infrastructure, people and processes. From those discoveries, I’ve refined the following framework to help decision makers effectively become cloud ready.

Essentially, IT infrastructure optimization and cloud readiness adhere to the same maturity curve, moving upstream from standardized to virtualized/consolidated and then converged. From there, the remaining journey is about automation and orchestration. Where an organization currently resides within that framework dictates my recommendations for tactical next steps toward more strategic goals.

Standardization is the first topic to explore, because it is the base of all business operations and direction. The main driver to standardize is reducing the number of server and storage platforms in the data center.

The more operating systems and hardware management consoles your administrators need to know, the less efficient they become. There’s little use for Windows Server 2003 expertise in 2013; it is important to find a way to port those applications to your current standard. The fewer standards your organization has to maintain, the fewer variables exist when troubleshooting issues. Ultimately, fewer standards allow IT to return its focus to initiatives essential to the business. Implementing asset life-cycle policies can limit costly maintenance on out-of-warranty equipment and ensure your organization is always taking advantage of advances in technology.

After implementing a higher degree of standardization, organizations are better equipped to take the next step by moving to a highly virtualized state and by greatly reducing the amount of physical infrastructure that’s required to serve the business.  By now most everyone has at least leveraged virtualization to some degree.  The ability to consolidate multiple physical servers onto a single physical host dramatically reduces IT cost as an organization can provide all required compute resources on far fewer physical servers.

I know this because I’ve worked with several organizations who’ve experienced consolidation ratios of 20-1 or greater.  One client I’ve worked with has extensively reduced their data center footprint, migrating 1200 physical servers onto 55 total virtual hosts. While the virtual hosts tend to be much more robust than the typical physical application server, the cost avoidance is undeniable.  The power savings from decommissioning 1145 servers at their primary data center came to over $1M in the first year alone.

Factor in cooling and a three-year refresh cycle that would have required 1,100+ servers to be purchased, and the savings add up quickly. In addition to the hard dollar cost savings, virtualization produces additional operational benefits. Business continuity and disaster recovery exposure can be mitigated by using the high availability and off-site replication functionality embedded in today’s hypervisors. Agility to the business can increase as well, as the time required to provision a virtual server on an existing host is typically weeks to months faster than what’s required to purchase, receive, rack, power, and configure a physical server.

Please look for Part II of “Your Guide To IT Optimization & Cloud Readiness” as Mr. Rosenblum breaks down Convergence and Automation.

Photo by reway2007

How To: Replicating VMware NFS Datastores With VNX Replicator


To follow up on my last blog regarding NFS Datastores, I will be addressing how to replicate VMware NFS Datastores with VNX Replicator. Because NFS Datastores exist on VNX file systems, they can be replicated to an off-site VNX over a WAN.

Leveraging VNX Replicator allows you to use your existing WAN link to sync file systems with other VNX arrays. All that is required is the Replicator license enabled on the off-site VNX and your existing WAN link; there is no additional hardware other than the replicating VNX arrays and the WAN link.

VNX Replicator leverages checkpoints (snapshots) to record changes made to the file systems. As changes are made to the source file system, the replication checkpoints initiate writes to the target, keeping the file systems in sync.

Leveraging Replicator with VMware NFS datastores creates a highly available virtual environment that keeps your NFS datastores in sync and available remotely whenever needed. VNX Replicator allows a maximum of ten minutes of “out-of-sync” time, so depending on WAN bandwidth and availability, your NFS datastores can be restored to within ten minutes of the point of failure.

The manual NFS failover process can be very time consuming: once you initiate the failover, you still have to mount the datastores in the target virtual environment and add each VM to the inventory. When you finally have all of the VMs loaded, you must then configure the networking.

Fortunately, VMware Site Recovery Manager (SRM) can automate the entire process. Once you have configured the failover policies, SRM will mount all the NFS datastores and bring the virtual environment online. These are just a few of the ways VNX Replicator can integrate with your systems; if you are looking for a deeper dive or other creative replication solutions, contact me.

Photo Credit: hisperati

Why, Oh Why To Do VDI?


I recently became a Twit on Twitter, and have been tweeting about my IT experiences with several new connections. In doing so, I came across a tweet about a contest to win some free training, specifically VMware View 5 Essentials from @TrainSignal – sweet!

Below is a screen capture of the tweet:


A jump over to the link provided in the tweet explains that, in order to win, you should comment on one or all of the questions below in that blog post. Instead of commenting on that blog, why not address ALL of the questions in my own blog article at IDS?! Without further ado, let’s jump right into the questions:

Why are Virtual Desktop technologies important nowadays, in your opinion?

Are you kidding me?!

If you are using a desktop computer or workstation at work, or a laptop at home or work, you are well aware that technology moves so fast that updated versions are released as soon as you buy a “new” one. Not to mention that laptops usually come configured with what the vendor or manufacturer thinks you should be using, not with what is best, most efficient or fastest. More often than not, you are provided with what someone else thinks is best for the user. The reality is that only you – the user – know what you need, and if no one bothers to ask you, there can be a feeling of being trapped, having no options, or resignation, all of which tend to lead to the dreaded “buyer’s remorse.”

When you get the chance to use a virtual desktop, you finally get a “tuned-in” desktop experience similar to or better than the user experience that you have on the desktop or laptop from Dell, HP, IBM, Lenovo, Gateway, Fujitsu, Acer and so on.

Virtual desktops offer a “tuned” experience because architects design the infrastructure and solution end to end: from the operating system in the virtual desktop (be it Windows XP, Windows 7, or soon Windows 8) to the right number of virtual CPUs (vCPUs), the right amount of guest memory, disk IOPS, network IOPS and everything else you wouldn’t want to dive into the details of. A talented VDI architect will consider every single component when designing a virtual desktop solution, because the user experience matters – there is no selling users on the experience “next time.” Chances are if you have a negative experience the first time, you will never use a virtual desktop again, nor will you have anything good to say when the topic comes up at your neighborhood barbecue or pool party.

The virtual desktop is imperative because it drives the adoption of heads-up displays (HUDs) in vehicles, at home and in the workplace, as well as slimmer tablet devices. Personally, when I think about the future of VDI, I envision expandable OLED flex screens with touch-based (scratch-resistant) interfaces that connect wirelessly to private or public cloud-based virtual desktops. The virtual desktop is the next frontier, leaving behind the antiquated desktop experience that has been dictated to the consumer by vendors and manufacturers and that simply does not give us what we need the first time.

What are the most important features of VDI in your opinion?

Wow, the best features of VDI require a VIP membership in the exclusive VDI community. Seriously though, users and IT support staff are usually the last to learn about the most important features, yet they are the first to be impacted when a solution is architected, because those two groups are the most in lock-step with the desktop user experience.

The most effective way for me to leave a lasting impression is to lay out the most important features in a few bullet statements:

  • Build a desktop in under 10 minutes – how about 3 minutes?
  • Save personal desktop settings and recover them immediately after rebuilding a desktop.
  • Increased speed by which more CPU or RAM can be added to a virtual desktop.
  • Recovery from malware, spyware, junkware, adware, trojans, viruses, everything-ware – you can save money by simply rebuilding in less than 10 minutes.
  • Access to the desktop from anywhere, securely.
  • It just works, like your car’s windshield!

That last point brings me to the most important part of VDI, that when architected, implemented and configured properly, it just works. My mantra in technology is “Technology should just work, so you don’t have to think about technology, freeing you up to just do what you do best!”

What should be improved in VDI technologies that are now on the market?

The best architects, solution providers and companies are the best because they understand the current value of a solution, in this case VDI, as well as the caveats and ask themselves this exact question. VDI has very important and incredibly functional features, but there is a ton of room for improvement.

So, let me answer this one question with two different hats on – one hat being a VDI Architect and the other hat being a VDI User. My improvement comments are based on the solution provided by VMware as I am most familiar with VMware View.  In my opinion, there is no other vendor in the current VDI market who can match the functionality, ease of management and speed that VMware has with the VMware View solution.

As a VDI Architect, I am looking for VMware to improve their VMware View product by addressing the below items:

  • Separate VMware View Composer from being on the VMware vCenter Server.
  • Make ALL of the VMware View infrastructure applications, appliances and components 64-bit.
  • Figure out and support Linux-based linked-clones. (The Ubuntu distribution is my preference.)
  • Get rid of the VMware View Client application – this is 2012.
  • Provide a fully functional web-based or even .hta based access to the VMware View virtual desktop that is secure and simple.
  • Build database compatibility with MySQL, so there is a robust FREE alternative to use.
  • Build Ruby-on-Rails access to manage the VMware View solution and database. Flash doesn’t work on my iPad!

As a VDI User, I am looking for VMware to improve:

  • Access to my virtual desktop: I hate installing another application that requires “administrator” rights.
  • Fix ThinPrint and peripheral compatibility or provide a clearer guide for what is supported in USB redirection.
  • Support USB 3.0 – I don’t care that my network or Internet connection cannot handle the speed – I want the sticker that says that the solution is USB 3.0 compatible and that I could get those speeds if I use a private cloud based VDI solution.
  • Tell me that you will be supporting the Thunderbolt interface and follow through within a year.
  • Support web-cams, I don’t want to know about why it is difficult, I just want it to work.
  • Support Ubuntu Linux-based virtual desktops.

In summary, you never know what you will find when using social media. The smallest of tweets or the longest of blog articles can elicit a thought that will provoke either a transformation in process or action in piloting a solution. If you are looking to pilot a VDI solution, look no further… shoot me an email or contact Integrated Data Storage to schedule a time to sit down and talk about how we can make technology “just work” in your datacenter!  Trust me when I say, your users will love you after you implement a VDI solution.

Photo Credit: colinkinner

My Personal Journey To The Cloud: From Angry Birds to Business Critical Applications


Thinking back on it, I can very specifically remember when I started to really care about “The Cloud” and how drastically it has changed my current way of thinking about any services that are provided to me. Personally, the moment of clarity on cloud came shortly after I got both my iPhone and iPad and was becoming engrossed in the plethora of applications available to me. Everything from file sharing and trip planning to Angry Birds and Words with Friends … I was overwhelmed with the amount of things I could accomplish from my new mobile devices and how less dependent I was becoming on my physical location, or the specific device I was using, but completely dependent on the applications that I used on a day-to-day basis. Now I don’t care if I’m on my iPad at the beach or at home on my computer as long as I can access applications like TripIt or Dropbox because I know my information will be there regardless of my location.

As I became more used to this concept, I quickly became an application snob and wouldn’t consider any application that wouldn’t allow me cross-platform access to use from many (or all) of my devices. What good is storing my information in an application on my iPhone if I can’t access it from my iPad or home computer? As this concept was ingrained, I became intolerant of any applications that wouldn’t sync without my manual interaction. If I had to sync via a cable or a third party service, it was too inconvenient and would render the application useless to me in most cases. I needed applications that would make all connectivity and access magically happen behind the scenes, while providing me with the most seamless and simplistic user interface possible. Without even knowing it, I had become addicted to the cloud.

Cloud takes the emphasis away from infrastructure and puts it back where it should be: on the application. Do I, as a consumer, have anything to benefit from creating a grand infrastructure at home where my PC, iPhone, iPad, Android phone, and Mac can talk to one another? I could certainly develop some sort of complex scheme with a network of sync cables and custom-written software to interface between all of these different devices …

But how would I manage it? How would I maintain it as the devices and applications change? How would I ensure redundancy in all of the pieces so that a hardware or software failure wouldn’t take down the infrastructure that would become critical to my day-to-day activities? And how would I fund this venture?

I don’t want to worry about all of those things. I want a service … or a utility. I want something I can turn on and off and pay for only when I use it. I want someone else to maintain it for me and provide me SLAs so I don’t have to worry about the logistics on the backend. Very quickly I became a paying customer of Hulu, Netflix, Evernote, Dropbox, TripIt, LinkedIn, and a variety of other service providers. They provide me with the applications I require to solve the needs I have on a day-to-day basis. The beautiful part is that I don’t ever have to worry about anything but the application and the information that I put into it. Everything else is taken care of for me as part of a monthly or annual fee. I’m now free to access my data from anywhere, anytime, from any device and focus on what really matters to me.

If you think about it, this concept isn’t at all foreign to the business world. How many businesses out there really make their money from creating a sophisticated backend infrastructure and mechanisms for accessing that infrastructure? Sure, there are high-frequency trading firms and service providers that actually do make their money based on this. But the majority of businesses today run complex and expensive infrastructures simply because that is what their predecessors have handed down to them and they have no choice but to maintain it.

Why not shift that mindset and start considering a service or utility-based model? Why spend millions of dollars building a new state-of-the-art Data Center when they already exist all over the World and you can leverage them for an annual fee? Why not spend your time developing your applications and intellectual property which are more likely to be the secret to your company’s success and profitability and let someone else deal with the logistics of the backend?

This is what the cloud means to business right now. Is it perfect for everyone? Not even close. And unfortunately the industry is full of misleading cloud references, because it is the biggest buzzword since “virtualization” and everyone wants to ride the wave. Providing a cloud for businesses is a very complex concept and requires a tremendous amount of strategy, vision, and security to be successful. If I’m TripIt and I lose your travel information while you’re leveraging my free service, do you really have a right to complain? If you’re an insurance company and you pay me thousands of dollars per month to securely house your customer records and I lose some of them, that’s a whole different ballgame. And unfortunately there have been far too many instances of downtime, lost data, and leaked personal information, so the cloud seems to be moving from a white fluffy cloud surrounded by sunshine to an ominous gray cloud that brings bad weather and destruction.

The focus of my next few blogs will be on the realities of the cloud concept and how to sort through the myth and get to reality. There is a lot of good and bad out there and I want to highlight both so that you can make more informed decisions on where to use the cloud concept both personally and professionally to help you achieve more with less…because that’s what the whole concept is about. Do more by spending less money, with less effort, and less time.

I will be speaking on this topic at an exclusive breakfast seminar this month … to reserve your space please contact Shannon Nelson: .

Picture Credit: Shannon Nelson

To The Cloud! The Reality Behind The Buzzword


I always chuckle when I think back to those Microsoft Windows Live commercials where they exclaim: “To the Cloud!” like they’re super heroes. In 2006-2007 the term “Cloud” was an overused buzzword that had no official meaning – at that time, it seemed like a lot of people were talking about cloud computing or putting things in the cloud but no one could actually articulate what that meant in simple terms or how it would work.

A real understanding and documentation in the technology community about cloud computing probably didn’t come together until mid-to-late 2008.

Today is a much different story. This year Gartner reported that:

nearly one third of organizations either already use or plan to use cloud or software-as-a-service (SaaS) offerings to augment their core business…

It is truly amazing to see how much this segment has matured in such a short period. We’re well past the buzzword stage and “The Cloud” is a reality. As we change the nature and meaning of the traditional infrastructure, we also need to ensure that the way your organization approaches security changes with it.

Fundamentally, we cannot implement cloud security the same way we go about implementing traditional security. The biggest difference being that some of the infrastructure components and computational resources are owned and operated by an outside third party. This third party may also host multiple organizations together in a multi-tenant platform.
To break the buzzword down in terms of cloud + security, here are the three best steps to help you develop a cloud strategy while ensuring security is involved to minimize risk:

Get Involved

Security professionals should be involved early in the process of choosing a cloud vendor, with the focus on the CIA triad of information security: Confidentiality, Integrity and Availability. Concerns about regulatory compliance, controls and service level agreements can be dealt with up front to quickly approve or disqualify vendors.

It’s Still Your Data

You know what is best for your company and understand how policies and regulations affect your business. It’s not reasonable to expect your provider to fully understand how your business should be governed. You are ultimately responsible for protecting your data and for ensuring that your provider can implement the most necessary security measures.

Continuously Assess Risk

It’s important to identify the data that will be migrated. Does it make sense to migrate credit card data, sensitive information or personally identifiable information? If so, what measures will you put in place to ensure that this information continues to be protected once you migrate it to the cloud? How will you manage this data differently? What metrics around security controls will you use to report to audit and compliance?

These questions, plus many more, will help you assess where your risk is. As each question is answered, document the answer in your policies and procedures going forward.
Photo Credit: fifikins