Category: Virtualization

Bringing Sexy Back! With Cisco, VMware and EMC Virtualization

Filed under: Cisco, EMC, Virtualization, VMware

Yeah I said it: “IDS just brought Sexy Back!”

To recap: a recent customer sought to finally step into the Virtual Limelight. This particular customer, whose vertical is the medical industry, purchased four Cisco UCS chassis and eleven B200 blades. Alongside the Cisco servers, they purchased an EMC VNX 5500 OE unified array with two Cisco MDS 9148 FC switches.

Our plan was to migrate over one hundred virtual machines, running on fifteen physical ESX hosts, to the new Cisco/VMware vSphere 5.0 environment.

Once we successfully moved the VMs over, we began virtualizing the remaining physical hosts. The reality is that not all hosts could be moved so abruptly, so we are still in the process of converting them. However, just by moving the ESX hosts and ten physical servers, our client is already seeing tremendous drops in power usage, server-management overhead and data center space consumed.

Here is what we started with, otherwise known as the “before sexy”:

A picture is worth a thousand words, so let me just show you exactly what “sexy” looks like in their current data center:

The moral of the story is not to dive head first into centralized storage and virtualization, but to consider what it costs to manage multiple physical servers running applications that under-utilize your hardware. It is also good to keep in mind what it costs to keep those servers operational (power/cooling) and maintained. If you don’t know what these figures look like, or how to bring sexy back into your data center – just ask me, the resident Justin Timberlake over here at IDS.

Photo Credit: PinkMoose

Integrating EMC RecoverPoint Appliance With VMware Site Recovery Manager

Filed under: Disaster Recovery, EMC, How To, Virtualization, VMware

For my “from the field” post today, I’ll be writing about integrating the EMC RecoverPoint Appliance (RPA) with VMware Site Recovery Manager (SRM). Before we dive in, if you are not familiar with RPA technology, let me start with a high-level overview:

RPAs are block-based replication appliances that replicate LUNs over IP. They are zoned via FC to all available storage ports, and they leverage a “replication journal” to track changes within each LUN. Once the LUNs have fully seeded between the two sites, the journal ensures that only changed deltas are sent over the WAN, which allows you to keep your existing WAN link rather than spending more money on WAN expansion. Because the RPA tracks every change to the LUNs, it can create a bookmark every 5-10 seconds, depending on the rate of change and available bandwidth, keeping your data within roughly a 10-second recovery point objective (RPO). RPA also allows you to restore or test your replicated data from any one of those bookmarks.

Leveraging RPA with VMware LUNs greatly increases the availability of your data during maintenance or a disaster. Because RPAs replicate block LUNs, they will replicate LUNs that have datastores formatted on them.

At a high level, to fail over a datastore you would:

  1. Initiate a failover on the RPA.
  2. Add the LUNs into an existing storage group in the target site.
  3. Rescan your HBAs in vSphere.
  4. Once the LUNs are visible, you will notice a new datastore available.
  5. Open the datastore and add all the VMs into inventory.
  6. Once all the VMs are added, configure your networking and power up your machines.

Although this procedure may seem straightforward, it will increase your RTO (Recovery Time Objective).
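You can script part of that manual runbook to claw back some time. Below is a minimal pyVmomi sketch of steps 3-5 above; the vCenter address, credentials and .vmx path are hypothetical placeholders, and it assumes the failover has already been initiated on the RPA and the LUNs added to the target storage group:

    # Sketch: steps 3-5 of the manual failover, scripted with pyVmomi.
    # vcenter.example.com, the credentials and the .vmx path below are
    # hypothetical placeholders, not values from this environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Step 3: rescan HBAs (and VMFS) on every host so the new LUNs appear.
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        host.configManager.storageSystem.RescanAllHba()
        host.configManager.storageSystem.RescanVmfs()

    # Steps 4-5: register each VM found on the replicated datastore.
    datacenter = content.rootFolder.childEntity[0]            # assumes one datacenter
    pool = datacenter.hostFolder.childEntity[0].resourcePool  # first cluster's pool
    for vmx in ["[ReplicatedDS] web01/web01.vmx"]:            # placeholder paths
        datacenter.vmFolder.RegisterVM_Task(path=vmx, asTemplate=False, pool=pool)

    Disconnect(si)

Even scripted, you still own the discovery, ordering and testing yourself, which is exactly the gap SRM fills.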

With the VMware Site Recovery Manager (SRM) plug-in, the failover procedure can be automated.  With SRM you have the ability to build policies defining which vSwitch each VM should move to, as well as which VMs to power up first.  Once the policies are built and tested (yes, you can test failover), to fail over your virtual site you simply hit the failover button and watch the magic happen.

SRM will automate the entire failover process and bring your site online in a matter of seconds or minutes, depending on the size of your virtual site.  If you are considering replicating your virtual environment, I’d advise weighing how long you can afford to be down (RTO) and how much data you can afford to lose (RPO).  The combination of RecoverPoint Appliance and Site Recovery Manager can assure that you achieve your disaster recovery goals.
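As a back-of-the-napkin way to frame that conversation, you can sanity-check a target RPO against your LUNs’ change rate and WAN link. A quick Python sketch; every figure in it is a made-up example, not a measurement from any real environment:

    # Rough RPO feasibility check. All numbers are hypothetical examples.
    change_rate_mb_s = 4.0             # average changed data per second on the LUNs
    wan_mbps = 100                     # WAN link speed in megabits per second
    usable_mb_s = wan_mbps / 8 * 0.7   # assume ~70% usable after protocol overhead

    target_rpo_s = 10
    backlog_mb = change_rate_mb_s * target_rpo_s   # data produced per RPO window
    ship_time_s = backlog_mb / usable_mb_s         # time to replicate that window

    print(f"{backlog_mb:.0f} MB per {target_rpo_s}s window, "
          f"shipped in {ship_time_s:.1f}s at {usable_mb_s:.1f} MB/s")
    print("RPO looks feasible" if ship_time_s <= target_rpo_s else "RPO at risk")

If the ship time exceeds the window, the journal falls further and further behind and your real RPO quietly grows.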

How To: VMware High Availability for Blade Chassis

Filed under: Cisco, Virtualization, VMware

VMware High Availability (HA) is a great feature that allows guest virtual machines in a cluster to survive a host failure. Some quick background: a cluster is a group of hosts that work together harmoniously and operate as a single unit, and a host is a physical machine running a hypervisor such as ESX.

So, what does HA do? If a host in the cluster fails, all of the guest machines on that host go down with it. HA will power up those guests on another host in the cluster, which can reduce downtime significantly, especially if your datacenter is 30 minutes from your house at 2am. You can continue to sleep and address the host failure in the morning. Sounds great, so what’s the catch?

The catch is in how HA configures itself in the cluster. The first 5 hosts in a cluster become primary nodes and all the other hosts are secondary nodes. A primary node synchronizes the settings and status of all hosts in the cluster with the other primary nodes. A secondary node basically reports its status to the primary nodes. Secondary nodes can be promoted to primary nodes, but only under specific circumstances, such as putting a host in maintenance mode or disconnecting a node from the cluster. HA only needs one primary node to function. I don’t see a catch here…?

The catch comes into the use of a blade center. Suppose you have Chassis A and Chassis B:

We bought two blade chassis for redundancy. Redundant power, switches, electricity, and cluster hosts spread across both. If one chassis fails, the other one has plenty of resources. Fully redundant! Maybe. If I were to add my first 5 hosts to my cluster from chassis A, then all of my primary nodes would be on chassis A. If chassis A fails, NO guests from the failed hosts will be powered up on chassis B. Why? All chassis B hosts are secondary nodes, and HA requires at least 1 primary! It’s 2am and now you’re half asleep, driving to the datacenter despite all the redundancy.

To avoid this issue, alternate between chassis when adding hosts to a cluster, so the primary nodes end up spread across both.
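If you script the build-out, a trivial way to enforce that ordering is to interleave the two chassis when generating the list of hosts to add. A small Python sketch (the host names are hypothetical placeholders):

    # Interleave hosts from two chassis so the first five hosts added to the
    # cluster (the ones HA elects as primary nodes) span both enclosures.
    from itertools import chain, zip_longest

    chassis_a = ["esx-a1", "esx-a2", "esx-a3", "esx-a4"]   # placeholders
    chassis_b = ["esx-b1", "esx-b2", "esx-b3", "esx-b4"]

    add_order = [h for h in chain.from_iterable(zip_longest(chassis_a, chassis_b))
                 if h is not None]
    print(add_order)   # esx-a1, esx-b1, esx-a2, esx-b2, ... primaries on both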

Following “The Year of the Breach” IT Security Spending Is On The Rise

Filed under: Backup, Data Loss Prevention, Disaster Recovery, RSA, Security, Virtualization

In IT circles, the year 2011 is now known as “The Year of the Breach”. Major companies such as RSA, Sony, Epsilon, PBS and Citigroup have experienced serious, high-profile attacks. That raises the question: if huge multi-million dollar companies like these are being breached, what does that mean for my company? How can I take adequate precautions to ensure that I’m protecting my organization’s data?

If you’ve asked yourself these questions, you’re in good company. A recent study released by TheInfoPro states that 37% of information security professionals are planning to increase their security spending in 2012.
In light of the recent security breaches, as well as the increased prevalence of mobile devices in the workplace, IT security is currently top of mind for many organizations. In fact, at most of the companies IDS is working with, I’m seeing executives take more of an interest in IT security. CEOs and CIOs are gaining a better understanding of technology and of what is necessary to improve the company’s security position going forward. This is a huge win for security practitioners and administrators, because they are now able to get the top-level buy-in needed to make important investments in infrastructure. IT security is fast becoming part of the conversation when making business decisions.
 
I expect IT infrastructure to continue to change rapidly as virtualization grows and cloud-based infrastructures mature. We’re also dealing with an increasingly mobile workforce, where employees use their own laptops, smart phones and tablets instead of those issued by the company. Protecting these assets becomes even more important as compliance regulations grow stricter and true enforcement begins.
 
Some of the technologies that grew in 2011, and which I foresee growing further in 2012, include Data Loss Prevention, application-aware firewalls and enterprise Governance, Risk and Compliance. Each of these technologies focuses on protecting sensitive information and ensuring that authorized individuals are using it responsibly. Moving into 2012, my security crystal ball tells me that everyone, from the top level down, will increase not only their security spend but, most importantly, their awareness of IT security and of just how much their organization’s data is worth protecting.
 
Photo Credit: Don Hankins
 

The Shifting IT Workforce Paradigm: From Sys Admins to Capacity Planners

Filed under: Cloud Computing, EMC, Virtualization

We talk about a lot of paradigm shifts in IT: the shift to a converged network, the shift to virtualization, etc. There is a more important shift happening, however, that we aren’t talking about nearly enough: the absolutely necessary shift in the people who make up our IT workforce.

The IT field as a whole is at, or soon approaching, one of those critical points in our skills curve where today’s critical skills become all but obsolete. It is similar to the sudden onset of open systems once the mainframe’s dominance of the datacenter ended. We had no one who knew how to operate and tune the new systems, and that kept the adoption curve somewhat slow until it was resolved through re-training and an influx of workers who’d had exposure to UNIX through their college education.

We’re at that stage again, or pretty near approaching it. The concept of the “private cloud” is going to stall soon, I believe, not because the technology doesn’t work, and not because it isn’t useful, but because we don’t have people in IT who are trained to deal with it. Let’s be very clear – this isn’t a tools issue. I’ve written about the tools problem we have with private cloud in the past, but this is different. This issue is actually much harder to resolve because it isn’t as simple as taking an employee who is used to CICS commands and teaching them Solaris commands to use instead. This requires a different mindset, a different way of thinking about IT and a realization that the value of the IT worker is not in how well they can script a complex set of commands, but in harnessing the power of the information they ultimately control.

“Private Cloud” is not about a technology. It is about creating an agile utility the business can use any way they need anytime they want. It is about getting out of the business of clicking Agree, Next, Next, Next, Next, Finish and getting into the business of strategic capacity management and information analytics. This involves skills most IT people either don’t have or aren’t allowed to use, because they are currently machine managers, rack and stack specialists, and uptime wizards. These new skills require less mechanical action and more interaction with the business. We need to shift from being simply systems administrators to capacity planners (and more).

I’ve been a capacity planner and a systems engineer in IT departments. They’re different jobs, entail different ways of thinking, require different levels of interaction with the business, and don’t have a lot of crossover in skill sets other than a fundamental knowledge of how systems work. I’ve talked to several customers and prospects about this, and they all seem to recognize a skills train is headed toward them, but they don’t have any idea how big the train is, what direction it’s traveling, whether it left Chicago or Philadelphia, or how to get on it.

There are a few folks out there who seem to realize what is happening, and they’re trying to get in front of it. Although EMC is wrapping the concept in their over-hyped, buzz-centric use of the word “cloud”, they are offering some new courses within their Proven Professional program that seem to grasp the shift. I’ve also seen a few seminar fliers come through my mail that might hit the mark. The problem is they’re all skimming the surface. We need some fundamental changes at the university level, and perhaps a shift away from pure technology focus in certification programs, to accelerate this paradigm shift.

I’m interested in comments here. Is your organization training you to be the most useful asset you can be to the business in this shift or are they taking the new technology and keeping you in your same role? Are there new educational opportunities I’m not seeing in other parts of the country to help us move from system administrators to business capacity analysts?

Let me know.

Photo Credit: BiblioArchives/LibraryArchives

VMware View Client: It’s All Fun And Games Until Someone Can’t Remotely Log On …

Filed under: How To, View, Virtualization, VMware, vSphere

…  to their virtual desktop while traveling in Europe.

Why is this issue occurring?

Did you configure the vSphere environment correctly?

Did the View Administrator make a change that you are unaware of?

Where is the documentation binder, if you even have one?

Where should you check first?

Since we are focusing on a remote virtual desktop, let’s trace from the client into the virtual environment … similar to following the OSI model from the Physical Layer up to the Application Layer until the problem is found. The exception is that we are following the issue from outside our network in – reference the diagram below:

[Diagram: tracing the connection from the remote client into the virtual environment]

 

In order for our scenario to play out, let’s assume the following:

  1. A virtual machine (VM) has a connection to the virtual network.
  2. A desktop pool has been created with a dedicated desktop for the user.
  3. DNS is functioning properly – forward and reverse in the environment.
  4. SSL is configured correctly.
  5. The user is part of the proper group with appropriate permissions and entitlements.
  6. Networking on the virtual desktop is configured correctly.
  7. There are NO issues with the VM operating system (OS).
  8. Ports are configured properly for your network environment.
  9. PCoIP is configured as the primary remote display protocol and RDP is the secondary.
  10. Both display protocols are functioning properly.

Having confirmed connectivity, ports, protocols, and finally that the VMs are operational based on the above assumptions, where should we check next?

Jump into VMware View Administrator using your specific URL – https://<view-connection-server-FQDN>/admin. Once you log on, open Inventory|Desktops. In the Filter field, enter the name of the user’s assigned virtual desktop and determine whether that desktop is in use by someone other than the user. If everything checks out, open vCenter using the vSphere Client and select Home|VMs and Templates. Once again, locate the user’s assigned virtual desktop. Select the Console tab and determine whether you can see the MS Windows desktop or whether the screen has been locked by an administrator.

The most overlooked problem is that a console session counts as a logged-on user, so it must be logged out. When watching a virtual desktop reboot from the console, it is easy to forget that the session is still open. Disconnecting from the console or the VM desktop will eventually lock the current user, preventing someone from remotely connecting to their VMware View virtual desktop from that trip in Europe.
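If you would rather not click through every desktop, this particular gotcha can also be spotted programmatically: a VM’s runtime info exposes the number of active console (MKS) connections. A minimal pyVmomi sketch, with the vCenter name and credentials as hypothetical placeholders:

    # Flag desktops that still have an open console session.
    # numMksConnections counts active mouse/keyboard/screen (console) sessions.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    for vm in vms.view:
        if vm.runtime.numMksConnections > 0:
            print(f"{vm.name}: {vm.runtime.numMksConnections} open console session(s)")

    Disconnect(si)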

Certainly, after remedying this issue once, an experienced administrator would dive straight into vCenter to check the console for a particular VM desktop, knowing exactly what to look for. What it comes down to is learning how to troubleshoot the long way, so that you build a deep understanding of how all the technical components work together. That knowledge leads to more efficient troubleshooting and quicker resolution of issues in the future.

In the end it is about working smarter, not harder.

Photo Credit: VMware

The PCI Council Throws The Book At Virtualization Security

Filed under: Security, Virtualization, VMware

The adoption of virtualization has skyrocketed in the past few years. Companies have found tremendous cost savings when migrating to virtual environments: savings in capital expenditures, not to mention lower energy bills, and, most importantly, improvements in operational efficiency through business continuity, automation and the ability to meet service level agreements. This spike in interest has driven a 300% increase in the Virtualization Practice here at IDS in the past year.

Traditionally, as ease of use increases, security decreases. There is always a balancing act between security and ease of use. When organizations first began using virtualization I was immediately skeptical about how it would take off from a security perspective. There were too many questions surrounding how secure these environments would be over time or could be made over time.

Now that VMware and other big names have been running the virtualization game for some time, and virtualization has been adopted by so many different organizations, it has proved to be a solid technology and many security components have been built in and around these products. We’re just to the point now where (virtually) any hardware device can be virtualized to some extent. This is hugely important when dealing with security because we are now seeing the emergence of virtual intrusion prevention systems, firewalls and antivirus. Many of these technologies are still fairly new and really have not been truly stress tested in the field. So for all intents and purposes we hope these virtual security products will provide the same protection as their hardware counterparts.

As we in the IT field vet these virtual security appliances we also need guidance about how we can maintain compliance with the various regulatory mandates such as PCI, SOX and HIPAA.

When dealing with virtual environments, should they be treated differently than physical environments? If so, then how?

At the end of June the PCI Security Standards Council provided an update on how organizations should view their regulatory compliance as it pertains to PCI. It’s good to see this shift and it is a step towards providing clarified information about how we can continue to protect sensitive data.

The PCI DSS Virtualization Guidelines cover the various risks associated with virtualized environments, as well as recommendations for securing them. The document is geared toward organizations that have adopted, or are considering, virtualization in their cardholder data environment. While we still have a long road ahead in proving that we can provide the same level of security for virtual environments as we do in the physical world, this document is a step in the right direction toward showing how we might accomplish it.

Photo Credit: umjanedoan

Clearing The VMware vSphere 5 Licensing Fog

Filed under: Virtualization, VMware, vSphere

Last week, VMware announced the upcoming release of vSphere 5. While the release contains many exciting new features, it has been clouded with noise about the new licensing. VMware has decided to move to vRAM licensing and is removing the previous maximums on the amount of RAM and the number of CPU cores (see the VMware PDF for further detail).

I will be completely honest and transparent here and tell you that, yes, my initial reaction upon learning there would be a new licensing model was fear. However, after having time to digest the changes and ruminate on how they affect our customer base, I don’t necessarily feel the same way. I think it gives us as integrators an opportunity to guide our customers down the appropriate path for VMware licensing. I do believe, though, that the new licensing changes the financial value of memory overcommitment and high consolidation ratios, not their technical value.

The biggest concern I’ve heard from customers and peers is that licenses will cost much, much, much more for the new vSphere 5 than they did for the “old” vSphere 4. You can imagine innumerable scenarios in which the new licensing model looks far more expensive, which would discourage existing customers from upgrading to vSphere 5. This is false.

Just looking at our customer base, fewer than 5% would be affected by the new licensing model when upgrading. I encourage customers who feel they may be affected to re-evaluate how they are deploying virtual machines.

At IDS, the buzz has been around how important capacity planning tools like VMware’s CapacityIQ are, and how overallocation can cost an organization more money than it needs to spend. Utilizing capacity planning tools to scan your environment is important for reclaiming resources and seeing just where every dollar is spent.

To really get an understanding of how many vSphere 5 licenses you will need, sum the total amount of vRAM allocated across all powered-on VMs, then divide that total by the vRAM entitlement of the particular vSphere 5 edition you are running (a quick calculation sketch follows the list below). vSphere 5 licensing needs are determined by only three factors:

  1. Number of VMs.
  2. Amount of vRAM per VM.
  3. What vSphere 5 edition you are running. The entitlements for the different editions are available here.
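Here is that arithmetic as a small Python sketch. The VM sizes and the 32GB entitlement below are examples only; substitute your own allocation data and your edition’s actual entitlement. It also assumes the vSphere 5 rule that each physical CPU still needs its own license, with the vRAM entitlements pooled across them:

    # Estimate vSphere 5 licenses from pooled vRAM (example figures only).
    import math

    vm_vram_gb = [4, 4, 8, 8, 16, 2, 2, 6]   # vRAM of each powered-on VM
    entitlement_gb = 32                       # vRAM entitlement for your edition
    physical_cpus = 4                         # each socket needs a license too

    total_vram = sum(vm_vram_gb)
    by_vram = math.ceil(total_vram / entitlement_gb)
    licenses = max(physical_cpus, by_vram)    # must satisfy both constraints

    print(f"{total_vram} GB vRAM allocated -> {by_vram} licenses by vRAM, "
          f"{physical_cpus} by CPU count -> buy {licenses}")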

There are also many resources available to help customers with this transition. VMware has provided a tool that adds up the total amount of vRAM allocated to your VMs so that you can run this calculation, and there is an accompanying video explaining how to use the tool. In addition, we have noticed a few similar tools developed by the community.

Also keep in mind that if you are an existing customer with a valid support contract, you get a free upgrade of your CPU licenses to vSphere 5. Current vSphere 4 Advanced Edition customers will be upgraded to vSphere 5 Enterprise licenses, since there is no Advanced edition in vSphere 5.

vSphere 5 will cost you more when you go above 32GB of vRAM per CPU (or 48GB per CPU for Enterprise Plus). But even then, there are a number of things to keep in mind:

  • A new Standard license is $200 more expensive but now includes vMotion and Data Recovery compared to the old Standard license.
  • Old Advanced customers get a free upgrade to Enterprise which gives them Storage vMotion and DRS.
  • Don’t just compare physical RAM to vRAM but keep at least an 85% ratio.
  • This is the time to take a closer look at the difference between VM RAM usage and VM RAM assignment.
  • VMware View Premier bundled licensing will not be affected; it will still be based on concurrent desktop connections. This is extremely important to know, as we see many of our higher consolidation ratios in the desktop space.

Photo credit: jeffsmallwood

Deploying New Applications in the Private Cloud: Using Chargeback Cost Accounting?

Filed under: Cloud Computing, Virtualization, VMware

It’s exciting to see more and more organizations really buying into the idea of a utility computing data center (a “private cloud” to you marketing-speak aficionados). One of the issues regularly being raised when they do convert is how to properly show costing back to business units when they wish to deploy new applications.

I’m working with several organizations right now and examining how to move away from project-centric budgeting into more of a utility model—it’s not an easy thing to do, for a couple of reasons:

1. This is one place where you butt right up against some really old, institutionalized thinking: ‘We’ve always done it this way’. There’s an ingrained sense of ownership in the infrastructure a business unit acquires as part of an application deployment that the unit needs to serve the rest of the business. That sense of “my server, my storage”—unique and discrete pieces of hardware they could go look at in the data center—is a very tough issue to overcome.

2. The idea that charge-back billing is all but required when you implement these kinds of infrastructures, or no one will want to pay for anything going forward. This sometimes creates insurmountable cultural challenges when a CIO is unwilling to take on the other executives to re-work their cost models and how people are held accountable for the resources they use.

So how do you overcome this if you’re going to go 100% virtual, without just taking a “suck it up and deal with it” IT attitude? There’s no magic bullet to this and, like many things I deal with on a daily basis, there is a strong element of “well, it depends” that goes into it.

Where I like to start, though, is dispelling the idea that you can’t continue a project-based approach to your discussions with the business units. It’s not easy, and it requires tools in the environment (I’m particularly fond of vCenter CapacityIQ for these purposes). However, if you really understand how your CPU, RAM, network and storage are being used in the environment, you can correlate what a new application will use of each resource and charge the project just like you would if you were buying individual servers. Collect that as part of the process from the project, put it into the house account and buy infrastructure capacity when it is needed. Again, that requires the use of capacity planning tools and proactive management of the data center’s capacity.
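To make that concrete, here is a toy per-project costing model. Every rate in it is invented purely for illustration; in practice the usage numbers come from your capacity planning tool and the rates from your finance team:

    # Toy chargeback model: price a new application's slice of shared capacity.
    # All rates are hypothetical examples.
    RATES = {               # monthly cost per unit of each resource
        "vcpu": 25.00,      # $ per vCPU
        "ram_gb": 10.00,    # $ per GB of RAM
        "storage_gb": 0.50, # $ per GB of storage
        "net_mbps": 1.00,   # $ per Mbps of network
    }

    def monthly_charge(vcpu, ram_gb, storage_gb, net_mbps):
        usage = {"vcpu": vcpu, "ram_gb": ram_gb,
                 "storage_gb": storage_gb, "net_mbps": net_mbps}
        return sum(RATES[k] * v for k, v in usage.items())

    # A hypothetical new app: 8 vCPUs, 32 GB RAM, 500 GB disk, 50 Mbps.
    print(f"Budget this project at ${monthly_charge(8, 32, 500, 50):,.2f}/month")

Collect that figure from the project up front, put it in the house account, and buy capacity when the pool actually needs it.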

You can alternatively approach this physically, in the same manner as you did before. If an app requires X servers and Y terabytes of space, tell the project to budget for them, then buy them and add them to the virtual environment just as you would the physical. I see a lot of organizations doing this, and for most it seems to work okay. I’m not particularly fond of it, because I believe that if you’re going to get the most out of this type of infrastructure, you have to change how you manage and account for it, and this approach doesn’t really lend itself to that notion.

You obviously also have the option of implementing a full charge-back model. I’m a big fan of this: just as all business units share the cost of real estate, utilities, copying, etc., everyone can share the cost of this critical business tool. It’s best to ease into this rather than dropping what may be seen as a hammer on other organizations. It seems to work best when you begin by showing leadership how much resource each department uses and getting their buy-in on how best to attack the new cost model. This, again, requires tools. Charge-back reporting and CapacityIQ are champs at helping you dig into the details of who’s using what, and how, if properly deployed.

This is one of those things that you’re going to have to do eventually—might as well start now.

Photo credits: nmcil and Guillaume Brialon via Flickr

Life’s A Beach With Remote vSphere Management on the iPad

Filed under: View, Virtualization, VMware

Leaving for Bali? Vacationing in upper northwest Indiana? Just heading to Grandma’s for the weekend? Then this is the blog post for you!

As your virtual travel guide, here are the four things you need to manage your vSphere environment while on vacation (or take a vacation while managing your vSphere environment).

Before leaving the office, a few things need to be in place…

1. Make sure you’ve downloaded the latest vCMA virtual appliance from VMware Labs:
a) Head to: http://labs.vmware.com/flings/vcma.
b) Install into your infrastructure, and give the appliance an IP address.
2. You will need an iPad with 3G capabilities.
3. VPN connectivity to your private network.
a) Cisco AnyConnect Client for the iPad works great, as shown below:

[Screenshot: Cisco AnyConnect client on the iPad]

b) You can also use the native VPN ability of the iPad.
4. vSphere Client for the iPad.

Once you’ve gotten to your destination of choice, follow these steps to gain access to your vSphere environment:

1. Go to iPad Settings >>Apps >>vSphere Client.
a) Set the Web Server to the IP address of the vCMA appliance.
2. Establish VPN connectivity.
3. Launch the vSphere Client and log into vCenter, as seen on the initial login screen:

[Screenshot: vSphere Client for iPad login screen]

After logging in, you should see the summary screen of your vCenter environment:

[Screenshot: vCenter summary screen in the vSphere Client for iPad]

From the summary screen you can drill into your ESX servers and do the following:

• View ESX Server CPU, memory, disk & network load.
• View ESX Server Hardware summary and performance:

[Screenshot: ESX server hardware summary and performance]

• Inventory of the VMs on the server.
• From this page you can reboot your ESX Server or enter Maintenance Mode.

From the ESX server screen you can drill into the VM:

[Screenshot: virtual machine detail screen]

Within this screen you will be able to do the following:

• View VM CPU, memory and disk load.
• View the latest VM events.
• View & restore any snapshots associated with the VM.
• Start, Stop, Restart and Suspend the VM.

I’ve only tested this scenario from the beach, but I’m sure it works on the golf course too.

Photo Credit: skylerf
