All Posts By

IDS Engineer

To The Cloud! The Reality Behind The Buzzword

By | Cloud Computing, How To, Security, Virtualization | No Comments

I always chuckle when I think back to those Microsoft Windows Live commercials where they exclaim, “To the Cloud!” like they’re superheroes. In 2006-2007 the term “Cloud” was an overused buzzword with no settled meaning – at that time, it seemed like a lot of people were talking about cloud computing or putting things in the cloud, but no one could actually articulate what that meant in simple terms or how it would work.

A real understanding of cloud computing, and real documentation of it, probably didn’t come together in the technology community until mid-to-late 2008.

Today is a much different story. This year Gartner reported that:

nearly one third of organizations either already use or plan to use cloud or software-as-a-service (SaaS) offerings to augment their core business…

It is truly amazing to see how much this segment has matured in such a short period. We’re well past the buzzword stage and “The Cloud” is a reality. As the nature and meaning of traditional infrastructure changes, the way your organization approaches security needs to change with it.

Fundamentally, we cannot implement cloud security the same way we go about implementing traditional security. The biggest difference is that some of the infrastructure components and computational resources are owned and operated by an outside third party. That third party may also host multiple organizations together on a multi-tenant platform.
 
To break the buzzword down in terms of cloud + security, here are the three best steps to help you develop a cloud strategy and ensure that security is involved to minimize risk:
 
Get Involved
 
Security professionals should be involved early in the process of choosing a cloud vendor, with a focus on the CIA triad of information security: Confidentiality, Integrity and Availability.
 
Concerns about regulatory compliance, controls and service level agreements can be dealt with up front to quickly approve or disqualify vendors.
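
To make “quickly approve or disqualify” concrete, here is a minimal screening sketch. It is purely illustrative: the criteria, questions and verdict logic are hypothetical placeholders, not a real assessment framework, but it shows how a security team might turn CIA and compliance concerns into an up-front gate before a deeper evaluation.

```python
# Hypothetical vendor screening sketch; criteria and questions are placeholders.
CRITERIA = {
    "confidentiality": ["encrypts data at rest", "encrypts data in transit"],
    "integrity":       ["documented change control", "tamper-evident logging"],
    "availability":    ["99.9%+ uptime SLA", "documented DR plan"],
    "compliance":      ["independent audit reports", "supports our regulations"],
}

def screen_vendor(name, answers):
    """answers maps each question to True/False, taken from the vendor's RFP response."""
    missing = [q for qs in CRITERIA.values() for q in qs if not answers.get(q)]
    verdict = "shortlist" if not missing else "disqualify or follow up"
    print(f"{name}: {verdict}")
    for q in missing:
        print(f"  - missing: {q}")

screen_vendor("Vendor A", {q: True for qs in CRITERIA.values() for q in qs})
screen_vendor("Vendor B", {"encrypts data in transit": True, "99.9%+ uptime SLA": True})
```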
 
It’s Still Your Data
 
You know what is best for your company and understand how policies and regulations affect your business. It’s not reasonable to expect your provider to fully understand how your business should be governed. You are ultimately responsible for protecting your data and for ensuring that your provider can implement the necessary security measures.
 
Continuously Assess Risk
 
It’s important to identify the data that will be migrated. Does it make sense to migrate credit card data, sensitive information or personally identifiable information? If so, what measures will you put in place to ensure that this information continues to be protected once you migrate it to the cloud? How will you manage this data differently? What metrics around security controls will you use to report to audit and compliance?
 
These questions, plus many more, will help you assess where your risk is. As each question is answered, document the answer in your policies and procedures going forward.
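
As one concrete example of that first question, identifying what sensitive data would actually move, here is a rough sketch that walks a directory tree and flags files containing credit-card-like numbers (a regex match plus a Luhn checksum). A real discovery effort would use a proper DLP tool, and the path at the bottom is a made-up placeholder, but the idea is the same.

```python
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(candidate: str) -> bool:
    """Luhn checksum to weed out random digit strings that merely look like card numbers."""
    digits = [int(c) for c in candidate if c.isdigit()]
    if not 13 <= len(digits) <= 16:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan_tree(root: str) -> None:
    """Report files that appear to contain card numbers before they get migrated."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            hits = [m for m in CARD_PATTERN.findall(text) if luhn_ok(m)]
            if hits:
                print(f"{path}: {len(hits)} possible card number(s)")

scan_tree("/data/to-migrate")   # hypothetical path slated for cloud migration
```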
 
Photo Credit: fifikins

vSphere 5 Storage: Yet Another Reason To Upgrade …

By | VMware, vSphere | No Comments

vSphere Version 4 had an absurdly low limit for iSCSI and Fibre Channel datastores: 2TB (minus 512 bytes).

Why do I say absurdly low?  

2TB isn’t that much these days, especially when running SQL, Oracle and Exchange servers, and file servers are almost always beyond the 2TB limit. These limitations forced many companies to keep running those servers on physical hardware, which means greater cost, lower utilization … the list goes on and on.

Along comes vSphere 5, and I am confident in my assertion that it is the most well-rounded and well-thought-out version yet. LUNs can now be up to 64TB in size! File sizes are still limited to 2TB (minus 512 bytes), but when using raw device mapping, as you normally would for such a large database, you can present a physical RDM of up to 64TB.

Not to mention, VMFS-5 has a number of new space saving features such as smaller sub blocks and small file support (1KB or less).  
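
If you want to see how close your existing datastores already are to the old 2TB ceiling before planning the upgrade, a quick inventory script helps. Here is a rough sketch using the pyVmomi SDK; the vCenter hostname and credentials are placeholders, and error handling is left out for brevity.

```python
# Rough inventory sketch using pyVmomi; host and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

TWO_TB = 2 * 1024**4  # the old per-LUN ceiling, in bytes

ctx = ssl._create_unverified_context()            # lab use only; skips cert checks
si = SmartConnect(host="vcenter.example.local",   # hypothetical vCenter
                  user="administrator", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        cap_tb = ds.summary.capacity / 1024**4
        flag = "  <-- at or above the old 2TB limit" if ds.summary.capacity >= TWO_TB else ""
        print(f"{ds.summary.name}: {cap_tb:.2f} TB ({ds.summary.type}){flag}")
finally:
    Disconnect(si)
```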

What does this mean?

I think it means we have finally eliminated the last reason to keep any server physical no matter what it is or does.

Photo Credit: jamiesrabbits

How To: VMware High Availability for Blade Chassis

By | Cisco, Virtualization, VMware | No Comments

VMware High Availability (HA) is a great feature that allows guest virtual machines in a cluster to survive a host failure. Some quick background: a cluster is a group of hosts that work together harmoniously and operate as a single unit, and a host is a physical machine running a hypervisor such as ESX.

So, what does HA do? If a host in the cluster fails, all of the virtual machines running on it go down with it. HA will power up those guests on another host in the cluster, which can reduce downtime significantly – especially if your datacenter is 30 minutes from your house at 2am. You can continue to sleep and address the host failure in the morning. Sounds great, so what’s the catch?

The catch is in how HA configures itself in the cluster. The first 5 hosts in a cluster are primary nodes and all the other hosts are secondary nodes. A primary node synchronizes the settings and status of all hosts in the cluster with the other primary nodes. A secondary node basically reports its status to the primary nodes. Secondary nodes can be promoted to primary nodes, but only under specific circumstances, such as putting a host into maintenance mode or disconnecting a node from the cluster. HA only needs one primary node to function. I don’t see a catch here…?

The catch comes into the use of a blade center. Suppose you have Chassis A and Chassis B:

We bought two blade chassis for redundancy: redundant power, switches, electricity, and cluster hosts spread across both. If one chassis fails, the other one has plenty of resources. Fully redundant! Maybe. If I were to add my first 5 hosts to my cluster from chassis A, then all of my primary nodes would be on chassis A. If chassis A fails, NO guests from the failed hosts will be powered up on chassis B. Why? All chassis B hosts are secondary nodes and HA requires at least 1 primary! It’s 2am and now you’re half asleep, driving to the datacenter despite all the redundancy.

To avoid this issue, when adding hosts to a cluster, alternate between chassis.
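
A simple way to enforce that is to interleave the host list before you start adding anything. The sketch below is illustrative only (plain Python with hypothetical host names, not a vSphere API call); the point is just that the first five hosts added, which become the HA primary nodes, end up split across both chassis.

```python
from itertools import chain, zip_longest

# Hypothetical host names in each chassis.
chassis_a = ["esx-a1", "esx-a2", "esx-a3", "esx-a4"]
chassis_b = ["esx-b1", "esx-b2", "esx-b3", "esx-b4"]

# Interleave the chassis so the first 5 hosts added (the primary nodes) span both.
add_order = [h for h in chain.from_iterable(zip_longest(chassis_a, chassis_b)) if h]

for host in add_order:
    print(f"add to cluster: {host}")

print("first five hosts (HA primary nodes):", add_order[:5])
```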

Top 3 Security Resolutions For 2012: Moving Forward From “The Year Of The Breach”

By | Backup, Data Loss Prevention, Disaster Recovery, How To, Security | No Comments

I always feel a sense of renewal with the turn of the calendar. Many people use this time to set goals for the new year and take the opportunity to get re-grounded and move toward accomplishing them. As I reflect on the security landscape in 2011, aptly named “The Year of the Breach,” I thought it would be a perfect time to make some resolutions for 2012 that everyone with any data to protect could benefit from.

 

1. Focus More on Security and Not Just on Compliance

On a day-to-day basis I speak to a wide range of companies, and I often see organizations so concerned about checking the box for compliance that they lose sight of actually minimizing risk and protecting data. Regardless of the regulation in the long list of alphabet soup (SOX, GLBA, PCI, HIPAA), maintaining compliance is a daunting task.
 
As a security practitioner, limiting the business’s exposure has always been my key concern. How can I enable the business while also minimizing risk? With this mindset, compliance helps ensure that I am doing my due diligence and that all of my documentation is in order to prove it, keeping our customers and stakeholders happy and protected.
 
2. Ready Yourself for Mobile Device Explosion
 
The iPad is a pretty cool device. I’m no Apple fanboy by any stretch, but this tablet perfectly bridges the gap between my smartphone and my laptop. I am not the only one seeing these devices become more prevalent in the workforce, either. People are using them to take notes in meetings and to give presentations, yet users are not driving the business to formally support these devices. Instead, many organizations are simply allowing their employees to purchase their own devices and use them on corporate networks.
 
If employees can work remotely and be happier and more efficient with these devices, security admins can’t and shouldn’t stand in the way. We must focus on protecting these endpoints to ensure they don’t get infected with malware. We’ve also got to protect the data on these devices to ensure that corporate data isn’t misused or stolen when it’s spread across so many varieties of devices.
 
3. Play Offense, Not Defense
 
I’ve worked in IT security for a long time and, unfortunately, along the way I’ve seen and heard a lot of things that I wish I hadn’t. Yet I can’t afford to have my head in the sand regarding security. I need to have my finger on the pulse of the organization and understand what’s happening in the business. It’s important that I also understand how data is being used and why. Once this happens, I am able to put controls in place and be in a better position to recognize when something is abnormal. With the prevalence of botnets and other malware, it is taking organizations 4-16 weeks to even realize they have been compromised. Once this surfaces, they have to play catch-up to assess the damage, clean the infection and plug the holes that were found. Breaches can be stopped before they start if the company and its security admins are adamant about playing offense.
 
These are my top three resolutions to focus on for 2012 – what’s on your list? I invite you to share your security resolutions in the comment section below; I’d love to know what your organization is focused on!
 
Photo Credit: simplyla
 
 

The Shifting IT Workforce Paradigm Part II: Why Don’t YOU Know How “IT” Works In Your Organization?

By | Backup, Cloud Computing, How To, Networking, Security | No Comments

When I write about CIOs taking an increasingly business-oriented stance in their jobs, I sometimes forget that without a team of people who are both willing and able to do the same, their ability to get out of the data center and into the boardroom is drastically hampered.

I work with a company from time to time that embodies for me the “nirvana state” of IT: they know how to increase revenue for the organization. They do this while still maintaining focus on IT’s other two jobs — avoiding risk and reducing cost. How do they accomplish this? They know how their business works, and they know how their business uses their applications. The guys in this IT shop can tell you precisely how many IOPS any type of end-customer business transaction will create. They know that if they can do something with their network, their code, and/or their gear that provides an additional I/O or CPU tick back to the applications, they can serve X number of transactions and that translates into Y dollars in revenue, and if they can do that without buying anything, it creates P profit.
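
To make that way of thinking concrete, here is a back-of-envelope sketch. Every figure in it is invented for illustration; the point is simply the chain this team can recite for their own systems, from freed-up I/O, to extra transactions, to dollars.

```python
# All figures below are made up for illustration.
iops_per_transaction = 12        # storage I/Os one end-customer transaction costs
freed_iops = 3_000               # headroom recovered by tuning, with no new hardware
revenue_per_transaction = 0.45   # dollars earned per transaction

extra_tps = freed_iops / iops_per_transaction
extra_revenue_per_hour = extra_tps * 3600 * revenue_per_transaction

print(f"extra transactions per second: {extra_tps:.0f}")
print(f"extra revenue per hour:        ${extra_revenue_per_hour:,.2f}")
```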

The guys I work with aren’t the CIO, although I do have relationships with the COO, VP of IT, etc. To clarify, these aren’t business analysts who crossed over into IT from the business to provide this insight. These are the developers, infrastructure guys, security specialists, etc. At this point, I think if I asked the administrative assistant who greets me at the door on every visit, she’d be able to tell me how her job translates into the business process and how it affects revenue.

Some might say that since this particular company is a custom development shop, this should be easy: they have to know the business processes in order to write the code that drives them. Yes and no. I think most people who make that statement haven’t closely examined the developers coming out of college these days. I have a number of nieces, nephews, and children of close friends who are all going into IT, and let me tell you, the material in the development classes these kids are taking isn’t about optimizing code for a business process, and it isn’t about the utility of IT.

It’s more about teaching a foreign language than teaching the ‘why you do this’ of things. You’re not getting this kind of thinking and thought-provoking behavior out of the current generation of graduates by default. It comes from caring. In my estimation it comes from those at the top giving people enough latitude to make intelligent decisions and demanding that they understand what the company is doing and, more importantly, where it wants to be.

They set goals, clarify those goals, and make it clear that everyone in the organization can and does play a role in achieving them. These guys don’t go to work every day wondering why they are sitting in the cubicle, behind the desk, with eight different colored lists on their whiteboard around a couple of intricately complicated diagrams depicting a new code release. They aren’t cogs in a machine, and they’re not made to feel as though they are. If you want to be a cog, you don’t fit in this org. Pretty simple. That’s the impression I get of them, anyway.

The other important piece of this is that they don’t trust their vendors. That’s probably the wrong way to say it. It’s more about questioning everything from their partners, taking nothing for granted, and demanding that their vendors explain how everything works so they understand how they plug into it and then take advantage of it. They don’t buy technology for the sake of buying technology. If older gear works, they keep the older gear, but they understand the ceiling of that gear, and before they hit it, they acquire new. They don’t always buy the cheapest, but they buy the gear that will drive greater profitability for the business.

That’s how IT should be buying: not with a cost matrix of four different vendors all fighting to be the apple the others are compared to, but rather by asking which solution will help the business be more profitable because it can drive more customer transactions through the system. Of course, 99% of the organizations I deal with couldn’t tell you what the cost of a business transaction is. Probably 90% of them couldn’t tell you what a business transaction in their organization even looks like.

These guys aren’t perfect; they have holes. They are probably staffed too thin to reach peak efficiency, they could take advantage of some newer technologies to be more effective, and they could probably use a little more process in a few areas. But at the end of the day, they get it. They get that IT matters, they get that information is the linchpin of their business, and they get that if the people who work in the organization care, the organization is better. They understand that their business is unique and that they have a limited ability to stay ahead of the much larger companies in their field; thus they must innovate, but never stray too far from their foundation, or the core business will suffer.

It’s refreshing to work with a company like this. I wish there were more stories like this organization and that the trade rags would highlight them more prominently. They deserve a lot of credit for how they operate and what they look to IT to do for them.

Even though I can’t name them here I’ll just say good job guys, keep it up, and thanks for working with us.

Photo Credit: comedy_nose

 

Following “The Year of the Breach” IT Security Spending Is On The Rise

By | Backup, Data Loss Prevention, Disaster Recovery, RSA, Security, Virtualization | No Comments

In IT circles, the year 2011 is now known as “The Year of the Breach”. Major companies such as RSA, Sony, Epsilon, PBS and Citigroup experienced serious, high-profile attacks, which raises the question: if major players like these huge multi-million dollar companies are being breached, what does that mean for my company? How can I take adequate precautions to ensure that I’m protecting my organization’s data?

If you’ve asked yourself these questions, you’re in good company. A recent study released by TheInfoPro states that:

37% of information security professionals are planning to increase their security spending in 2012.

In light of the recent security breaches, as well as the increased prevalence of mobile devices within the workplace, IT security is currently top of mind for many organizations. In fact, at most of the companies IDS is working with, I’m also seeing executives take more of an interest in IT security. CEOs and CIOs are gaining a better understanding of technology and of what is necessary to improve the company’s security posture going forward. This is a huge win for security practitioners and administrators because they are now able to get the top-level buy-in needed to make important investments in infrastructure. IT security is fast becoming part of the conversation when making business decisions.
 
I expect IT infrastructure to keep changing rapidly as virtualization continues to grow and cloud-based infrastructures become more mature. We’re also dealing with an increasingly mobile workforce where employees are using their own laptops, smartphones and tablets instead of those issued by the company. Protection of these assets becomes even more important as compliance regulations become increasingly strict and true enforcement begins.
 
Some of the technologies that grew in 2011, and which I foresee growing further in 2012, include Data Loss Prevention, application-aware firewalls, and enterprise Governance, Risk and Compliance. Each of these technologies focuses on protecting sensitive information and ensuring that only authorized individuals use it, and use it responsibly. Moving into 2012, my security crystal ball tells me that everyone, from the top level down, will increase not only their security spend but, more importantly, their awareness of IT security and of just how much their organization’s data is worth protecting.
 
Photo Credit: Don Hankins
 

The Shifting IT Workforce Paradigm: From Sys Admin to Capacity Planners

By | Cloud Computing, EMC, Virtualization | No Comments

We talk about a lot of paradigm shifts in IT: the shift to a converged network, the shift to virtualization, etc. There is a more important shift happening, however, that we aren’t talking about nearly enough: the absolutely necessary shift in the people who make up our IT workforce.

The IT field as a whole is at, or soon approaching, one of those critical points in our skill curve where today’s critical skills are going to be all but obsolete. It is similar to the sudden onset of open systems once the mainframe’s dominance of the datacenter ended. We had no one who knew how to operate and tune the new systems, and that kept the adoption curve somewhat slow until it was resolved through re-training and an influx of workers who had been exposed to UNIX through their college education.

We’re at that stage again, or pretty near approaching it. The concept of the “private cloud” is going to stall soon, I believe, not because the technology doesn’t work, and not because it isn’t useful, but because we don’t have people in IT who are trained to deal with it. Let’s be very clear – this isn’t a tools issue. I’ve written about the tools problem we have with private cloud in the past, but this is different. This issue is actually much harder to resolve because it isn’t as simple as taking an employee who is used to CICS commands and teaching them Solaris commands to use instead. This requires a different mindset, a different way of thinking about IT and a realization that the value of the IT worker is not in how well they can script a complex set of commands, but in harnessing the power of the information they ultimately control.

“Private Cloud” is not about a technology. It is about creating an agile utility the business can use any way they need anytime they want. It is about getting out of the business of clicking Agree, Next, Next, Next, Next, Finish and getting into the business of strategic capacity management and information analytics. This involves skills most IT people either don’t have or aren’t allowed to use, because they are currently machine managers, rack and stack specialists, and uptime wizards. These new skills require less mechanical action and more interaction with the business. We need to shift from being simply systems administrators to capacity planners (and more).

I’ve been a capacity planner and a systems engineer in IT departments. They’re different jobs, entail different ways of thinking, require different levels of interaction with the business, and don’t have a lot of crossover in skill sets other than a fundamental knowledge of how systems work. I’ve talked to several customers and prospects about this, and they all seem to recognize a skills train is headed toward them, but they don’t have any idea how big the train is, what direction it’s traveling, whether it left Chicago or Philadelphia, or how to get on it.
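
For anyone wondering what the capacity-planner mindset looks like next to day-to-day administration, here is a deliberately tiny sketch with made-up numbers: project utilization growth forward and report how long until the current gear hits its ceiling, which is the question the business actually cares about.

```python
# Toy capacity projection; the utilization, growth and ceiling figures are made up.
current_util_pct = 62.0      # e.g. storage or CPU utilization today
monthly_growth_pct = 3.5     # observed compound growth per month
ceiling_pct = 85.0           # point at which new capacity must already be online

months = 0
util = current_util_pct
while util < ceiling_pct and months < 120:
    util *= 1 + monthly_growth_pct / 100
    months += 1

print(f"roughly {months} months until {ceiling_pct:.0f}% utilization is reached")
```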

There are a few folks out there who seem to realize what is happening and they’re trying to get in front of it. Although EMC is wrapping the concept all around their over-hyped, buzz-centric use of the word “cloud”, they are offering some new courses within their Proven Professional program that seem to grasp the shift. I’ve seen a few seminar fliers come through my mail that might hit the mark. The problem is they’re all skimming the surface. We need some fundamental changes at the University level and perhaps a change away from technology focus in the whole certification thinking to accelerate the paradigm shift.

I’m interested in comments here. Is your organization training you to be the most useful asset you can be to the business in this shift or are they taking the new technology and keeping you in your same role? Are there new educational opportunities I’m not seeing in other parts of the country to help us move from system administrators to business capacity analysts?

Let me know.

Photo Credit: BiblioArchives/LibraryArchives

Internet Running Out of IP Addresses?! Fear Not, IPv6 Here to Save the Day

By | Backup, How To, Networking | No Comments

As everyone may (or may not) be aware, we are running out of IP version 4 addresses. Okay, not really, but almost all of them have been handed out to service providers to pass on to customers, and at that point they will eventually run out. Fear not: this doesn’t mean the internet will come to a screeching halt. It only means that it will be time to move on to the next iteration of networking, called IP version 6 (IPv6 for short). Most of the rest of the world is already running it to a high degree.

With this post, I’m going to take some time to lift the veil on IPv6. Every time I mention it to anyone, be it a customer, an old coworker, or a longtime networker, it draws a sense of fear. Don’t be afraid of IPv6, people! It’s not as scary as it seems.

Let’s start with a quick comparison. Currently, there are approximately 4.3 billion IPv4 addresses under the current 32-bit scheme. That’s less than one for every person in the world! Think about how many you are using right now. Here’s me:

  1. Cell phone
  2. Air card
  3. Home network
  4. Work computer
  5. TV

We’ve gotten around the limitation by using something called Port Address Translation (PAT). PAT should really be called “PATCH,” because we are patching the IPv4 network due to a gross underestimate of the growth of the internet. PAT normally occurs on a firewall: we use one public IP address to represent the outgoing/incoming traffic of our entire network. That is why we have RFC 1918 addresses (10/8, 192.168… and so on). Those addresses needed to be reserved so that they could hide behind a public IP address, and therefore every company could have as many IP addresses as it needed. Because of the reserved address space, the publicly available IPv4 addresses number roughly 3.2 billion. That’s less than one for every two people!

Theoretically, a single PAT IP could represent over 65,000 clients (you may see flames begin to shoot out of your firewall). So, what are the drawbacks? For one, troubleshooting connection issues becomes much more difficult. Also, setting firewall rules becomes harder and can result in connectivity issues. Plus, the idea of end-to-end connectivity is thrown out the door, since it truly no longer exists at that point. Lastly, as translations occur, you place higher and higher loads on the firewall, cycles that could otherwise go toward improving latency and throughput. PAT’s time is through! Thanks, but good riddance!

IPv6 uses 128-bit addressing. That’s about 340,000,000,000,000,000,000,000,000,000,000,000,000 addresses in total, or roughly 48,000,000,000,000,000,000,000,000,000 for every person on earth. For a comparison in binary:

IPv4: 10101010101010101010101010101010

IPv6: 1010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010101010
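
If you’d rather let the interpreter do the counting, the comparison above boils down to 2^32 versus 2^128. A tiny snippet (the population figure is a rough 2011 estimate):

```python
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
world_population = 7_000_000_000   # rough 2011 estimate

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:,}")
print(f"IPv6 addresses per person: {ipv6_total // world_population:,}")
```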

Luckily, IPv6 addresses are represented in hexadecimal. Though the binary number above looks painful and overwhelming, a single IPv6 address on your network can be as simple as this:

2002::1/64
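
If you want to see what that shorthand expands to, Python’s standard-library ipaddress module will spell it out for you (this is just a quick illustration of the notation, not part of any configuration):

```python
import ipaddress

iface = ipaddress.ip_interface("2002::1/64")

print(iface.ip.exploded)            # 2002:0000:0000:0000:0000:0000:0000:0001
print(iface.ip.compressed)          # 2002::1
print(iface.network)                # 2002::/64
print(iface.network.num_addresses)  # 18446744073709551616 addresses in that one /64
```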

That’s not so bad, is it? In a follow-up post, I will demystify the IPv6 addressing scheme.

For up to date IPv6  statistics and IPv4 exhaustion dates around the world, look here:  http://www.apnic.net/community/ipv6-program

Photo credit: carlospons via Flickr

Sick Over Gateway Redundancy? Cisco’s Got A Solution For That …

By | Cisco, How To, Networking | No Comments

A testament to the ever-adapting pioneers that they are, Cisco developed the first gateway redundancy protocol: Hot Standby Router Protocol (HSRP). HSRP allows the default gateway to fail over to another router, based on a priority that can rise or fall contingent upon interface tracking.

The Internet Engineering Task Force (IETF) later created a nearly identical standard: Virtual Router Redundancy Protocol (VRRP), defined in RFC 2338. The only real differentiator is the terminology. If you have non-Cisco routers, or are pairing a Cisco router with another vendor’s, then you are using VRRP.

Here is an example of the old days:

[iframe src="http://www.integrateddatastorage.com/wp-content/uploads/2011/10/Nick-Blog-Pic-12222.jpg" width="535" height="525"]

 

Next in the long line of gateway redundancy protocols came HSRP, which allows for failover of the default gateway. The only way to load balance was by creating two different HSRP groups – Multiple HSRP (MHSRP) – using different IP addresses for the default gateways. Hence you would have to configure Dynamic Host Configuration Protocol (DHCP) pools that hand out two separate gateway addresses for the SAME IP range. Sounds painful, right?

Let’s look at general HSRP operation. For example, you could have Router 1 and Router 2 running HSRP, both tracking their WAN links. Below is normal HSRP operation: the router on the left is actively forwarding traffic as the default gateway, and the one on the right is waiting for it to fail or lose its WAN link. Notice that the standby router is doing absolutely nothing, aside from looking pretty.

[iframe src="http://www.integrateddatastorage.com/wp-content/uploads/2011/10/Nick-Blog-Pic-2-Rev.jpg" width="605" height="450"]

 

Now, the WAN link fails and the other router takes over.

[iframe src="http://www.integrateddatastorage.com/wp-content/uploads/2011/10/Nick-Blog-Pic-3-Final.jpg" width="605" height="440"]

 

When the link goes down, the other router takes over forwarding traffic. It is a time-tested strategy, but if you have two routers, why not utilize both?

Introducing another Cisco first: Gateway Load Balancing Protocol (GLBP). GLBP introduces two router roles:

  1. The Active Virtual Gateway (AVG): responsible for handing out virtual Media Access Control (MAC) addresses to the other routers as well as responding to clients’ Address Resolution Protocol (ARP) requests.
  2. The Active Virtual Forwarder (AVF): responsible for forwarding traffic sent to the virtual MAC address it has been assigned.

The AVG generally hands out the virtual MAC addresses in a round-robin fashion (though there are other choices). Some clients get the MAC for Router 1 and some get the MAC for Router 2, yet every client is configured with the same ONE gateway IP address.

[iframe src="http://www.integrateddatastorage.com/wp-content/uploads/2011/10/Nick-Blog-Fourth.jpg" width="605" height="490"]

 

Normal Operation:

[iframe src="http://www.integrateddatastorage.com/wp-content/uploads/2011/10/Nick-Blog-Pic-5.jpg" width="625" height="525"]

 

Now, I’m sure you are wondering what happens on a link failure or router loss.

Since there are only two routers in these scenarios, the remaining router (the AVG) would take over forwarding for the failed router’s virtual MAC address, making the failover absolutely seamless. The router on the right would lose its link and report that it is no longer able to forward traffic. Okay, it might be a little more complicated than that, but you get the gist.

[iframe src="http://www.integrateddatastorage.com/wp-content/uploads/2011/10/Nick-blog-Pic-6-Again.jpg" width="625" height="525"]

 

GLBP is a great solution for load balancing, and it offers your users seamless failover of their default gateway upon the failure of a router.
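
GLBP itself is Cisco router configuration rather than something you would script, but the behavior is easy to simulate. The sketch below uses made-up MAC addresses and client names: the AVG answers ARP requests for the single gateway IP round-robin with different virtual MACs, and when one forwarder fails, the survivor takes over its MAC so clients never notice.

```python
from itertools import cycle

# Hypothetical GLBP group: one gateway IP, one virtual MAC per forwarder (AVF).
GATEWAY_IP = "10.1.1.1"
virtual_macs = ["0007.b400.0101", "0007.b400.0102"]                 # made-up MACs
owner = {"0007.b400.0101": "router1", "0007.b400.0102": "router2"}  # which router forwards each MAC

mac_rotation = cycle(virtual_macs)

def arp_reply(client: str) -> str:
    """The AVG answers every ARP for the gateway IP, rotating through virtual MACs."""
    mac = next(mac_rotation)
    print(f"{client} ARPs {GATEWAY_IP} -> told to use {mac} (forwarded by {owner[mac]})")
    return mac

client_macs = {c: arp_reply(c) for c in ["pc1", "pc2", "pc3", "pc4"]}

# router2's WAN link fails: the surviving router takes over router2's virtual MAC.
for mac, router in owner.items():
    if router == "router2":
        owner[mac] = "router1"

for client, mac in client_macs.items():
    print(f"{client} still sends to {mac}, now forwarded by {owner[mac]}")
```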

Perhaps the IETF will make this a standard too!

Photo Credit: DominiqueGodbout

Don’t Get Hung Out To Dry With The HCL: There’s OneCommand Manager for VMware vCenter …

By | Cisco, How To, View, VMware, vSphere | No Comments

Is nothing sacred?

As the professionally paranoid, we know all too well that we cannot take anything for granted when deploying a new solution.

However, one list that has long gone unscrutinized by the typical IT professional is the published VMware Hardware Compatibility List (HCL). A friend of mine in the IT space recently went through the less-than-pleasant experience of having the beloved HCL fail him, resulting in the worst kind of IT issue: intermittent, complete outages of his VMware hosts. He was hung – no vMotion – the only course of action being to reboot the ESXi host and pray the VMs survived.

With weeks between host outages, the problem was almost impossible to pinpoint. Through detailed troubleshooting, the breadcrumbs eventually led to the 10Gb QLogic single-port converged network adapter (CNA). You’ll be as surprised as my friend was to find that this particular card is well documented as “supported” on VMware’s HCL.

Yes! Betrayed by the HCL! Making matters worse is the fact that the card is also fully supported by HP in his new DL385 G7 servers, as well as by the Cisco Nexus switch into which it was plugged. While QLogic is a well-established player in the HBA/CNA space, their email-only support did not live up to the QLogic reputation. My friend and his entire team spent countless hours working on the issue with minimal to no help from QLogic.

Backed into a corner, they decided to take a chance on Emulex, another formidable player in the market, and its OCe11102-FX converged adapters. Issues did arise again, but not stability issues: CIM functionality issues. Unlike their competition, Emulex stepped up to the plate and served up a home run. They took the time to recreate the issue in their lab and boiled it down to the installation order of the CIM software.

OneCommand Manager for VMware vCenter was then installed. Once the Emulex CIM was installed prior to the HP CIM, my friend finally achieved sustained stability and solid CIM functionality. Some lessons that were learned, or reinforced, by this experience:

  1. Make certain the hardware you are looking to invest in is on the VMware HCL.
  2. Google the specific hardware for reviews and/or comments on the VMware support forums.
  3. Verify that the hardware vendor you select offers phone AND email support – not just email support.

Photo Credit: gemtek1
