Category: VMware


Dude, Where’s My Server!?!

By | Azure, Backup, Cloud Computing, Design & Architecture, Infrastructure, Storage, Virtualization, VMware | No Comments

Hello Cloud, Goodbye Constant Configuration

I have to admit that when I log into a Linux box and realize that I have some technical chops left, I get a deep feeling of satisfaction. I am also in the habit of spinning up a Windows Server in order to test network routes/ACLs in the cloud since I like using the Windows version of tools like Wireshark. Despite my love for being logged into a server, I do see the writing on the wall. Logging into a server to do installs or make configuration changes is fast becoming a thing of the past. Given the number of mistakes we humans make, it’s probably about time.

EMC Takes Control of VCE

By | Cloud Computing, EMC, Strategy, VMware | No Comments

EMC recently announced that they were buying out most of Cisco’s interest in VCE, with Cisco retaining only a 10 percent stake in the company. VCE stated that they would keep their mission intact and continue to create new solutions using their industry-leading Vblock Systems. EMC has also made headlines lately for being nominated as one of the “World’s Best Multinational Workplaces,” and for speculation that they may be planning a reorg, which may include the formation of a new cloud business unit.

What Does The EMC Transition Mean for VCE?

While opinions always vary across an industry, many analysts maintain that the VCE transition toward becoming an EMC business is an entirely natural one and will probably accelerate its growth. In the cio-today.com article “EMC Buys Cisco’s Stake in VCE, Eyeing Hybrid Cloud Potential,” analyst Zeus Kerravala from ZK Research explained that joint ventures are only meant to last for a certain period of time.

Kerravala said, “If VCE is going to earn billions more, they are obviously going to have to find a way of growing beyond organic growth. That will probably be through mergers and acquisitions or a change of channel strategy, and it’s going to require making faster decisions.” He went on to say that since there will now be streamlined decision making under EMC, he believes it’s a good move for VCE.

Our Take on the VCE Transition to EMC

With a big industry move like this one, we wanted to talk to IDS Chief Technology Officer Justin Mescher and get his take on the VCE transition. Mescher explained that the move may simply confirm long-standing marketplace suspicions.

He said, “Ever since VMware acquired Nicira in 2012 and created their own software-defined networking stack, speculation has been swirling that EMC, VMware, and Cisco would start to grow further apart. While this move seems to confirm the rumors, I think it will be a positive move overall.”

Mescher went on to explain that VCE’s biggest value has been bringing fully validated and pre-integrated systems to customers to accelerate time to value, reduce risk and increase efficiency, and that mantra of the offerings shouldn’t change.

He explained that it will be interesting to see how the recent EMC restructuring to create a Cloud Management and Orchestration group will impact this acquisition. EMC has proclaimed that this new business unit will focus on helping customers work in both the private and public cloud independently of the technology running underneath it. This will include EMC’s “software-defined” portfolio as well as some of their new acquisitions targeted at cloud enablement and migration.

Concluding his thoughts, Mescher said,“Could EMC take the framework and concept that VCE made successful and start to loosen some of the vendor-specific requirements? While this would certainly not be typical of EMC, if they are serious about shifting from a hardware company to focusing on the software-defined Data Center, what more impactful place to start?”

About VCE

VCE was started in 2009 as a joint venture between three of the top IT industry companies, EMC, Cisco and VMware, in an effort to provide customers integrated product solutions through a single entity. In 2010 VCE introduced their Vblock Systems, which provided a new approach to optimizing technology solutions for cloud computing. Since then they have continued to grow their customer portfolio, improve their solutions and remain a leader in the industry. See the complete VCE history.

VMware DRS Revisited

By | VMware | No Comments

Recently, I was contacted by a customer who was concerned about DRS (Distributed Resource Scheduler) and wanted a health-check blessing that DRS in their environment was functioning properly. As any committed Engineer/Administrator would, I began to troubleshoot the ESXi hosts, vCenter and associated settings accordingly.

This is what we were seeing:

[Screenshot: DRS resource distribution]

And this:

[Screenshot: DRS cluster view]

As you can see, there is not a true balance at this moment in time. Additionally, even if usage were to spike further, DRS wouldn’t necessarily migrate VMs to balance the load. DRS is not a true load balancer per se, as most would think. Yes, in times of contention it will suggest, or perform for you, the migrations of whichever VMs it deems necessary to alleviate any threat of a failure.

A true load balancer would make sure load is balanced across all participating resources. DRS’s function, however, is to keep resources available to all VMs so that none of them fail. Additionally, if resource pools are used, DRS complements those pools by distributing resources within a given cluster accordingly.

DRS also has some advanced option settings that can be configured to further control the algorithm’s actions. These can be found in the Best Practices whitepaper on VMware’s website, located HERE. The CPU scheduler changed in vSphere 5.1; more information can be found HERE.
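If you would rather script this kind of health check than eyeball screenshots, the vSphere API exposes each cluster’s DRS configuration. Below is a minimal sketch using pyVmomi (VMware’s Python SDK); the vCenter hostname and credentials are placeholders, and certificate validation is skipped only because this is a lab-style example.

[framed_box]
# Minimal pyVmomi sketch: list each cluster's DRS settings (hostname and credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        drs = cluster.configuration.drsConfig
        print(cluster.name,
              "DRS enabled:", drs.enabled,
              "automation level:", drs.defaultVmBehavior,
              "migration threshold:", drs.vmotionRate)
finally:
    Disconnect(si)
[/framed_box]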

VMware Backup Using Symantec NetBackup: 3 Methods with Best Practices

By | Backup, How To, VMware | No Comments

Symantec’s NetBackup has been in the business of protecting VMware virtual infrastructures for a while. What we’ve seen over the last couple of versions is the maturing of a product that at this point works very well and offers several methods to back up the infrastructure.

Of course, the Query Builder is the mechanism used to create and define what is backed up. The choices can be as simple as all servers in a given folder, host or cluster, or more complex, defined by the business’s data retention needs.

Below are the high-level backup methods, with my thoughts on each and their merits.
 

1: SAN Transport

To start, the VMware backup host must be a physical host in order to use the SAN Transport. All LUNs (FC or iSCSI) that are used as datastores by the ESX clusters must also be masked and zoned (FC) to the VMware backup host.

When the backup process starts, the backup host can read the .vmdk files directly from the datastores using vADP (vStorage APIs for Data Protection).

Advantage

The obvious advantage here is that one can take advantage of the SAN fabric, bypassing the ESX hosts’ resources entirely to back up the virtual environment. In my experience, backup throughput is typically greater than backups over Ethernet.

A Second Look

One concern I typically hear from customers, specifically the VMware team, is about presenting the same LUNs that are presented to the ESX cluster to the VMware backup host as well. There are a few ways to protect the data on these LUNs if this becomes a big concern, but I’ve never experienced any issues with a rogue NBU admin in all the years I’ve been using this.
 

2: Hot-add Transport

Unlike the SAN Transport, a dedicated physical VMware backup host is not needed to back up the virtual infrastructure. For customers using filers such as NetApp or Isilon with NFS, Hot-add is for you.

Advantage

Just like the SAN Transport, this offers protection by backing up the .vmdk files directly from the datastores. Unlike the SAN Transport, the backup host (media server) can be virtualized, saving additional cost on hardware.

A Second Look

While the above does offer some advantages over SAN Transport, the minor drawback is that ESX host resources are utilized in this method. There are numerous factors that determine how much impact, if any, there will be on your ESX farm.
 

3: NBD Transport

The NBD backup method is IP-based. When the backup host starts a backup, an NFC session is established between the backup host and the ESX host. Like the Hot-add Transport, the backup host may be virtual.

Advantage

The benefit of this option is that it is the easiest to configure and the simplest in concept of the three.

A Second Look

As with everything in life, something easy always has drawbacks. The main one here is the resource cost on the ESX host: resources are definitely used, and the impact becomes more noticeable the more you back up.

With regard to NFC (Network File Copy), there is one NFC session per virtual server backup. If you were backing up 10 virtual servers off of one host, there would be 10 NFC sessions made to the ESX host’s VMkernel port (management port). While this won’t affect the virtual machine network, if your management network is 1GbE, that link will be the bottleneck for backups of the virtual infrastructure. VMware also limits the number of NFC sessions based upon the host’s transfer buffers, which total 32MB.
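As a rough illustration of why the management link becomes the bottleneck, here is a quick arithmetic sketch; the VM count and link speed are example values, not NetBackup limits.

[framed_box]
# Rough arithmetic for NBD backups (example values, not NetBackup limits):
# one NFC session per VM backup, all funneled through the host's management VMkernel port.
vms_on_host = 10                                  # concurrent VM backups on one ESX host
mgmt_link_gbit = 1                                # 1GbE management network

nfc_sessions = vms_on_host                        # 10 NFC sessions to the VMkernel port
link_mbytes_per_sec = mgmt_link_gbit * 1000 / 8   # ~125 MB/s line rate
per_stream = link_mbytes_per_sec / nfc_sessions   # ~12.5 MB/s each if all run at once

print(nfc_sessions, link_mbytes_per_sec, per_stream)   # 10 125.0 12.5
[/framed_box]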
 

Wrap-up: Your Choice

While there are 3 options for backing up a virtual infrastructure, once you choose one, you are not limited to sticking with it. To get backups going, one could choose NBD Transport and eventually change to SAN Transport … that’s the power of change.

Photo credit: imuttoo


Cabling, Keeping it Simple

By | Cisco, UCS, VMware | No Comments


When I was shadowing and learning the Cisco Unified Computing System (UCS), one thing my mentor kept commenting on was how clean the cabling was and how much it was reduced. I am typically implementing the UCS Chassis with a VMware deployment, and cabling is a top priority.

Before working with the UCS Chassis, a typical VMware deployment for me would be a cluster of 3 to 5 physical hosts.

Now, let’s just take a step back and do a quick checklist of what the physical components might look like.

 

5 Physical Hosts, each host has the following:

• 2 Power Cords
• 4 to 6 Ethernet cables (management, vMotion, VM Network and storage)
• 1 for DRAC, iLO, CICM or similar remote management

That totals up to a possible 9 cables, and this would be the minimum; some instances call for more. Remember, that is only 1 host. We still need to cable 4 more hosts! Now your cable total is at 45.

45 CABLES! Are you kidding me?!?!

Take a look at a UCS Chassis with 5 blades. Let’s assume we are using a Cisco UCS 5108 Chassis and 2 Cisco 6248 Fabric Interconnects, with 1Gb Ethernet uplinks to the core switching and direct connections to the storage device.

• 8 Power Cords (2 for each Fabric Interconnect & 4 for the UCS Blade Chassis)
• 8 (Twinax) Server Uplinks from the Chassis to the Fabric Interconnects
• 8 1Gb Ethernet uplinks (4 from each Fabric Interconnect) to the core
• 4 fiber links (2 from each Fabric Interconnect) to the storage device

That totals up to 28 cables for the entire environment, which is almost half of what the physical servers required. Plus, you still have three slots available on the UCS Chassis to add 3 more blades, and you don’t have to add any more cables.
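If you want to rerun that comparison with your own counts, here is a back-of-the-napkin sketch; the per-host and per-chassis figures are simply the assumptions from this post.

[framed_box]
# Back-of-the-napkin cable counts using the assumptions from this post.
def rack_cables(hosts, per_host=9):
    return hosts * per_host                       # rack-mount servers scale linearly per host

def ucs_cables(power=8, chassis_uplinks=8, lan_uplinks=8, san_links=4):
    return power + chassis_uplinks + lan_uplinks + san_links   # mostly fixed per chassis

for hosts in (5, 8):
    print(hosts, "hosts:", rack_cables(hosts), "rack-mount cables vs", ucs_cables(), "UCS cables")
# 5 hosts: 45 rack-mount cables vs 28 UCS cables
# 8 hosts: 72 rack-mount cables vs 28 UCS cables (blades add no new cabling)
[/framed_box]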

Another beauty of the Cisco UCS Chassis is that if you need to add more network adapters to your hosts, there is no additional cabling required. Just a few simple changes on your blades and you’re done.

Photo by Lacrymosa

Choosing the Best Replication with VMware vCenter Site Recovery Manager: vSphere vs. Array-based

By | Replication, Virtualization, VMware | No Comments

I recently had the opportunity to implement VMware vCenter Site Recovery Manager (SRM) in three different environments using two different replication technologies (vSphere and Array-based Replication). The setup and configuration of the SRM software is pretty straightforward. The differences come into play when deciding which replication option best fits your business needs.

vSphere Replication

vSphere Replication is built into SRM 5.0 and is included no matter what replication technology you decide to use. With vSphere Replication, you do not need costly identical storage arrays at both of your sites, because the replication is managed through vCenter. With the ability to manage through vCenter, you are given more flexibility in regard to which VMs are protected: VMs can be protected individually, as opposed to at the VMFS datastore level. vSphere Replication is deployed and managed by virtual appliances installed at both sites. Replication is then handled by the ESXi hosts, with the assistance of the virtual appliances. vSphere Replication supports RPOs as low as 15 minutes.

[framed_box] vSphere Replication Benefits:

  • No need for costly storage arrays at both sites
  • More flexibility in choosing which VMs are protected (can do so individually)
[/framed_box] [divider_padding]

Array-based Replication

The two Array-based Replication technologies that I implemented were EMC MirrorView and EMC Symmetrix. Both of these tie into SRM using a storage replication adapter (SRA). The SRA is a program that is provided by the array vendor that allows SRM access to the array. Configuration of replication is done outside of vCenter at the array level. Unlike vSphere Replication, Array-based Replication requires you to protect an entire VMFS datastore or LUN, as opposed to individual VMs. One of the biggest benefits of Array-based Replication is its ability to provide automated re-protection of the VMs and near-zero RPOs.

[framed_box] Array-based Replication Benefits:

  • Automated re-protection of VMs
  • Near-zero RPOs
[/framed_box] [divider_padding]

Final Thoughts

VMware vCenter Site Recovery Manager gives you the disaster recovery management that is highly sought after in today’s market, allowing you to perform planned migrations, failover, automated failback and non-disruptive testing.

Photo credit: adamhenning via Flickr


Advice from the Expert, Best Practices in Utilizing Storage Pools

By | Backup, Cisco, Data Loss Prevention, EMC, How To, Log Management, Networking, Storage, VMware | No Comments

Storage Pools for the CX4 and VNX have been around a while now, but I still see a lot of people doing things that go against best practices. First, let’s start out by talking about RAID Groups.

Traditionally, to present storage to a host you would create a RAID Group, which consisted of up to 16 disks; the most typically used RAID Group types were R1/0, R5, R6, and Hot Spare. After creating your RAID Group, you would create a LUN on that RAID Group to present to the host.

Let’s say you have (50) 600GB 15K disks that you want to create RAID Groups on; you could create (10) R5 4+1 RAID Groups. If you wanted (10) 1TB LUNs for your hosts, you could create a 1TB LUN on each RAID Group. Each LUN would then have the guaranteed performance of (5) 15K disks behind it, but at the same time, each LUN is capped at the performance of those (5) 15K disks.
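To make the math concrete, here is a rough sizing sketch for that layout; the ~180 IOPS-per-15K-drive figure is a common rule of thumb rather than a measured number.

[framed_box]
# Rough sizing sketch for the example above; ~180 IOPS per 15K drive is a rule of thumb.
DISKS, DISK_GB, DISK_IOPS = 50, 600, 180
GROUP_SIZE, PARITY = 5, 1                     # R5 4+1

raid_groups = DISKS // GROUP_SIZE             # 10 RAID Groups
usable_gb = raid_groups * (GROUP_SIZE - PARITY) * DISK_GB   # 24,000 GB usable
per_lun_iops = GROUP_SIZE * DISK_IOPS         # each 1TB LUN is capped at ~900 back-end IOPS

print(raid_groups, usable_gb, per_lun_iops)   # 10 24000 900
[/framed_box]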
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] What if your LUNs require even more performance?

1. Create metaLUNs to keep it easy and effective.

2. Make (10) 102.4GB LUNs on each RAID Group, totaling (100) 102.4GB LUNs for your (10) RAID Groups.

3. Select the meta head from a RAID Group and expand it by striping it with (9) of the other LUNs from other RAID Groups.

4. For each of the other LUNs to expand you would want to select the meta head from a different RAID Group and then expand with the LUNs from the remaining RAID Groups.

5. That would then provide each LUN with the ability to have the performance of (50) 15K drives shared between them.

6. Once you have your LUNs created, you also have the option of turning FAST Cache (if configured) on or off at the LUN level.

Depending on your performance requirement, things can quickly get complicated using traditional RAID Groups.

This is where CX4 and VNX Pools come into play.
[/framed_box] EMC took the typical RAID Group types – R1/0, R5, and R6 – and made it so you can use them in Storage Pools. The chart below shows the different options for Storage Pools. The asterisk notes that the 8+1 option for R5 and the 14+2 option for R6 are only available in the VNX OE 32 release.

[Chart: Storage Pool RAID type options]

Now on top of that you can have a Homogeneous Storage Pool – a Pool with only like drives, either all Flash, SAS, or NLSAS (SATA on CX4) – or a Heterogeneous Storage Pool – a Storage Pool with more than one tier of storage.

If we take our example of having (50) 15K disks using R5 for RAID Groups and apply them to Pools, we could just create (1) R5 4+1 Storage Pool with all (50) drives in it. This would then leave us with a Homogeneous Storage Pool, visualized below.

[Diagram: Homogeneous Storage Pool]

The chart to the right displays what will happen underneath the Pool, as it will create the same structure as the traditional RAID Groups. We would end up with a Pool containing (10) R5 4+1 RAID Groups underneath that you wouldn’t see; you would only see the (1) Pool with the combined storage of the (50) drives. From there you would create your (10) 1TB LUNs on the Pool, and it will spread the LUNs across all of the RAID Groups underneath automatically. It does this by creating 1GB chunks and spreading them across the hidden RAID Groups evenly. You can also turn FAST Cache on or off at the Storage Pool level (if configured).
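To visualize how those 1GB chunks land, here is an illustrative sketch (not EMC’s actual allocator) that simply round-robins one 1TB LUN’s chunks across the ten hidden RAID Groups.

[framed_box]
# Illustrative only (not EMC's actual allocator): round-robin one 1TB LUN's
# 1GB chunks across the ten private RAID Groups that back the Pool.
from collections import Counter

HIDDEN_RAID_GROUPS = 10
LUN_GB = 1024                                 # one 1TB LUN carved into 1GB chunks

placement = Counter(chunk % HIDDEN_RAID_GROUPS for chunk in range(LUN_GB))
print(placement)                              # ~102-103 chunks per RAID Group, so all spindles share the I/O
[/framed_box]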

On top of that, the other advantage to using a Storage Pool is the ability to create a Heterogeneous Storage Pool, which allows you to have multiple tiers where the ‘hot’ data will move up to the faster drives and the ‘cold’ data will move down to the slower drives.

Another thing that can be done with a Storage Pool is to create thin LUNs. The only real advantage of thin LUNs is being able to over-provision the Storage Pool. For example, if your Storage Pool has 10TB worth of space available, you could create 30TB worth of LUNs and your hosts would think they have 30TB available to them, when in reality you only have 10TB worth of disks.

The problem with this is that the hosts think they have more space than they really do, and when the Storage Pool starts to get full, there is the potential to run out of space and have hosts crash. They may not crash, but it’s safer to assume that they will crash or that data will become corrupt, because when a host tries to write data it thinks it has space for and that space really isn’t there, something bad will happen.

In my experience, people typically want to use thin LUNs only for VMware, yet will also make the virtual machine disks thin as well. There is no real point in doing this: creating a thin VM on a thin LUN grants no additional space savings, just additional performance overhead, as there is a performance hit when using thin LUNs.

After the long intro to how Storage Pools work (and it was just a basic introduction; I left out quite a bit that I could have gone over in detail), we get to the part about what to do and what not to do.

Creating Storage Pools

Choose the correct RAID type for your tiers. At a high level, R1/0 is for write-intensive applications, R5 is for read-heavy workloads, and R6 is typically used on large NLSAS or SATA drives and is highly recommended on those drive types due to the long rebuild times associated with those drives.

Use the number of drives in the preferred drive count options. This isn’t an absolute rule, as there are ways to manipulate how the RAID Groups underneath are created, but as a best practice use that number of drives.

Keep in mind the size of your Storage Pool. If you have FAST Cache turned on for a very large Storage Pool but not a lot of FAST Cache, it is possible the FAST Cache will be spread too thin and used ineffectively.

Also remember that in a disaster, the larger your Storage Pool, the more data you can lose – for example, if one of the RAID Groups underneath suffers a dual drive fault in R5, a triple drive fault in R6, or loses the right (2) disks in R1/0.

Expanding Storage Pools

Use the number of drives in the preferred drive count options. If it is a CX4 or a VNX that is pre VNX OE 32, the best practice is to expand by the same number of drives already in the tier that you are expanding, as the data will not relocate within the same tier. If it is a VNX on at least OE 32, you don’t need to double the size of the pool, as the Storage Pool has the ability to relocate data within the same tier of storage, not just up and down tiers.

Be sure to use the same drive speed and size for the tier you are expanding. For example, if you have a Storage Pool with 15K 600GB SAS drives, you don’t want to expand it with 10K 600GB SAS drives as they will be in the same tier and you won’t get consistent performance across that specific tier. This would go for creating Storage Pools as well.

Graphics by EMC

Letting Cache Acceleration Cards Do The Heavy Lifting

By | EMC, How To, Log Management, Networking, Storage, VMware | No Comments

Up until now there has not been a great deal of intelligence around SSD cache cards and flash arrays, because they have primarily been configured as DAS (Direct Attach Storage). By moving read-intensive workloads off of a storage array and up to the server, both individual application performance and overall storage performance can be enhanced. There are great benefits to using SSD cache cards in new ways, yet before exploring new capabilities it is important to remember the history of the products.

The biggest problem with hard drives, either local or SAN-based, is that they have not been able to keep up with Moore’s Law of transistor density. In 1965 Gordon Moore, a co-founder of Intel, made the observation that the number of components in integrated circuits doubled every year; he later (in 1975) adjusted that prediction to doubling every two years. So system processors (CPUs), memory (DRAM), system buses, and hard drive capacity have been doubling every two years, but hard drive performance has stagnated because of mechanical limitations (mostly heat, stability, and signaling reliability from increasing spindle speeds). This effectively limits individual hard drives to roughly 180 IOPS or 45MB/sec under typical random workloads, depending on block size.

The next challenge is that, in an effort to consolidate storage and increase the number of spindles, availability and efficiency, we have pulled the storage out of our servers and placed that data on SAN arrays. There is tremendous benefit to this; however, doing so introduces new considerations. The network bandwidth is a fraction of the system bus interconnect (8Gb FC ≈ 1GB/sec vs PCIe 3.0 x16 ≈ 16GB/sec). An array may have 8 or 16 front-end connections, yielding an aggregate of 8-16GB/sec, where a single PCIe slot has the same amount of bandwidth. The difference is that the array and multiple servers share its resources, and each can potentially impact the others.
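For anyone who wants to sanity-check those numbers, here is a quick conversion sketch; it is approximate and ignores protocol overhead such as FC’s 8b/10b encoding.

[framed_box]
# Quick conversion sketch; approximate, ignoring protocol overhead such as FC's 8b/10b encoding.
def gbit_to_gbyte(gbit):
    return gbit / 8.0                         # 8 bits per byte

fc_8g_port = gbit_to_gbyte(8)                 # ~1.0 GB/s per 8Gb FC port
pcie3_x16 = 16 * 0.985                        # ~15.8 GB/s (PCIe 3.0 is ~985 MB/s per lane)
array_front_end = 16 * fc_8g_port             # 16 FC ports ~= 16 GB/s aggregate, shared by many servers

print(fc_8g_port, round(pcie3_x16, 1), array_front_end)   # 1.0 15.8 16.0
[/framed_box]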

Cache acceleration cards address both the mechanical limitations of hard drives and the shared-resource conflict of storage networks for a specific subset of data. These cards utilize NAND flash (either SLC or MLC, but more on that later) memory packaged on a PCIe card with an interface controller to provide high bandwidth and throughput for read intensive workloads on small datasets of ephemeral data.

[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] I realize there were a lot of qualifying statements there, so let’s break it down…

  • Why read intensive? As compared to SLC NAND flash, MLC NAND flash has a much higher write penalty making writes more costly in terms of time and overall life expectancy of a drive/card.
  •  Why small datasets? Most Cache acceleration cards are fairly small in comparison to hard drives. The largest top out at ~3TB (typical sizes are 300-700GB) and the cost per GB is much much higher than comparable hard drive storage.
  •  Why ephemeral data and what does that mean? Ephemeral data is data that is temporary, transient, or in process. Things like page files, SQL server TEMPDB, or spool directories.
[/framed_box] Cache acceleration cards address the shared-resource conflict by pulling resource-intensive activities back onto the server and off of the SAN arrays. How this is accomplished is the key differentiator among the products available today.


FusionIO is one of the companies that made a name for itself early in the enterprise PCI and PCIe flash cache acceleration market. Their solutions have been primarily DAS (Direct Attach Storage) solutions based on SLC and MLC NAND flash. In early 2011, FusionIO added write-through caching to their SSD cards with their acquisition of the ioTurbine software to accelerate VMware guest performance. More recently, in mid-2012, FusionIO released their ION enterprise flash array, which consists of a chassis containing several of their PCIe cards; they leverage RAID protection across these cards for availability. Available interconnects include 8Gb FC and InfiniBand. EMC released VFCache in 2012 and has subsequently released two additional updates.

The EMC VFCache is a re-packaged Micron P320h or LSI WarpDrive PCIe SSD with a write-through caching driver targeted primarily at read-intensive workloads. In subsequent releases they have enhanced VMware functionality and added the ability to run in “split-card” mode, with half the card utilized for read caching and the other half as DAS. EMC’s worst-kept secret is their “Project Thunder” release of the XtremIO acquisition. “Project Thunder” is an all-SSD array that will support both read and write workloads, similar to the FusionIO ION array.

SSD caching solutions are an extremely powerful answer to very specific workloads. By moving read-intensive workloads off of a storage array and up to the server, both individual application performance and overall storage performance can be enhanced. The key to determining whether these tools will help is careful analysis of reads versus writes and the locality of reference of the active data. If random write performance is required, consider SLC-based cards or caching arrays over MLC.

 

Images courtesy of “The Register” and “IRRI images”


Being Successful, My Journey to Stay Sharp

By | Cisco, How To, VMware | No Comments

Learning is the one word that comes to mind when I think about being successful in the world of technology. In previous years I bought into the traditional method of learning by taking vendor training and follow-up exams. After failing an exam last year I began to understand that I had to develop a new methodology of learning. I wanted to pass IT exams on the first attempt and retain the required knowledge. I had to adapt my style of training.

Traditionally, companies understand that in order to keep their IT employees from leaving they have to offer incentives beyond money; most employees want to learn. The majority of companies I have worked for seem to follow a similar approach to training.

1. Determine the technical proficiency required
2. Train and learn the material deemed important by the vendor
A. Attend an authorized training course
B. Read books or PDF’s related to the subject
3. Take the exam
A. Re-take the exam if needed


I found that learning had a profound ripple effect well beyond my personal advantages. The company I worked for benefited from vendor partnerships, as certain accreditations earned provided access to different markets and lead generation. When consulting with potential customers on the front end (sales) or the back end (implementation), the opportunity for additional business with the customer grows substantially. This happens because the customer feels confident that you are a subject matter expert. You become their trusted adviser.

When I was hired as a Post-Sales Engineer at Integrated Data Storage (IDS), I was informed about the training curriculum and introduced to the company’s learning methodology. The major issue I encountered with the learning cycle was how much there was to learn.

I recall the pain and aggravation of re-taking exams for EMC ISM, VMware VCP 4.1 and VMware VCA4-DT. Even though I spent suitable time studying the content, I was overwhelmingly devastated when I failed these exams. It was my goal to pass these exams on the first attempts; I was determined to diagnose the problem and change it.

One year ago I reviewed my approach to studying and I quickly discovered that all habits resembled that of the traditional learning method. Take a course then take a test. This structure was not working for me, so I began to create my own roadmap for success. I created a list of tools and resources that became indispensable, such as books, PDFs, computer-based training (CBTs), home labs, specialized learning centers, vendor specific training, blogs, and knowledgebase articles. I was immersed in training and embraced my new learning methodology.

In February 2012 I put my new study methods to the test. The results were immediate and positive. By combining multiple study strategies I took and passed the VCP5, VCP5-DT, NetApp NCDA and Citrix XenDesktop on my first attempt(s). Through a restructured training curriculum, I obtained my goal of passing these exams on the first attempt.

While revamping my studying habits I found several training secrets which contributed to my success.

TrainSignal is a Chicago-based company with CBTs that I loaded on my tablet for offline viewing. The instant online access interface is intuitive and easy to use and they offer transcender practice exams with select courses. The trainers at TrainSignal are some of the most respected, certified, talented and personable individuals in the industry. I was able to follow each of them on Twitter and ask questions through social media. The bonus for me was that TrainSignal offers a majority of their individual training courses for around $400.

Current Technologies Computer Learning Center (CTCLC) is a Portage, Indiana, learning center maintained by a team of certified instructors. CTCLC is authorized by vendors across many different technologies which allow easy access to exams and certifications. By being devoted to this local learning center, I was able to obtain extra stick time with valuable classroom hardware. Also, another great benefit to CTCLC is their flexibility in rescheduling courses. When an emergency at work required my immediate attention, the staff at CTCLC was kind enough to help reschedule my courses.

Benchmark Learning is an authorized learning center that specializes in technologies for specific vendors. I used Benchmark Learning for my Citrix XenDesktop certification as I was very impressed with their style and outline. Benchmark Learning kept their training status up-to-date on Citrix’s website. They were very responsive and accommodating to my request for scheduling.

Vendors provided additional training, which helped me obtain additional time learning specific solutions and technologies. Aside from the three companies mentioned, vendors like Nutanix, VMware, Citrix and EMC provided in-depth knowledge through partner related training videos, PDFs and white papers.


Home Labs provided actual hands-on experience for my training. Combined with the theory-based knowledge learned in classes, CBT videos and online material, I was able to solidify my knowledge of specific solutions and technologies by having these items available at my house. After checking eBay and Craigslist, I found a VMware vSphere compatible server and began building my lab. My home lab now consists of several Dell servers, a free iSCSI SAN using OpenFiler, a WYSE P20 Zero Client, an HP laptop as a thin client, an iPad, a Mac Mini and a handful of trial licenses for VMware, Microsoft, Citrix, VEEAM, Liquidware Labs, TrendMicro and Quantum.

2013 is here and my vision for this year is to rebuild my home lab with even more hardware. My goal is to provide real design examples built on VMware and Citrix technologies to continue to take my learning to the next level.


Adventures In Networking: Hardships In Finding The Longest Match

By | How To, Networking, VMware | No Comments

Sometimes in life you have to learn things the hard way. Recently I learned why the Longest Match Rule (Longest Match Algorithm) works and why it is applied not only to routing, but to other situations as well.

I was adding a new storage array and datastores to an existing VMware cluster using iSCSI. The existing VMware environment was laid out as follows:
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”]vSwitch0 = VM Network & Service Console (10.1.1.0/16)
vSwitch1 = iSCSI (10.12.1.0/16)
vSwitch2 = vMotion (10.12.1.0/16)
vSwitch3 = Testing
[/framed_box]
The new storage array and iSCSI targets landed on a new vSwitch (vSwitch4). The old environment had both iSCSI and vMotion on the same network (10.12.1.0/16). For the new environment I wanted to completely separate the iSCSI and vMotion traffic by assigning them to different networks. Both iSCSI networks needed to stay up for migrations to happen so the new environment was laid out as follows:
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] vSwitch0 = VM Network & Service Console (10.1.1.0/16)
vSwitch1 = iSCSI (10.12.1.0/16)
vSwitch2 = vMotion (10.12.1.0/24)
vSwitch3 = Testing
vSwitch4 = iSCSI (10.12.2.0/24)
[/framed_box]

First, vSwitch4 was created, where the new storage was configured and presented to VMware, just as planned. The problem occurred when the subnet mask on vSwitch2 was modified from /16 to /24. As soon as this change to the subnet mask on vSwitch2 happened, access to all the VMs went down. After scrambling for about 5 minutes to retrace the steps prior to the problem, I was able to determine that it was the subnet change that caused the outage. Changing the subnet mask on vSwitch2 back to /16 slowly brought everything back online.

What caused this outage?

One simple mistake!

When the subnet was changed from /16 to /24, the third octet also needed to be changed to differentiate the iSCSI and vMotion networks. When the /24 subnet was applied to vSwitch2 (the 10.12.1.0 network), the Longest Match Rule matched the longer extended network prefix. This change also affected vSwitch1: any traffic within the /16 network destined for 10.12.1.x would now traverse the /24, thus dropping the iSCSI targets and all their datastores.

A network with a longer match describes a smaller set of IPs than a network with a shorter match, which in turn means the longer match is more specific. 10.12.1.0/24 is the selected path because it has the greatest number of bits matching the destination IP address of the packets (see below).

[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] #1, 10.12.1.0/24 = 00001010.00001100.00000001.00000000
#2, 10.12.0.0/16 = 00001010.00001100.00000000.00000000
#3, 10.0.0.0/8 = 00001010.00000000.00000000.00000000
[/framed_box]
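You can reproduce this route selection with Python’s standard ipaddress module; the destination address below is just an example host on the old iSCSI network.

[framed_box]
# Reproducing the route selection with Python's standard ipaddress module.
# The destination below is an example host on the old iSCSI network.
import ipaddress

routes = [ipaddress.ip_network("10.12.1.0/24"),
          ipaddress.ip_network("10.12.0.0/16"),
          ipaddress.ip_network("10.0.0.0/8")]

dst = ipaddress.ip_address("10.12.1.50")
best = max((net for net in routes if dst in net), key=lambda net: net.prefixlen)
print(best)                                   # 10.12.1.0/24 wins: the longest (most specific) match
[/framed_box]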
By simply changing the third octet on vSwitch2 I was able to change the subnet to /24.
The final and working configuration was laid out as follows:
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] vSwitch0 = VM Network & Service Console (10.1.1.0/16)
vSwitch1 = iSCSI (10.12.1.0/16): left for migration
vSwitch2 = vMotion (10.12.3.0/24)
vSwitch3 = Testing
vSwitch4 = iSCSI (10.12.2.0/24)
[/framed_box]

Photo From: maximilian.haack
