All Posts By Paul Just

IDS Cloud Update: Zerto Technology Review

By Paul Just | Cloud Computing, Disaster Recovery, IDS

At IDS we are continuously evaluating the effectiveness of new products and partners to protect the integrity of our IDS Cloud Services. We built the IDS Cloud to deliver public, private and hybrid Cloud Solutions that facilitate increased efficiency within IT operations. As we evaluate the continually changing technology landscape, we always have our customers in mind. In this edition of the Monthly IDS Cloud Update, we’d like to highlight our experiences with Zerto, a partner we utilize to deliver a flexible and efficient Cloud Disaster Recovery Service offering to our customers.

Zerto's Virtual Replication technology is built specifically for virtual environments and delivers disaster recovery functionality with industry-leading automation for testing and failover.

Zerto makes Cloud DR Services efficient and easy to use for customers, a value that shouldn’t be overlooked in the IT industry.

While there are many benefits to Zerto’s technology offerings, today we’d like to break down exactly why we chose Zerto to power our IDS Cloud DR services.

Benefits of IDS Cloud DR Services Powered by Zerto

  • Easy setup. The Zerto Cloud DR service installs remotely within hours, with no complicated professional services engagement required.
  • Customer control. By using Zerto to power the IDS Cloud DR Services, customers have the flexibility to choose which applications to protect, regardless of the storage they live on.
  • Control failover. Zerto enables automated data recovery, failover and failback, and lets you select any VM in VMware’s vCenter. No agent is required, and the process is automated through a vCenter plug-in.
  • Simplified conversions from Hyper-V to VMware. Zerto Virtual Replication is the first technology able to automatically convert Hyper-V VMs to VMware for seamless migrations between hypervisors.
  • Secure Multi-tenancy. Zerto’s secure multi-tenancy architecture delivers a secure platform for replication to the Cloud, while providing the security required for companies with strict compliance requirements.
  • Flexible control of the replication schedule. Zerto compresses replicated changes by 50% or more to maintain a consistently low Recovery Point Objective (RPO), and lets you assign a bandwidth threshold to replication so it does not impact other services utilizing the WAN links.
  • Storage array agnostic. Zerto has the capability to replicate from any storage to any other storage, allowing customers to completely migrate data from one array, vendor or site to another efficiently.
  • Insightful reporting for customers. Zerto’s dashboard gives customers easy access to SLA information and clear insight into their Disaster Recovery environment.

Zerto powers a comprehensive IDS Cloud DR Service that eradicates concerns about performance, availability and security while facilitating savings on resource costs.

Stay tuned for more information about the IDS Cloud by following the Monthly IDS Cloud Update.

The Cisco UCS Command Line: Creating “Server” & “Uplink” Ports From Your Command Center

By Paul Just | Cisco, How To

Using the command line on the UCS fabric interconnects is a bit different from your standard IOS or NX-OS command line. I had the opportunity to configure a new UCS system from the console and wanted to share the experience with our subscribers. The configuration snippets below highlight the steps you would take to get through your initial configuration.

1. Create “Server Ports” on ports 5, 6, 7, 8 on Fabric Interconnect A:

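The original post showed a console screenshot here. As a rough sketch of what those commands look like (the "UCS-A" prompt and slot number 1 are illustrative; your slot and fabric names may differ):

    UCS-A# scope eth-server
    UCS-A /eth-server # scope fabric a
    UCS-A /eth-server/fabric # create interface 1 5
    UCS-A /eth-server/fabric* # create interface 1 6
    UCS-A /eth-server/fabric* # create interface 1 7
    UCS-A /eth-server/fabric* # create interface 1 8
    UCS-A /eth-server/fabric* # commit-buffer
    UCS-A /eth-server/fabric #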

A couple of notes: you have to scope to “eth-server” in order to configure Server Ports. After each create command, the CLI puts an asterisk in the prompt, which means there is a transaction that still needs to be committed to the system configuration. The Server Ports will not be created until you use the “commit-buffer” command.

2. Create “Uplink Ports” on ports 1 & 2 and set them to 1 Gbps on Fabric Interconnect A:

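Again, the original screenshot is gone; a sketch of the equivalent commands (slot 1 assumed):

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # scope fabric a
    UCS-A /eth-uplink/fabric # create interface 1 1
    UCS-A /eth-uplink/fabric/interface* # set speed 1gbps
    UCS-A /eth-uplink/fabric/interface* # exit
    UCS-A /eth-uplink/fabric* # create interface 1 2
    UCS-A /eth-uplink/fabric/interface* # set speed 1gbps
    UCS-A /eth-uplink/fabric/interface* # commit-buffer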

Note: you need to scope into “eth-uplink” to create the Uplink Ports.

3. Create “Uplink Ethernet Port Channel” named portchannel1 on Fabric Interconnect A:

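A sketch of the equivalent commands (the port channel ID of 1 is illustrative):

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # scope fabric a
    UCS-A /eth-uplink/fabric # create port-channel 1
    UCS-A /eth-uplink/fabric/port-channel* # set name portchannel1
    UCS-A /eth-uplink/fabric/port-channel* # enable
    UCS-A /eth-uplink/fabric/port-channel* # commit-buffer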

Note: still scoped into “eth-uplink”.

4. Add “Uplink Ports” to “Uplink Ethernet Port Channel” named portchannel1 on Fabric Interconnect A:

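A sketch of the equivalent commands, adding the member ports that match the uplinks created in step 2:

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # scope fabric a
    UCS-A /eth-uplink/fabric # scope port-channel 1
    UCS-A /eth-uplink/fabric/port-channel # create member-port 1 1
    UCS-A /eth-uplink/fabric/port-channel* # create member-port 1 2
    UCS-A /eth-uplink/fabric/port-channel* # commit-buffer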

Note: scoped into “port-channel 1” to add the Uplink Ports.

5. Verify portchannel1 is operational on Fabric Interconnect A:

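A sketch of the verification command; look for an Oper State of Up in the output:

    UCS-A# scope eth-uplink
    UCS-A /eth-uplink # scope fabric a
    UCS-A /eth-uplink/fabric # show port-channel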

And voilà: now you have successfully created “Server” and “Uplink” ports using the Cisco UCS Command Line from your own “command center”. If you have any further questions or want to learn more, please email me:

Photo Credit: soundman1024

Life’s A Beach With Remote vSphere Management on the iPad

By Paul Just | View, Virtualization, VMware

Leaving for Bali? Vacationing in upper northwest Indiana? Just heading to Grandma’s for the weekend? Then this is the blog post for you!

As your virtual travel guide, here are the four things you need to manage your vSphere environment while on vacation (or take a vacation while managing your vSphere environment).

Before leaving the office, a few things need to be in place…

1. Make sure you’ve downloaded the latest vCMA virtual appliance from VMware Labs:
a) Head to:
b) Install into your infrastructure, and give the appliance an IP address.
2. You will need an iPad with 3G capabilities.
3. VPN connectivity to your private network.
a) The Cisco AnyConnect Client for the iPad works great.


b) You can also use the native VPN ability of the iPad.
4. vSphere Client for the iPad.

Once you’ve gotten to your destination of choice, follow these steps to gain access to your vSphere environment:

1. Go to iPad Settings >> Apps >> vSphere Client.
a) Set the Web Server to the IP address of the vCMA appliance.
2. Establish VPN connectivity.
3. Launch the vSphere Client and log into vCenter at the initial login screen.


After entering your credentials, you should see the summary screen of your vCenter environment.


From the summary screen you can drill into your ESX servers and do the following:

• View ESX Server CPU, memory, disk & network load.
• View ESX Server hardware summary and performance.


• View an inventory of the VMs on the server.
• From this page you can reboot your ESX Server or enter Maintenance Mode.

From the ESX server screen you can drill into an individual VM.


Within this screen you will be able to do the following:

• View VM CPU, memory, and disk load.
• View VM details and the latest VM events.
• View & restore any snapshots associated with the VM.
• You can also Start, Stop, Restart and Suspend the VM.

I’ve only tested this scenario from the beach, but I’m sure it works on the golf course too.

Photo Credit: skylerf

VMware: Infrastructure Updates While The Gears Keep Grinding #workingfortheweekend

By Paul Just | VMware

After 6 years working with VMware… they still manage to impress me.

Last week I was upgrading a client’s network to a Nexus-based solution. Part of the deployment included switching their entire ESX infrastructure to an FCoE-based solution, utilizing the Nexus 5000 and QLogic CNAs.

To speed up the deployment, we decided to do the ESX server changes during production hours, utilizing vMotion. One by one, each ESX server was put into maintenance mode and reconfigured with CNAs and connectivity to the Nexus 5000.

The plan was simple…
• Remove the existing HBA & NIC cards, and install the latest CNA drivers.
• Bring the server back online and assign the new 10 Gbps based vmnics to the existing vSwitches.
• Update the existing SAN alias to reflect the new WWN.
• Register the new WWN with the storage array.
• Rescan the SCSI bus (see the snippet after this list).
• Take the ESX server out of maintenance mode.
• Migrate a VM, and test the new connectivity.
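For the rescan step, you can use the vSphere client, or do it from the ESX service console. A quick sketch (the adapter name vmhba2 is illustrative; list yours first):

    # list the storage adapters and their vmhba names
    esxcfg-scsidevs -a
    # rescan the named adapter for new LUNs and targets
    esxcfg-rescan vmhba2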

Each transition took a little over an hour, with the majority of the time spent removing the old cabling (about 10 cables each). A day and a half later, the ESX cluster was running on the new Nexus infrastructure, with zero downtime for the 100+ VMs running on it.

Not only were we able to upgrade the infrastructure, we were able to do it during the workday, not after hours or over the weekend!

Photo Credit: ralphbijker

Announced @ Cisco Partners Summit: Advanced Architecture Specializations & New Cloud Certifications

By Paul Just | Cisco, Cloud Computing, Virtualization

I had the opportunity to spend a few days in New Orleans while I attended the Cisco Partners Summit this week. I visited Bourbon Street for the first time (I’m not going into those stories), and was introduced to some pretty good food too!

The real reason I was there was to find out what Cisco is up to with their Partner and Technology Specializations. There were quite a few large changes, especially with the introduction of Architecture Based Specializations:

Advanced Borderless Networks Architecture Specialization:
Covers network designs for businesses of any size. Upgraded products include the Cisco Catalyst switches, the Cisco Adaptive Security Appliance firewall and the compact Cisco ASR 1000 Series router. For smaller businesses, new entry-level 802.11n wireless access points were introduced. A great example of one of these technologies is the Cisco AnyConnect Client.

Advanced Collaboration Architecture Specialization:
Outlines three specific roles and, when attained, validates a Partner’s ability to deliver secure collaboration with advanced data storage, covering any content type for interaction: video, voice and data.

Advanced Data Center Architecture Specialization:
The new UCS ATP. It carries a wide variety of qualifications that demonstrate a Partner’s ability to architect a Cisco data center based on ACE, Nexus, MDS and UCS C- and B-Series servers.

Advanced Unified Fabric Technology:
This will take the place of the existing Data Center Networking Infrastructure Specialization & Data Center Storage Networking Specialization. Partners that currently hold the DCNI/DCSN will need to update to the new specialization within the year.

Advanced Unified Computing Technology:
Aims to validate a Partner’s ability to design and sell UCS B-Series and C-Series systems. This is perfect for Partners that have no existing Cisco certifications and want to start selling C- and B-Series.

As systems become more complex and integrated, it’s great to know your partners are able to produce a complete solution. This also aligns nicely with Cisco CEO John Chambers’ keynote message about how Architectures are the future (the “Cloud” word came up quite a bit too).

Did someone say Cloud?

Yes, Cisco also announced a set of Cloud-specific certifications designed to identify what Cisco believes to be the three Cloud opportunities:

Cloud Builder – Partners that build the infrastructures
Cloud Provider – Partners that deliver different cloud solutions
Cloud Services Reseller – Partners that resell cloud services on their own infrastructure

Cisco believes the Cloud opportunity to be around $172B in 2013. So, if you want a piece, go get your cert on. I know I will!

Photo credit: fesja via Flickr

The Effects of Random IO on Disk Drive Performance

By Paul Just | Clariion, Storage

I recently had the opportunity to review some performance data from one of our client’s EMC Clariion arrays. I was specifically looking at the read performance of the disk drives during their backup window. I discovered a great visual example showing the effect of random IO on disk drive IOPS and throughput.

The graph below depicts the following metrics:

  • Disk drive seek distance (GB) – green line, scale on the right
  • Disk drive total IO (IOPS) – black line, scale on the left
  • Disk drive total throughput (MB/s) – red area, scale on the right

Zone 1 – Sequential

  • Seek Distance low, less than 1 GB
  • High total IO, 200-275 IOPS
  • High total disk throughput, about 10-13 MB/s

Zone 2 – Getting Random

  • Seek Distance high, 2-6 GB
  • Lower total IO, 25-100 IOPS
  • Lower disk throughput, about 3-4 MB/s or less

Zone 3 – Random

  • Seek Distance High, greater than 9 GB
  • Low total IO, less than 25 IOPS
  • Low disk throughput, 1 MB/s
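The three zones hang together arithmetically: throughput ≈ IOPS × average IO size. Assuming an average read size of roughly 48 KB (an assumption on my part; the actual IO size wasn’t captured), Zone 1 works out to about 250 IOPS × 48 KB ≈ 12 MB/s and Zone 3 to 25 IOPS × 48 KB ≈ 1.2 MB/s, both consistent with the graph. In other words, the collapse from Zone 1 to Zone 3 is almost entirely seek time eating into the IOPS the drive can deliver, not a change in IO size.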

7.5 Reasons Why I Like the Nexus 5000 Series Switches

By Paul Just | Cisco, Networking

1) vPC – Virtual Port Channels

A Port Channel based connectivity option that allows downstream switches and hosts to connect to a pair of Nexus 5000 vPC peer switches as if they were one switch. This allows the host or switch to use two or more links at full capacity.
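As a minimal sketch of what that looks like in NX-OS on each vPC peer (not from the original post; the domain ID, keepalive address and port channel numbers are all illustrative):

    nexus5k(config)# feature vpc
    nexus5k(config)# vpc domain 1
    nexus5k(config-vpc-domain)# peer-keepalive destination 192.168.10.2
    nexus5k(config-vpc-domain)# exit
    nexus5k(config)# interface port-channel 10
    nexus5k(config-if)# switchport mode trunk
    nexus5k(config-if)# vpc peer-link
    nexus5k(config-if)# interface port-channel 20
    nexus5k(config-if)# vpc 20

The downstream switch or server just sees one ordinary port channel split across the two peers.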

2) Copper SFP+ Twinax cable

A low-power and cost-effective option to connect servers and FEX modules to the Nexus 5000. Twinax cables are available in 1, 3, 5, and now 7 and 10 meter lengths.

3) Nexus 2000 fabric extenders – FEX

These are like “remote line cards” that attach to and are managed by the Nexus 5000. 24- and 48-port 1 Gbps FEXs are available, as are 32-port 1/10 Gbps FEXs. They can be connected to the Nexus 5000 with SFP+ copper Twinax cable or, for longer runs, SFP+ optics.

4) Expansion Modules

Each Nexus switch has one or two open expansion module slots. These can accommodate a variety of modules, including additional 10 Gig ports, 4 and 8 Gbps native Fibre Channel ports, and even a mix of both.

5) The new Nexus 5548

Up to 48 10 Gig ports & 960 Gbps throughput in a 1U chassis!

6) Unified Fabric

LAN and SAN on the same layer 2 Ethernet. This allows full SAN/LAN redundancy with just two cables per server. Great for ESX servers, which otherwise need many network and Fibre Channel cables.

7) NX-OS

Cisco’s highly resilient & modular operating system, which is based on Cisco’s rock-solid MDS 9000 SAN-OS.

And for that extra .5 reason why I like the 5000 series switch, drumroll please…

7.5) It’s Silver!