Adventures In cDOT Migrations: Part Two


Before we start: for those just joining the adventures, here’s Part One.

Part Two: Insights From The Field

When it comes to 7-Mode to cDOT transitions, host-based migration continues to be king for databases and virtual environments. However, for customers using SnapMirror where re-seeding the primary and secondary volume relationships is not an option due to WAN limitations, the 7MTT (7-Mode Transition Tool) has become the workhorse of our transition engagements.

It’s critical going into this process to understand the capabilities and limitations of the tool. Let’s take a look at some of the technical terms around the 7MTT.

  • A Project is a logical container that allows you to set up and manage the transition of a group of volumes.
  • A Subproject contains all of the configuration data around volume transitions, i.e. SVM mapping, volume mapping and the SnapMirror schedule.
  • A Transition Peer Relationship is the authorization mechanism for the SnapMirror relationships between 7-Mode and cDOT systems.

One limitation of the 7MTT is that a maximum of twenty volumes can be managed inside a project container, so there is typically some planning and strategy around grouping volumes together, either by use case or by RPO/RTO. The look and feel of the transition is very SnapMirror-like: it follows a baseline, incremental, and cutover format. There is also a CLI, but using the GUI is the recommended approach.
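
To make that grouping step concrete, here is a minimal Python sketch of the planning exercise only (it does not drive the 7MTT itself, and the volume names and RPO tiers are invented for illustration): it splits an inventory into project-sized batches of at most twenty volumes, grouped by tier.

```python
from itertools import islice

MAX_VOLUMES_PER_PROJECT = 20  # the 7MTT project limit discussed above

# Hypothetical inventory: volume name -> RPO/RTO tier (illustrative only).
volumes = {f"vol_db_{i:03d}": "tier1" for i in range(25)}
volumes.update({f"vol_file_{i:03d}": "tier2" for i in range(15)})

def plan_projects(inventory, batch_size=MAX_VOLUMES_PER_PROJECT):
    """Group volumes by tier, then chunk each tier into batches small
    enough to fit in a single 7MTT project container."""
    tiers = {}
    for vol, tier in sorted(inventory.items()):
        tiers.setdefault(tier, []).append(vol)

    projects = []
    for tier, vols in tiers.items():
        it = iter(vols)
        while True:
            batch = list(islice(it, batch_size))
            if not batch:
                break
            projects.append({"name": f"{tier}_project_{len(projects) + 1}",
                             "volumes": batch})
    return projects

for project in plan_projects(volumes):
    print(project["name"], "-", len(project["volumes"]), "volumes")
```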

As with any services engagement, due diligence leads to success. These 7-Mode to cDOT transitions require careful planning and collaboration, as they can take weeks to months to complete depending on the size of the environment.

“These 7-Mode to cDOT transitions require careful planning and collaboration as they can take weeks to months”

The 7-Mode Transition Tool 1.2 Data and Configuration Transition Guide For Transitioning to Clustered Data ONTAP® can be found here (NOW account login required).

Important note: You should be aware of the versions of Data ONTAP operating in 7-Mode that are supported for transitioning to clustered Data ONTAP version 8.2.0 or 8.2.1. You can transition volumes from systems running Data ONTAP 7.3.3 and later. For the updated list of Data ONTAP versions supported for transition by the 7-Mode Transition Tool, see the Interoperability Matrix.

Photo credit: thompsonrivers via Flickr

Top 10 Concepts That Matter In Technology


We IT professionals are always looking for ways to make our lives easier. It’s not because we are lazy, although some of the best solutions out there require the least amount of human intervention. The reason is that our goal is to provide the best reliability, stability, and service to the business possible. If the business isn’t happy, chances are we are not happy either. After working in IT for the last 15 years, I have seen some things go really well and others go horribly wrong. Below is a short list of ten concepts that really matter in IT: things that will help make you, the IT professional, successful.

1) The Cloud is what you want it to be.

Don’t get me wrong, there is the NIST definition of Cloud, which is a good definition and a great start. But remember, what you actually do with the Cloud, how you solve business problems, is far more important than how it’s defined. Expand existing virtualization technologies, including storage and network virtualization, to put you and your company in the best position possible for the future. Today, that usually means starting to build out a hybrid cloud strategy so that you are ready and able to move between public and private spaces with ease.

2) Learning how to break stuff will make you (and your technology solutions) better.

Before you implement a solution, and certainly before you put production users and data on it, try to break it. Fail it over. If it’s a server, pull the network cord or power cord and see what happens. You’d be surprised to learn that solutions designed with full redundancy and failover may not behave as expected in the real world. Taking something apart oftentimes teaches you more about how it works than what you might find in a whitepaper. At the very least, you will know where the weak point is, which someone else may not, and you can share what you find with others.

3) Software-Defined Data Center is powerless without the hardware to match.

If you haven’t heard about SDDC, it’s what nearly every technology manufacturer seems to talk about these days as the key to being agile, flexible, cloud-ready, and cutting edge. And I agree completely. I’m a hardware guy through and through, so I just have to say, now more than ever, the hardware is a key component of SDDC. When you hear the term “commodity,” you might think that it doesn’t matter what the underlying hardware is. This could be a big mistake. If you look under the covers of most storage system controllers being sold today, you’ll most likely find Intel processors. Knowing the differences among the Intel CPUs in those architectures can mean the difference between half and double the performance from one vendor to another. Every release of an Intel processor could have a very large effect on the performance of the SAN. I’m not suggesting that we have to know the intricate details of Sandy Bridge vs. Ivy Bridge (Intel code names), but we need to keep the marriage of software and hardware in mind when designing today’s data center and cloud solutions.

4) It is (still) about the latency, stupid.

A famous 1996 Stanford article discusses bandwidth and latency, and illustrates that solving bandwidth problems is easy, but latency is a physics problem (the speed-of-light limitation) that cannot be overcome easily, or at all. More than ten years later, fiber has replaced modems in many locations, but WAN latency remains a major factor in network performance, affecting availability, DR, backups, and client connections. Today, latency on the storage is more important than ever. Most application performance problems today are not due to bandwidth but to latency, and much of that latency is on the storage. IOPS are still worth discussing, but they are not very meaningful without the associated IO size and latency figures to match.
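
To put rough numbers behind that, here is a minimal sketch (plain Python, illustrative figures only): throughput is simply IOPS multiplied by IO size, and Little’s Law ties IOPS and latency to the number of IOs that must be in flight to sustain them.

```python
# Illustrative numbers only; substitute figures from your own environment.
iops = 20_000        # IO operations per second
io_size_kib = 8      # size of each IO in KiB
latency_ms = 2.0     # average response time per IO in milliseconds

# Throughput is IOPS multiplied by IO size.
throughput_mib_s = iops * io_size_kib / 1024
print(f"{iops} IOPS at {io_size_kib} KiB = {throughput_mib_s:.0f} MiB/s")

# Little's Law: IOs in flight = arrival rate x response time.
outstanding_ios = iops * (latency_ms / 1000)
print(f"Sustaining {iops} IOPS at {latency_ms} ms requires about "
      f"{outstanding_ios:.0f} IOs outstanding")
```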

5) Somebody is doing this better, faster, smarter than you.

It’s nearly impossible to be the smartest person in the room. But even if you are, there are at least two big downsides to being this smart. First, your competition is gaining on you faster than you are maintaining your skills. There’s only one place to go when you’re on the top—and that is down, and it will happen sooner or later. Second, intelligence is overrated. Getting things done means cooperating with others, being creative, being persistent, and above all else, putting in time.

6) Seek out the smart ones and join them. If you can’t join them, mimic them.

Michael Dell once said, “If you are the smartest pro in the room, find another room.” If they won’t let you in, be humble and be persistent. If they don’t like you, check your ego: nobody likes a know-it-all. If you can’t join them, find out what they do and start doing it. Mimicry is a form of flattery, but it can also lead to success. You might be able to learn how to do it better than they do it themselves. Microsoft learned from IBM, AOL learned from Netscape, Palm Pilot became popular after Apple made their Newton, Facebook is MySpace 2.0, and so on.

7) Plan for worst-case scenarios and peak utilization

One reason why Google is looking at automated cars is that they have done the math showing the Interstate system is 90% free space on average. But does this statistic matter when most daily commutes hit bumper-to-bumper traffic at 8am or 5pm in most American cities? No, it doesn’t. A website that sells concert tickets, is up for four nines, and just happens to be down the 52 minutes that tickets go on sale offers little consolation for the lost revenue that business needs to operate. If you assume the worst and build for the peaks, your customers will be less likely to be staring at the hourglass when they need you the most.
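
The “four nines” arithmetic is easy to verify. Here is a small sketch of the yearly downtime budget at a few availability levels (roughly 52 minutes per year at 99.99%):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for nines in ("99.9", "99.99", "99.999"):
    availability = float(nines) / 100
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines}% uptime allows about {downtime_min:.1f} minutes of downtime per year")
```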

8) Be passionate about solving problems; don’t be a Brand X person.

Some technology companies are great, and others are frustrating and complicated. The second we say “Brand X is poor technology, I like Brand Y,” we discredit ourselves. We may even lose respect, credibility, or customers. Every technology has its place; maybe it’s a training issue, or maybe that technology really isn’t the best, but someone else may love it. There are often many ways to implement technology, and your way is not always the best; the other way may work perfectly fine too. Stay passionate about your favorite technologies, and use your passion to make people’s lives easier by solving business problems, even if that means using technologies that you’re not in love with.

9) Lose the words “always,” “never,” “can’t,” “but,” and “no” from your vocabulary.

This is easy: just get rid of these words. Look, there are certain things that successful people say and do, and sometimes it is not what you say, it’s what you don’t say. Lose these words and replace them with something else. Even if someone asks for the impossible, you can easily allude to the difficulty without saying no. There might be a large cost and/or risk associated with a big challenge, and you might assume they won’t pay the bill, but I’ve seen a blank check handed over in response to someone saying “we would do this, but it would be too expensive.”

10) Have five back-out plans.

You have to assume that your primary goal or solution will fail. Sometimes it’s political. This is a good thing if, in the end, an alternate solution works and delivers on time and on budget. There’s much more value in having multiple purposes for a product or solution. Suppose a Tier-2 storage system is targeted for an end-user computing platform, but the company wants to change direction. You might use this storage for another application. You might be able to allocate it to a test/dev lab, augment DR, or use it as a backup target. General-purpose solutions that excel in one area or another are a lot different from niche products that are really only good at one thing. In today’s fast-changing IT world, flexibility and agility matter.

Photo credit: holeymoon via Flickr

The 4-Step Approach To Building A Highly Effective Project Delivery Machine


If you ask different people in different industries what an effective project delivery organization is, you would most likely get many different answers. An organization’s success drivers will determine which area of the project delivery organization gets the most attention.

It has been my experience, working with large technology companies and value-added resellers, that most organizations value and expect efficiency and cost savings as the major deliverables of a project management office/team. These expectations can be mapped back to the 3 timeless tenets of project management: Scope, Time and Cost.

This is where the rubber meets the road, where you can separate amateurs from professionals, where a project manager can draw the line and show up as a mere coordinator or as a professional project manager. Being able to align the project management methodology with the organization’s strategic direction is not a simple task, and it is one that can definitely provide fruitful rewards for the brave souls who embark on its pursuit. The approach will depend on the type of organization and the PMO size, or lack thereof.

In 10+ years of managing projects of all sizes and technologies, and of reading and sharing experiences with other PMs, these steps have allowed me to deliver projects successfully and effectively:

  1. Assess
  2. Plan
  3. Implement and adjust
  4. Repeat steps 1-3

These steps are the approach I employ when I join a new organization, whether I take a new assignment or job to project manage a new technology (not necessarily new to the market, but new to me) or to enhance the organization’s PMO.

In the assessment phase I sit down with the delivery team, services manager and executive sponsor. I dive into the organization and concentrate on two very important activities: 1) finding out everything about the existing project management process, and 2) starting to manage existing and new projects right away. This allows me to learn the culture of the organization and determine what is in place and what is missing. Separately, I review historical data and records to get a picture of what has worked and what has not.

Once sufficient data points have been collected and the company’s strategic plan is understood, it’s time to develop the enhancement plan. As most PMs know, we have to be sensitive to the words we use when joining a new organization. If you come across as the “fix-it-all expert”, expect resistance and, in some cases, a lack of support for your plan.

My experience is that once the plan has been determined and drafted, it is best to start implementing it immediately. I want to be clear: this depends to a large extent on the type of organization you are in. In a large organization with an established PMO, executing the plan right away can be, and most likely will be, viewed negatively, since you have to review it with the organization’s PMO for feedback. The best feedback I have experienced is the feedback the field gives you. Sometimes I find that organizations have processes in place but do not necessarily use them to deliver projects.

Armed with feedback from implementation of the plan, it is time to make adjustments to the plan right away. What worked? What can be improved?

As you continuously repeat the process, you will start noticing that efficiency improves: resource allocation and time frames are optimized. I have found that this platform of constant process improvement conveys a sense of control and order that most of the customers I have worked with appreciate and expect. Once you reach this level of operation in your organization, in my experience, scope-creep situations are dealt with head-on and resolved professionally and fairly with customers.

Photo credit: x-av via Flickr

Save Time and Increase Accuracy by Using Microsoft Excel to Script Repetitive Tasks


If you’re like me, when you have a lot to do and not a whole lot of time to do it, saving time on repetitive tasks certainly helps—creating scripts with Excel can do just that.

Aside from writing some old batch files, I don’t really know how to script things that well. Sure, I can take someone else’s script and modify it fairly easily, but making one from scratch is not what I do best.

So what I like to do is use Excel to create some scripts for me. I’ll go through an example of how to use Excel to quickly create some scripts.

Using Microsoft Excel to Create Scripts

Below is the command for a VNX to create a new LUN in a pool. Columns A, C, E, G, I, K don’t change (neither does B in this example, but it does change per VNX system you’re working on), but the rest do, so you can just copy and paste those down the columns. After that, you can put in the data that you want.

The important thing is that in Excel, if you start a cell with a dash (minus sign), Excel will try to interpret it as a formula and change what is in the cell. So I use leading spaces in all the columns that don’t change, and make sure there are no spaces in the columns that do change.

[Screenshot 1]

After you get all your information into Excel, highlight it all except the first row with the column information. Then, copy and paste it into Notepad.

When you do this, it copies over a tab from Excel that separates each cell—as you can see below, the spacing is way off.

[Screenshot 2]

The highlighted area is one character (a tab): highlight it, copy it, and then replace it with nothing.

[Screenshot 3]

[Screenshot 4]

After you replace all, you will get the format you need, and you can copy and paste the result into whatever CLI you need to use. I would suggest pasting just the first line at first, to ensure you have no errors in your syntax.

[Screenshot 5]

This example was specifically for an EMC VNX, but you can use this method for any repetitive task where only a portion of the information changes.
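
If you are comfortable with a small script, the same join can also be done outside of Excel. Below is a minimal Python sketch of the idea, reading the per-LUN values from CSV and prefixing the fixed command fragments; the naviseccli flags and values shown are placeholders for illustration, not verified syntax for your array.

```python
import csv
from io import StringIO

# Fixed fragments of the command (placeholders; confirm against your array's CLI docs).
PREFIX = "naviseccli -h 10.0.0.1 lun -create -poolName Pool_0"

# Per-LUN values, e.g. exported from Excel as CSV: name, capacity (GB), LUN id.
rows = StringIO("""\
name,capacity_gb,lun_id
LUN_SQL_01,500,101
LUN_SQL_02,500,102
LUN_EXCH_01,750,103
""")

for row in csv.DictReader(rows):
    # Join the fixed and variable parts into one ready-to-paste command line.
    print(f"{PREFIX} -name {row['name']} -capacity {row['capacity_gb']} -sq gb -l {row['lun_id']}")
```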

VMware Backup Using Symantec NetBackup: 3 Methods with Best Practices


Symantec’s NetBackup has been in the business of protecting VMware virtual infrastructures for a while. What we’ve seen over the last couple of versions is the maturing of a product that at this point works very well and offers several methods to back up the infrastructure.

Of course, the Query Builder is the mechanism used to create and define what is backed up. The choices can be as simple as servers in this folder, on this host or cluster, or more complex, defined by the business’s data retention needs.

Below are the high-level backup methods, with my thoughts on each and the merits thereof.
 

1: SAN Transport

To start, the VMware backup host must be a physical host in order to use the SAN transport. All LUNs (FC or iSCSI) that are used as datastores by the ESX clusters must also be masked and zoned (FC) to the VMware backup host.

When the backup process starts, the backup host can read the .vmdk files directly from the datastores using vADP.

Advantage

The obvious advantage here is that one can take advantage of the SAN fabric, thus bypassing the ESX hosts’ resources entirely when backing up the virtual environment. In my experience, backup throughput is typically greater than with backups over Ethernet.

A Second Look

One concern I typically hear from customers, specifically from the VMware team, is about presenting the same LUNs that are presented to the ESX cluster to the VMware backup host. There are a few ways to protect the data on these LUNs if this becomes a big concern, but I’ve never experienced any issues with a rogue NBU admin in all the years I’ve been using this.
 

2: Hot-add Transport

Unlike the SAN Transport, a dedicated physical VMware backup host is not needed to back up the virtual infrastructure. For customers using filers such as NetApp or Isilon with NFS, Hot-add is for you.

Advantage

Just like the SAN Transport, this offers protection by backing up the .vmdk files directly from the datastores. Unlike the SAN Transport, the backup host (media server) can be virtualized, saving additional hardware cost.

A Second Look

While the above does offer some advantages over SAN Transport, the minor drawback is that ESX host resources are utilized with this method. Numerous factors determine how much impact, if any, there will be on your ESX farm.
 

3: NBD Transport

The backup method used with NBD is IP-based. When the backup host starts a backup process, an NFC session is started between the backup host and the ESX host. Like the Hot-add Transport, the backup host may be virtual.

Advantage

The benefit of this option is that it is the easiest to configure and the simplest in concept compared to the other options.

A Second Look

As with everything in life, something easy always has drawbacks. One of them is the cost in resources to the ESX host: resource usage is definitely noticeable, and it grows with the number of machines being backed up.

With regard to NFC (Network File Copy), there is one NFC session per virtual server backup. If you were backing up 10 virtual servers off of one host, there would be 10 NFC sessions made to the ESX host’s VMkernel port (management port). While this won’t affect the virtual machine network, if your management network is 1Gb, it will be the bottleneck for backups of the virtual infrastructure. In addition, VMware limits the number of NFC sessions based upon the host’s transfer buffers (32MB).
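
As a rough illustration of that bottleneck, here is a quick sketch with made-up numbers showing how a 1Gb management link divides across concurrent NFC sessions and what that implies for total backup time (theoretical best case; real throughput will be lower due to overhead):

```python
link_gbps = 1.0            # management (VMkernel) network speed
concurrent_sessions = 10   # one NFC session per VM being backed up
vm_size_gb = 100           # average data read per VM, illustrative

link_mb_s = link_gbps * 1000 / 8             # ~125 MB/s theoretical maximum
per_vm_mb_s = link_mb_s / concurrent_sessions

total_hours = (concurrent_sessions * vm_size_gb * 1024) / link_mb_s / 3600
print(f"Aggregate cap ~{link_mb_s:.0f} MB/s, ~{per_vm_mb_s:.1f} MB/s per VM")
print(f"Backing up {concurrent_sessions} x {vm_size_gb} GB over that link "
      f"takes at least {total_hours:.1f} hours")
```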
 

Wrap-up: Your Choice

While there are 3 options for backing up a virtual infrastructure, once you choose one, you are not limited to sticking with it. To get backups going, one could choose NBD Transport and eventually change to SAN Transport … that’s the power of change.

Photo credit: imuttoo


IT Project Management, Activity vs. Results



With over 10 years of project management experience, I have fallen into this trap multiple times: the trap of simply creating status reports with no new information, sending emails with no new results, or holding status calls to review the same issues that have dragged on for days or weeks. Although status reports look like progress, sometimes they are nothing more than busy work.

As a project manager, it is easy to be swallowed by this phenomenon of “busy work” to avoid delivering bad news to the customer, facing off with an unresponsive engineer, or dealing with the mindset of a micro-manager. I recall documenting activities just to cover my bases, so that when a project hit the wall and the situation was escalated to the highest levels of management, I would have tons of information to justify why it was not my fault. Yet even though I covered my bases, the projects would still be in trouble.

As project managers, we have access to a wealth of important data. This key information provides honest feedback about our effectiveness and usefulness as project managers. Previously, I found myself giving the excuse that I did not have time to track important data such as project duration, resources involved, or customer profile details. Without this critical data, project management is like flying an airplane on autopilot with no feedback provided to the controller. With no input to the autopilot, the risk and the probability of the plane landing at the wrong destination increase exponentially. For our projects, this means tasks that are late, over budget, or out of scope.

From working on large multi-country programs to working in a regional firm managing several smaller projects in parallel, I have seen all aspects of IT project management. Each position offered a gold mine of opportunities to assess my effectiveness as a project manager, and in return these opportunities materialized in the form of metrics about the specific projects I have overseen.

Every time I took a results-based mindset, respect for the PM profession grew within the company. Providing status reports without facts can pass for some time, especially if they are presented with flashy dashboards and presentations. But eventually, this approach will simply produce another attachment that will not be opened when it hits a client’s inbox.

Don’t get me wrong, activity is a good thing. Activities focused on results, through status reports, emails and status calls, are effective as long as the data is 100% fact-based and accurate. You get your facts by tracking metrics on all your projects and by benchmarking against them. This will give you a historical trend that will allow you to optimize your resources, time, and money.

Once you start tracking results, you will bring added value to the organization, and that translates to owning your destiny as a PM.

Photo by @CurveTo


Speed Up First-time Avamar Backup by “Seeding” the Server


Avamar is a great tool to back up remote-office file servers to a private or hybrid cloud infrastructure. However, performing an initial backup can be a challenge if the server holds more than a few GB and the connection to the remote office is less than 100Mb.

In this scenario, the recommended process is to “seed” the Avamar server with the data from the remote server. A number of devices can be used to accomplish this. USB hard drives are the most often used; however, they can be painfully slow, as most modern servers only have USB 2.0 ports, which transfer only around 60MB/sec and are limited to 3-4TB in size. Copying 3TB to a USB 2.0 drive will typically take 12-16 hours. Not unbearable, but quite a while.
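
The seeding math is simple enough to sketch. With round numbers, here is the estimated copy time for a 3TB seed over USB 2.0 versus the Gigabit Ethernet option discussed next (theoretical bests; real copies add protocol and filesystem overhead):

```python
seed_tb = 3.0
rates_mb_s = {"USB 2.0 (~60 MB/s)": 60, "Gigabit Ethernet NAS (~120 MB/s)": 120}

for label, rate in rates_mb_s.items():
    hours = seed_tb * 1_000_000 / rate / 3600  # TB -> MB, then seconds -> hours
    print(f"{label}: about {hours:.1f} hours to copy {seed_tb:.0f} TB")
```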

Another option would be to install a USB 3.0 adapter card or eSATA card, but that requires shutting down the server, installing drivers, and so on. An alternative that I have had a good deal of success with is using a portable NAS device like the Seagate GoFlex drives or, for larger systems, the Iomega/LenovoEMC StorCenter px4-300d. The px4-300d has an added feature that I will touch on later. These NAS devices leverage Gigabit Ethernet and can roughly double the transfer rate of USB 2.0.

[Photo: portable NAS storage devices]

Moving the data to these “seeding” targets can be as simple as drag-and-drop, or using a command-line utility like xcopy on Windows or rsync on a Linux box, once you plug in the USB device or mount a share from the NAS drive. When the data copy is complete, eject the USB drive or unmount the share, power down the unit, package the drive for shipping, and send it back to the site where the Avamar grid lives.

At the target site, attach the portable storage device locally to a client and configure a one-time backup of this data. The Iomega device includes a pre-installed Avamar client that can be activated against the grid and backed up without having to go through an intermediary server.

Once you get this copy of the backup into the Avamar grid, activate and start the backup of the remote client. The client will hash its local data and compare it to what is on the grid, finding that the bulk of the unique data is already populated, which reduces the amount of data that must be transferred to only what has changed or been added since the “seed”.

Photo credit: Macomb Paynes via Flickr

Clear for Project Takeoff? The Importance of a Check List


Like a pilot before a flight, it is critical that a project manager have a checklist before any data infrastructure project begins.

Over my 10 years as a Project Manager, one of the most important documents to have ready before the kick-off meeting with the customer is the checklist. The list I’m referring to is the pre-installation checklist. In the technology world, whether I’m deploying a storage upgrade, a networking upgrade or a data migration project, there is a checklist of items that must be in place before the installation crew travels to the site.

I have a friend who is a pilot for a large commercial airline, and he tells me that before every flight he runs through a checklist with the co-pilot. The checklist spans a spectrum from very basic checks to items crucial to the flight’s integrity. It is the same list for the same type of plane every time, and a checklist also exists for landing that must be reviewed as well.

Similarly, for project management, lessons learned and experiences earned provide us with the information to generate our own checklists for specific projects. It is our job to make sure that we review the checklist with the stakeholders to ensure the integrity (on time, on budget and on scope) of the project. This basic exercise has saved me many times from having engineers travel internationally only to find the site not ready (power, space, cabling, etc.), something, obviously, I want to avoid!

Another similarity to airline pilots is the constant communication the pilots maintain with the control tower, to check that the flight is on course toward the planned destination. Similarly, we as PMs must maintain constant communication (status meetings, minutes, personal calls, etc.) with the stakeholders to make sure we’re on course (scope and time).

“It has been my experience that when basic project management steps are overlooked or not considered, the consequences down the road are very painful.”

It is well known what the consequences are for a plane off course, the worst of which is a crash. Our projects’ consequences are not nearly as drastic, thankfully, but we could end up with unsatisfied customers and projects delivered late, over budget and with poor quality.

It has been my experience that when basic project management steps are overlooked or not considered, the consequences down the road are very painful. Thus, checklists, along with constant communication, are necessary elements to increase the chances of delivering our projects on time, on budget and per scope.

Photo credit: atomicshark via Flickr


Your Go-To Guide For IT Optimization & Cloud Readiness, Part II


[Note: This post is the second in a series about the maturity curve of IT as it moves toward cloud readiness. Read the first post here about standardizing and virtualizing.]

I’ve met with many clients over the last several months who have reaped the rewards of standardizing and virtualizing their data center infrastructure. Footprints have shrunk from rows to racks. Power and cooling costs have been significantly reduced, while capacity, uptime and availability have increased.

Organizations that made these improvements made concerted efforts to standardize, as this is the first step toward IT optimization. It’s far easier to provision VMs and manage storage and networking from a single platform, and the hypervisor is an awesome tool that creates the ability to do more with less hardware.

So now that you are standardized and highly virtualized, what’s next?

My thought on the topic is that after you’ve virtualized your Tier 1 applications like e-mail, ERP, and databases, the next step is to work toward building out a converged infrastructure. Much like cloud, convergence is a hyped technology term that means something different to every person who talks about it.

So to me, a converged infrastructure is defined as a technology system where compute, storage, and network resources are provisioned and managed as a single entity.

[Diagram: the IT optimization pyramid]

Sounds obvious and easy, right?! Well, there are real benefits that can be gained; yet, there are also some issues to be aware of. The benefits I see companies achieving include:

→ Reducing time to market to deploy new applications

  • Improves business unit satisfaction with IT, with the department now proactively serving the business’s leaders, instead of reacting to their needs
  • IT is seen as improving organizational profitability

→ Increased agility to handle mergers, acquisitions, and divestitures

  • Adding capacity for growth can be done in a scalable, modular fashion within the framework of a converged infrastructure
  • When workloads are no longer required (as in a divestiture), the previously required capacity is easily repurposed into a general pool that can be re-provisioned for a new workload

→ Better ability to perform ongoing capacity planning

 

  • With trending and analytics to understand resource consumption, it’s possible to get ahead of capacity shortfalls by understanding when they will occur several months in advance (see the sketch after this list)
  • Modular upgrades (no forklift required) afford the ability to add capacity on demand, with little to no downtime
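
As a minimal example of that kind of trending, here is a sketch with made-up monthly figures that projects, from average growth, how many months remain before a pool fills up; real analytics tooling is more sophisticated, but the idea is the same.

```python
# Hypothetical monthly capacity consumption for one resource pool, in TB.
used_tb_by_month = [40.0, 42.5, 45.5, 47.0, 50.5, 53.0]
total_tb = 80.0

# Average month-over-month growth across the observed window.
growth_per_month = (used_tb_by_month[-1] - used_tb_by_month[0]) / (len(used_tb_by_month) - 1)
headroom_tb = total_tb - used_tb_by_month[-1]
months_left = headroom_tb / growth_per_month

print(f"Average growth: {growth_per_month:.1f} TB/month")
print(f"Projected capacity shortfall in about {months_left:.1f} months")
```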

Those are strong advantages when considering convergence as the next step beyond standardizing and virtualizing. However, there are definite issues that can quickly derail a convergence project. Watch out for the following:

→ New thinking is required about traditional roles of server, storage and network systems admins

  • If you’re managing your infrastructure as a holistic system, it’s overly redundant to have admins with a singular focus on a particular infrastructure silo
  • Typically means cross training of sys admins to understand additional technologies beyond their current scope

→ Managing compute, storage, and network together adds new complexities

  • Firmware and patches/updates must be tested for interoperability across the stack
  • Investment is required either in a true converged infrastructure platform (like Vblock or Exadata) or in a tool that provides software-defined data center functionality (vCloud Director)

In part three of IT Optimization and Cloud Readiness, we will examine the OEM and software players in the infrastructure space and explore the benefits and shortcomings of converged infrastructure products, reference architectures, and build-your-own type solutions.

Photo credit: loxea on Flickr
