Taking an Active Role in Personal Data Management and How It Affects You


In light of the recent hacking scandals at large national retailers and the attacks on celebrity iCloud accounts, taking an active role in personal data security is more relevant than ever. Due diligence and the integrity of our personal data are ultimately our responsibility as end users.

Especially so, as retailers continue to lobby Washington against upgrading the magnetic stripe and the infrastructure that supports the fifty-year-old technology. If you have ever traveled abroad, you may have noticed that credit cards there have a small chip embedded in the top corner. That chip provides a platform for encrypted data transmission and PIN authentication—two-factor authentication: present the card, then confirm the purchase with a PIN.

Why has this technology not yet been adopted in America?

(Lack Of) Adoption

Well, for the reason stated above. Each embedded card costs around $25, and upgrading every point-of-sale device and the infrastructure to support this technology will cost retailers billions of dollars. So you can understand the resistance. And if people are not demanding action from Congress, the status quo will continue.

“It’s important to realize that there is no silver bullet solution to having your personal data compromised.”

Even with no change in sight for the near term, there are steps you can take to protect yourself. However, it's important to realize that there is no silver-bullet solution to having your personal data compromised. We live in a fallible technological environment where the bad guys always seem to be a step ahead.

Taking Matters Into Your Own Hands

The good thing is, if you have ever used a VPN and a token to log into your work systems, you are already familiar with two-factor authentication, and adopting these methods in your personal life should be relatively painless.

Yes, taking an extra 30 seconds to log into your bank account, Gmail, iCloud, or Facebook, or using a PIN to unlock your smartphone, may seem annoying at first, but it's one of the many zero-cost things you can do to take an active role in securing your personal data. Also, asking retailers and banks for an additional verbal password when conducting business over the phone is a great way to prevent social engineering.

Practicing proactive data security will never totally eliminate the chance of being hacked or becoming a victim of identity theft, but it dramatically lowers your attack surface. Most of the tools hackers use are tuned to find data using lowest-common-denominator tactics. If you are using two-factor authentication, you become more effort than you're worth: an attacker scanning millions of accounts rarely takes the extra time to dig deeper into any individual one. These tools are all about quantity and speed—not quality.

“Practicing proactive data security will never totally eliminate the chance of being hacked or becoming a victim of identity theft, but it dramatically lowers your attack surface.”
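For readers curious what is under the hood of those extra 30 seconds, here is a minimal sketch of the time-based one-time-password algorithm (RFC 6238) that most authenticator apps implement, using only the Python standard library. The secret shown in the comment is the RFC's published test key, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                      # 30-second time window
    msg = struct.pack(">Q", counter)              # counter as big-endian 8 bytes
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key ("12345678901234567890" in base32); at T=59 the
# 8-digit code is the published test vector 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))
```

Because both sides derive the code from the shared secret and the current time, a stolen password alone is no longer enough to log in.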

I would not expect any movement from Congress or regulators to force retailers to adopt the embedded chip standard any time soon. When providing a safe retail experience is trumped by billions of dollars in capital expenditures for infrastructure upgrades, retailers are going to slow-roll this situation as long as they can.

The embedded chip is a good technology that has been adopted globally except in the United States (much like the metric system). With that wide adoption base, the platform has a life cycle and a history, and there is really no reason it can't evolve and improve for years to come. But while there is apathy, stalling, and ignorance, there are always those who will treat this moment as a crossroads for innovation.

A Software-Defined Future

Technology companies like Apple, PayPal, and Google are developing software-defined systems that use your smartphone, combined with biometrics and a PIN, as a proxy between you and your bank, creating an environment in which your data is never even shared with retailers. This adds a third element of authentication, effectively enabling three-factor authentication.

Software-based authentication methods have the potential to eclipse the embedded chip and harness the already very powerful hardware in your smartphone. With buy-in from the banks and credit card companies already secured, software-defined payment is moving forward with Apple Pay. It's a win for the American consumer; it's a win for Apple, providing another revenue stream; and ultimately, it gets retailers off the hook from spending billions on uprooting their existing infrastructure.

It will be interesting to see how mainstream adoption of Apple Pay plays out, as Google has offered these features for a few years already with Google Wallet on the Android platform.

Photo credits via Flickr: shuttercat7

Run Your Data Center: On iPhones?


In 2010, at a D8 conference, Steve Jobs made the famous analogy that “back when we were an agrarian nation, all cars were trucks, because that’s what you needed on the farm …

But as vehicles started to be used in the urban centers, cars got more popular. Innovations like automatic transmission and power steering and things that you didn’t care about in a truck as much started to become paramount in cars … PCs are going to be like trucks. They’re still going to be around, they’re still going to have a lot of value, but they’re going to be used by one out of X people.”


Someday, perhaps in this decade, phones may be displaced by personal wearable computers, in the same way that desktops are being replaced by mobile devices. There's a lot of computing power each of us is carrying around, all day, every day. That got me thinking about technologies in the data center and how we could leverage the power of the phone. Literally.

“What would a 42U rack full of iPhones look like?”

What would a 42U rack full of iPhones look like? The iPhone 5's specs include a dual-core 1.3 GHz CPU, 1 GB of RAM, multiple network adapters, and plenty of fast, low-latency solid-state storage. It weighs about ¼ lb. and retails for about $849.

We could easily fit 1,152 phones in a 42U rack: that's 2,304 cores, 1.15 TB of RAM, 72 TB of storage, and 270 Gb of aggregate network and storage bandwidth. It would weigh around 300 lbs. and cost roughly $978,000. I'm not suggesting that people would actually drop off their phones at the data center, but since users are already bringing their phones into the workplace, and the hardware has already been paid for, what's missing to make this actually work?
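The rack arithmetic is easy to sanity-check with a quick script. This assumes the 64 GB top-capacity model, the $849 retail price quoted, and decimal terabytes for RAM; note the exact core count works out to 2,304.

```python
# Back-of-the-envelope math for a 42U rack packed with iPhone 5 units.
PHONES = 1152                 # assumed packing density for a 42U rack
CORES_PER_PHONE = 2           # dual-core A6
RAM_GB_PER_PHONE = 1
STORAGE_GB_PER_PHONE = 64     # top-capacity model assumed
PRICE_USD = 849               # retail, per the specs above
WEIGHT_LB = 0.25              # about a quarter pound each

cores = PHONES * CORES_PER_PHONE                      # 2,304 cores
ram_tb = PHONES * RAM_GB_PER_PHONE / 1000             # ~1.15 TB of RAM
storage_tb = PHONES * STORAGE_GB_PER_PHONE / 1024     # 72 TiB of flash
cost_usd = PHONES * PRICE_USD                         # ~$978,000
weight_lb = PHONES * WEIGHT_LB                        # ~288 lbs before racking

print(cores, ram_tb, storage_tb, cost_usd, weight_lb)
```

The stated 300-400 lb rack weight is plausible once you add shelving, cabling, and power distribution on top of the bare 288 lbs of phones.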

My opinion is that desktops are declining, and VDI isn't taking off, because the personal, mobile aspect of computing hasn't made its way to the phone, where it clearly needs to be. The phone has taken off so well because it's personal, it's mobile, and we always have it on us. We can't live without it. It's powerful, and it is the center of the personal IT universe. So we in IT need to find a better way to run our enterprise applications on it.

“The phone is the center of the personal IT universe. So we as IT need to find a better way to run our enterprise applications on it.”

Anyone who has already created an app to run their software on these phones is ahead of the game. But we still need someone to write the software that makes it possible to leverage the existing enterprise applications on the phone, and more importantly—and here’s where I think there’s a hidden gem of an opportunity—figure out a way to leverage the CPU, RAM, and storage in the phone to offload traditional data center costs.

For example, VDI processing actually runs on data center servers, but is this ideal? Why not leverage the CPU on the phone somehow? I hate to see those 2,304 cores just sitting there, mostly idle. It seems like such a waste.

Photo credit: moridin3335r via Flickr

Clear for Project Takeoff? The Importance of a Checklist


Like a pilot before a flight, it's critical that a project manager have a checklist before any data infrastructure project begins.

Over my 10 years as a project manager, one of the most important documents to have ready before the kick-off meeting with the customer is the checklist. The list I'm referring to is the pre-installation checklist. In the technology world, whether I'm deploying a storage upgrade, a networking upgrade, or a data migration project, there is a checklist of items that must be in place before the installation crew travels to the site.

I have a friend who is a pilot for a large commercial airline, and he tells me that before every flight he runs through a checklist with the co-pilot. The checklist spans everything from very basic checks to items crucial to the flight's safety. It's the same list for the same type of plane, every time. A checklist exists for landing, too, and it must be reviewed just as carefully.

Similarly, in project management, lessons learned and experience earned provide us with the information to generate our own checklists for specific projects. It is our job to review the checklist with the stakeholders to assure the integrity (on time, on budget, and on scope) of the project. This basic exercise has saved me many times from having engineers travel internationally only to find the site not ready (power, space, cabling, etc.), something I obviously want to avoid!
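As a sketch of how such a pre-installation checklist might be tracked before anyone books a flight, here is a minimal example. The items are typical ones drawn from the parenthetical above (power, space, cabling), not the author's actual list.

```python
# Illustrative pre-installation checklist; items are typical examples only.
PRE_INSTALL = {
    "power circuits provisioned": True,
    "rack space reserved": True,
    "cabling run and labeled": False,
    "site access arranged for engineers": True,
}

def outstanding(checklist):
    """Return the items still blocking the installation crew's travel."""
    return [item for item, done in checklist.items() if not done]

# Review with stakeholders before the crew travels:
blockers = outstanding(PRE_INSTALL)
ready = not blockers
print(blockers, ready)
```

A list like this is trivial to maintain, and walking through it with stakeholders at the kick-off makes the "site not ready" surprise far less likely.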

Another similarity to airline pilots is the constant communication they maintain with the control tower to confirm the flight is on course for its destination. Likewise, we as PMs must maintain constant communication (status meetings, minutes, personal calls, etc.) with the stakeholders to make sure we're on course (scope and time).

“It has been my experience that when basic project management steps are overlooked or not considered, the consequences down the road are very painful.”

It is well known what the consequences are of a plane off course, the worst being a crash. Our projects' consequences are not nearly as drastic, thankfully, but we could end up with unsatisfied customers and projects delivered late, over budget, and with poor quality.

It has been my experience that when basic project management steps are overlooked or not considered, the consequences down the road are very painful. Thus, checklists, along with constant communication, are necessary elements to increase the chances of delivering our projects on time, on budget, and per scope.

Photo credit: atomicshark via Flickr


Review: Big Benefits to Using EMC VPLEX Local


EMC's VPLEX is a very powerful tool, whether it's deployed in a Local, Metro, or Geo configuration. What everyone always talks about is the benefit of the VPLEX Metro configuration to the data center. And it is a big benefit: an entire data center could go offline, yet if you've deployed VPLEX Metro and mirrored everything, operations continue running at the other data center. It's possible no one would even notice that one of the sites went down.

The picture below shows an overview of what a VPLEX Metro configuration looks like.


From my professional experience, what no one seems to talk about is the benefit of VPLEX Local. Even if you have a single array, say a CX3-40, a VPLEX Local installation will one day help you. The key is to have it installed!

The picture below shows an overview of what a VPLEX Local configuration looks like.


So… why do I like VPLEX Local so much, even with only a single array? First, let's address what it won't do for you.

It will NOT provide any additional redundancy to your infrastructure, even if everything is set up properly. It is also one more thing to configure, so there is always the chance of setting it up improperly.


What are the benefits I see from having a VPLEX Local installation?

  1. It has a large cache that sits between the host and the array.
  2. If you don’t have a DR location currently yet will have one in the future, you have more options on how to get data to the DR site. You can do VPLEX Geo, Metro, or use RecoverPoint with VPLEX.
  3. If you want to mirror your array and have array failover capability within the same datacenter, you already have the VPLEX setup and won’t have to reconfigure anything.
  4. It is a single point to connect all of your hosts, as the VPLEX acts as storage to the host and as a host to the storage. If you have more than one array, you don't have to worry about connecting your hosts to different storage array vendors and getting the connection settings correct. You have one place to do it all.
  5. One of the biggest reasons (as if the above weren't enough) is that you never have to take downtime for a migration again. If you read that and weren't excited, then you haven't done as many migrations as I have. They're long. Planning takes weeks, and the migration itself takes weeks or months. A lot of people have to be involved, and downtime is required: not all in one night, but more like four to six hours one night every week for three months!

    Usually the cutovers happen on a Friday or Saturday night, and nobody wants to do this. Occasionally things don't go as planned and you don't get as much done as you anticipated, or there was a setback. The setbacks could be related to systems not working properly, or to something like a key employee forgetting they had to attend a wedding that weekend, so you have to push off that week's migration. I've seen it all.

Migrations are complicated, and hiring someone to do them costs a lot of money. As much as you trust your employees, how often do they do migrations? Once every four years? Wouldn't you rather have the peace of mind of paying someone who does this professionally? You will need to hire someone that does migrations often, and they don't come cheap.


How does having a VPLEX Local fix this?


Let's assume you already have it installed and running, and your hosts have storage presented from it (as you should). The next step is to buy a new array, configure it, and present the new storage to the VPLEX. Then you go to the VPLEX and mirror everything from the old array to the new array. Once that is done, you take away the original leg of the mirror (the old array) and you're done. No downtime, hardly any planning, and no one from your company has to work late. You also save a ton of money, because you don't have to pay someone else to do it for you.
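That mirror-based migration flow can be summarized in a few lines of Python. This is a toy model of the attach/sync/detach sequence described above; the class, method, and volume names are illustrative, not actual VPLEX CLI or API syntax.

```python
# Toy model of a nondisruptive, mirror-based array migration.
class VirtualVolume:
    def __init__(self, name, array):
        self.name = name
        self.legs = {array}            # arrays currently backing this volume

    def attach_mirror(self, new_array):
        """Add a second leg; the sync runs while hosts keep doing I/O."""
        self.legs.add(new_array)

    def detach_mirror(self, old_array):
        """Retire the old leg once the sync is complete."""
        if len(self.legs) < 2:
            raise RuntimeError("refusing to drop the only copy")
        self.legs.discard(old_array)

vol = VirtualVolume("prod_lun_01", "cx3_40")   # existing single-array volume
vol.attach_mirror("new_array")                 # mirror everything to the new array
vol.detach_mirror("cx3_40")                    # done: hosts never saw a change
print(vol.legs)
```

Because the hosts only ever see the virtual volume, swapping the backing array underneath it is invisible to them, which is exactly why no downtime is needed.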

Attn, IT Departments: Do You Have The “Right” Server Capacity Tools?


In the case of most IT departments I talk to, I’d settle for ANY tool.

This is something that just boggles my mind. If an excavation company showed up at your business to dig for a new fiber line with a spoon, you’d have security escort them off the premises. If a carpentry contractor showed up with Lego blocks instead of lumber, saws and hammers, you’d rescind the contract. Yet, these scenarios are precisely what most businesses do to their IT teams.

Case in point: a recent RFP we were invited to respond to sets out a rather sizable infrastructure in need of replacement. A couple hundred terabytes of storage, all being replaced in one project, migration of servers, implementation of a new fabric, etc. Yet the RFP provided very little actual detail about the existing environment. IOPS and capacity targets are laid out as requirements, but there is no data on current levels or on how the applications are laid out, and not even so much as a server inventory.

Like any consultant, I took advantage of the question period to ask for these details and a few more. Virtually every response told me that the customer has neither the tools nor the knowledge to obtain such information. Understand that I'm not asking for the meaning of life, just for a spreadsheet that lists the servers, the capacity associated with each one, and maybe some average I/O measurements over a few days' worth of time.

This process should not be this way! Why do businesses hamstring their departments in this manner? Would you send your sales team out without knowledge of their product, or tell manufacturing that a screwdriver doesn’t have enough ROI on it to justify purchasing?

The definition of insanity (or stupidity, depending on whom you talk to) is doing the same thing repeatedly, in the same manner, and expecting different results. That's what businesses do to IT departments all the time. They refuse to buy them tools, but hold them fully accountable when they fail to see a performance problem or a failure coming and prevent it. And this is only worsening in environments where virtualization has become the standard, not the exception.

The problem really isn’t that hard to solve. It doesn’t require a whole team of people to manage a tool set like maybe it did in years past. Tools like vCenter Operations from VMware and Veeam Reporter (just to name a couple) make this problem disappear.

IT departments share some of the blame in this, too. Simple built-in operating system tools like System Monitor or iostat can fill a lot of knowledge gaps and provide a considerable amount of information to design engineers like myself.
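As an example of how far a built-in tool can get you, here is a minimal sketch that turns a captured `iostat -dx` report into per-device average IOPS. The sample text is illustrative, and the columns are looked up by header name because the layout varies between sysstat versions.

```python
# Example captured iostat extended-disk output (illustrative sample).
SAMPLE = """Device r/s w/s rkB/s wkB/s
sda 12.0 30.0 480.0 1200.0
sdb 5.0 2.5 200.0 64.0
"""

def iops_by_device(report):
    """Sum read and write ops per second for each device in the report."""
    rows = [line.split() for line in report.strip().splitlines()]
    header = rows[0]
    r, w = header.index("r/s"), header.index("w/s")
    return {row[0]: float(row[r]) + float(row[w]) for row in rows[1:]}

print(iops_by_device(SAMPLE))
```

Collect a few samples a day over a week, average them, and you have exactly the kind of spreadsheet a design engineer needs to size a replacement environment.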

Help us help you, customers and prospects! Remember that using the right tool for the job is just as critical as getting the job done!

Photo Credit: denisecarbonell

Defining the New CIO Role: More Business Leader, Less “Head of IT”


This isn’t my first blog post to ponder this question (“Dear Mr. CIO…”), and I’m sure it won’t be the last. I experienced something this past week that has me thinking about the answer from a new perspective.

I'll begin by stating that I have aspired to be a CIO since the start of my career in IT. In many ways, my dream career path is to be a CIO who is actively engaged in the business as a technologist, presenting relevant data to the rest of upper management to aid in quick, informed business decisions that drive revenue and profitability.

I have written previously about the idea that most CIOs are in fact Infrastructure Officers, not Information Officers, and that concept has generated some interesting discussion. I love technology, I love information analysis, and I think that, approached properly, the CIO position can be the most critical on any executive management team.

Breaking with the popular CIO mold was a gentleman holding this title whom I met last week. Actually, he holds the dual title of CIO and CFO. He is technically adept, but he isn't a nuts-and-bolts geek about every feature and minute detail of the technology his organization uses—he employs people he trusts to do that for him.

Herein lies his true power: he actually trusts these employees, empowers them, and expects them to do the same. He doesn't make technology decisions without consulting his IT team and receiving their full input. I can't tell you how many CIOs I've worked with who SAY they do that, yet a lunch meeting with a vendor can unravel months of work by their team and replace a well-thought-out design with one conceived to push a product (and in the spirit of disclosure, yes, I've unraveled my fair share of competitors' designs, but I like to think I put at least an equal amount of effort into my own).

I've seen a few CxO- and VP-level folks do this before, and while learning to recruit, hire, and retain talent you can truly trust is a skill more managers should learn and cherish, that alone wasn't all that unique. What really stood out to me was his understanding of the "Information" component in the CIO title. I had a great conversation with him about how he is regularly looking for ways to analyze his business's data to improve its overall product. It was very, very refreshing. This CIO cares about the technology used under his direction and absolutely does not want his team skimping, or being short-changed, with technology that makes their jobs harder.

At the end of the day, though, if they can do what they need to do with a hamster in a wheel, and still provide business information for the rest of the executive team to analyze and use, this particular CIO doesn't get tied up in those ins and outs. He empowers his team and provides the right amount of resources for them to deliver a robust infrastructure to the business, and then he actually USES it.

So that got me thinking about whether the long-standing assumption that a person with a CIS degree is the best person to run IT is a good one. I've always thought it was, based on my view of what a CIO should be doing for the business, but after talking with this CFO, I'm not so sure.

It may be that a better approach is to put a real business person in the role, but ensure that they leave technology decisions to a trusted technologist and learn some of the technical details. That way, they come from a business perspective and bring a more business-oriented approach to IT decision making instead of a purely technological one. That wouldn't preclude someone in IT from climbing the ladder to the CIO role, of course. It just requires that they understand what their business really does and what it needs as a prerequisite to attaining the role. It's a delicate balance: put a strictly non-technical person in that position, and you could completely hamstring your IT staff without realizing it until they've all quit, which I've seen happen in other organizations.

I'm pretty excited about working with this individual further to see how he deals with some other issues around data mining and presentation. As I learn more, I'll share what I can. I'd be interested in whether your organization would consider (or has considered) a non-technical person in the CIO role, and in your thoughts on the matter.

Photo Credit: Chandra Marsono via Flickr

Integrated Data Storage Named to Crain’s 2011 Fast Fifty, Third Consecutive Year on List


For the third year in a row, Integrated Data Storage has been named to the Crain’s Chicago Fast Fifty list, coming in at #13. After receiving the great news, I sat down with Matt Massick, our CEO, and Alan Dorrian, our Chairman and Founder, to discuss why and how IDS continues to grow and move forward at such an extraordinary pace.

Shannon: In your opinion, to what do you attribute Integrated Data Storage's incredible growth over the past few years, which has led to where we are today, ranking in the top 15 of the Crain's Fast Fifty three years consecutively?

Matt: I think the reason for our growth is three-fold:

  1. We’ve gone into the marketplace and hired the best talent, whether that be from a sales perspective, from a pre-sales engineering and architectural standpoint, or from a post-sales implementation standpoint. I think we’ve done one hell of a job in doing just that, which has been validated by the various awards we’ve won in addition to the Crain’s listing, like the EMC Velocity Services Quality Award and the CRN Tech Elite 250.
  2. The second reason for our growth is we’ve provided an environment to our employees in which they can excel.
  3. And the third area is we’ve never lost sight of the fact that our customers are number one and our job is to serve them.

Alan: A large part of what has made us successful is our desire to go above and beyond to take care of our customers, no matter what the situation or issue. This dedication to our customers is what really sets us apart from others in the industry.

In some instances, we have helped customers architect and install products they didn't even obtain from us. One in particular had shipped the product into London, which at the time wasn't an authorized distribution area for us but was still in the product set we covered. The customer needed a very critical upgrade supporting their mission-critical data. So, for a product we didn't sell or profit from, we sent our lead engineer to London to supervise the upgrade on our own dime. We did it simply because we knew how important it was to the customer, and the customer was ultimately grateful for the successful upgrade.

This “no matter what” approach to our customer base is evident in every facet of IDS and contributes directly to our growth.

Shannon: What is your vision for the future of IDS, and how does that relate to the continuously evolving IT marketplace?

Matt: I think the marketplace is screaming to have more partners like IDS, as opposed to the various catalogue houses that may be out there selling everything from a pencil sharpener to a storage array. What we want to focus on is continuing to hire the top talent and engineering expertise, so that we can get deeper into future technologies and use them to help our customers transform the data center into a business driver.

Alan: My vision for IDS is for us to remain intensely customer-focused—as the saying goes, people buy from people, and we have very much personalized the work that we do. My cell is listed on my business card. As the owner of the company, you can reach me 24/7, and that same attitude is held by our sales and engineering teams. However the technology looks a year or ten years from now, businesses are going to seek out and partner with trusted advisers they can rely on for accountability and results. As we get bigger, our commitment is to retaining that personalized “smallness” our customers appreciate.


This has been a superb year here at IDS, as we’ve continued to make great strides moving forward at an extraordinary pace, mainly due to the heavy investment in our pre-sales technical architects and post-sales engineers. That investment hasn’t simply been adding more personnel: we’ve also strengthened our set of Tier 1 technologies with which they architect and implement, now having added NetApp and Cisco as manufacturer partners.

At the end of the day, though, it is our relationship with our customer that remains #1.

In a Roomful of Award Winners at EMC World, IDS Is The Last Partner Standing


Two weeks ago, I had the honor of representing IDS at EMC World to receive our Velocity Services Quality Award. I say honor because this is one of those awards that really matters (in my mind) because it is solely based on Customer Feedback—the thing that ultimately drives our business. A little background on the award for anyone who is curious:

[framed_box bgColor=”#EFEFEF” rounded=”true”]Several years ago, EMC implemented a program called the Authorized Services Network (ASN). There were hundreds of resellers in North America certified to sell EMC, but only a select handful could qualify to be ASN-certified and actually perform EMC implementations for their customers. This program requires rigorous testing of multiple Pre-Sales and Post-Sales Engineers to prove that the company is dedicated to not just selling EMC equipment, but providing their customers the highest level of service with their engineering expertise.

Back in 2007, EMC decided to recognize the best of the best by creating an ASN Quality Award for the top implementation partner in North America, based completely on customer feedback. After a reseller performs an implementation for a customer, that customer receives a third-party survey asking how the implementation went, would they use the reseller again, would they recommend them to peers in the industry, etc. Based on those responses, the ASN Partners were ranked and IDS finished at the top of the list, receiving EMC’s first ever ASN Quality Award.

In 2008, EMC decided to open up the Award a bit and presented the award to two partners. In subsequent years, a few more Partners made the list as well. Fast forward to 2011. EMC changed the name of the award to the Velocity Services Quality (VSQ) Award but the concept is exactly the same.

This year, 14 partners received the honor at EMC World for their dedication to engineering excellence and customer satisfaction. The ceremony started by naming the first-time winners, then the two-time winners, and so on. At the tail end, IDS was announced as the only five-time winner of the prestigious award. To be named the #1 partner for the largest storage manufacturer in the world, based entirely on customer satisfaction, is a huge honor, and I was proud to be there accepting on behalf of the IDS team.[/framed_box]

First off, I would like to say thank you to our customers. Your dedication to IDS and the services that we provide is what makes us great. We appreciate the long-term business Partnerships and look forward to many more years of joint prosperity.

To our Engineers: thank you for making this award possible! You work long hours at customer sites, study technical materials at night to keep your expertise at the highest possible level, and frequently spend time away from your families supporting the customers who ultimately give us these high marks. You are the lifeblood of this organization, and we, and our customers, appreciate everything that you do.

And finally, to the other VSQ Award Winners this year, congratulations. It is an elite group to be in and I can appreciate all of the hard work that it takes to achieve this level of accomplishment. I look forward to seeing you at the award ceremony for many years to come … and, of course, always being the last man standing.

ISPs Pop a Cap in Business Cloud Adoption #datacaps


If you use broadband internet access in your home, then you probably already know that data caps and overage charges are all the rage these days with ISPs. Most of the time when you read some outrageous story about caps, it's focused on a consumer who left a WiFi access point open and a neighbor decided to torrent the entire DVD collection of The Sopranos across it. That may be changing, though.

AT&T just imposed caps on a good chunk of their customer base, including small business DSL customers. Hello, big time barrier to the cloud, especially if other providers follow suit.

AT&T says it will implement a 150GB monthly cap on landline DSL customers and a 250GB cap on subscribers to U-Verse high speed internet starting on May 2nd. AT&T will also charge overage fees of $10 for every additional 50GB of data, with two grace periods to start out — in other words, the third month you go over the cap is when you'll get charged.

As I've written in a previous blog post regarding the roadblocks to cloud, there are three barriers to ubiquitous cloud adoption:

1) Institutionalized thinking

2) Security

3) Bandwidth costs (with what we've seen from Amazon and Google lately, I may be adding a fourth, the providers themselves; that's another discussion for another day, though)

With an unlimited-bandwidth connection, say a 50Mbps down/5Mbps up business cable connection, using cloud services is relatively simple (assuming you can overcome the other two barriers). What difference does it make if you have to restore an entire desktop or server hard drive from your MozyPro subscription? Well, if one restore chews up half or more of your monthly bandwidth allotment, it might make a big difference.
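A quick back-of-the-envelope calculation, using the DSL cap and overage pricing quoted above and a hypothetical 100 GB restore, shows how fast a single recovery eats the allotment:

```python
import math

# Numbers from the AT&T announcement quoted above.
CAP_GB = 150          # monthly DSL cap
BLOCK_GB = 50         # overage billing block
FEE_PER_BLOCK = 10    # $10 per extra 50 GB block

def overage_fee(usage_gb):
    """Dollars owed for a month's usage, rounding up to whole blocks."""
    over = max(0, usage_gb - CAP_GB)
    return math.ceil(over / BLOCK_GB) * FEE_PER_BLOCK

# One 100 GB server restore: two-thirds of the month's cap, no fee yet.
# Add normal business traffic on top, and the overage fees start stacking.
print(overage_fee(100), overage_fee(200), overage_fee(275))
```

One restore leaves only 50 GB of headroom for the rest of the month's traffic, which is exactly why caps and cloud backup make an awkward pair.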

What’s so interesting to me is that these bandwidth providers are almost all working to create, or already have, cloud services in their data centers. So for them, cloud is a double win; they hook you by convincing you how much lower your costs will be, and then club you over the head when you actually use the services and go over your bandwidth caps.

We’ll obviously have to see how this goes, but if it heads down the same path the consumer caps have, it isn’t going to be pretty for cloud based infrastructures, that’s for sure.

Further Reading:

Time Warner Monthly Data Caps Detailed

Verizon Data Caps Coming

AT&T To Impose Broadband Data Cap

Photo Credit: targuman

IT Certifications: Why Are They Important, Again? #CareerAdvice


As long as the IT industry has been around, there have been people longing to be taught how to navigate the different technologies out there. The advantage of instruction is that most times you receive a certification, which validates your knowledge of the technology you were trained on.

Now, some of us try to stick to the idea that we can learn things on our own through tried-and-true hands-on experience. While this works in some situations, in others it can be more beneficial to get training from someone with experience. In this day and age, we can get certified in pretty much anything relating to the information technology sector: networking, storage, virtualization, etc. These are all areas we can take classes on and be tested in to receive certifications.

The biggest question regarding certifications that I get from peers is “What is the value of certifications these days? Aren’t they just pieces of paper to validate that I took and passed a test?”

While this is a valid question, it brings up a couple points I’d like to make.

1) First and foremost, will having certain certifications in your field make you eligible for a higher pay scale? Most of the time, the answer is yes. Certifications help you create demand for yourself in the industry. Take my previous employer, for instance: I took the VMware Certified Professional (VCP) exam and was immediately able to garner a raise from my manager. Why? Because we used VMware in our environment, and having a knowledgeable person on the team was of high value to my manager.

2) My second point regarding certifications is to take only the ones that are valuable to you or the business. If you are a consultant who primarily does security work revolving around compliance, don't go out and take the CCNA for Voice—it really won't help you. In this case, you would want to focus on the CISSP, as it directly relates to your security role and will help your bottom line. When it comes to winning new business, the CISSP certification shows customers that you're very focused on what you do. Likewise, if your employer runs Cisco VoIP in the enterprise, don't go and take the Avaya voice certifications. The bottom line is to take certifications that help you and your employer—and, of course, ones that will eventually get you more money!

There are a lot of areas you can self-study without taking an instructor-led course. Pick up some books, take online training, and get with the program. My main point in breaking this down is this: earning certifications can only help your career overall. Yes, without a doubt, they will help the business; but mainly, they'll help you excel in your professional life.

Set some goals for yourself, say, earning two certifications in a six-month period. This way, when your six-month or annual review rolls around, you can show your manager that you care about what you do and about getting better at your job.

Photo Credit: nolapoboy via Flickr