All Posts By Brad Voogel

Unstructured Data, the Silent Killer of Your IT Budget: How are you stopping the bleeding?

By Brad Voogel | Backup, Replication, Storage

Like most organizations, you are probably hosting your unstructured data on traditional NAS platforms. The days of storing this data on these legacy systems are coming to an end. Let’s look at some of the setbacks that plague traditional NAS:

  • Expensive to scale
  • Proprietary data protection – third-party backup software is needed to catalog and index
  • Inability to federate your data between disparate storage platforms onsite or in the cloud
  • High file counts, which can cripple performance, increase backup windows, and require additional flash technology for metadata management
  • File count limitations
  • High “per-TB” cost
  • Some platforms are complex to administer
  • High maintenance costs after Year 3


Trending cDOT Transition Considerations: What You Need to Know

By Brad Voogel | Networking, Storage, Virtualization

When considering a transition from current 7-Mode systems to clustered Data ONTAP (cDOT), it’s important to understand the limitations, timing, and complexity. At IDS, we help our customers navigate and understand how this process impacts their production environment. We understand that every customer’s architecture is different, but we have compiled some questions that continue to trend in our conversations.

ONTAP 8.3 Update: QOS Commands, Consider Using Them Today

By Brad Voogel | Data Center, How To, NetApp

NetApp has included some very powerful troubleshooting commands in the 8.3 update that I’d like to bring to your attention: the QOS statistics command and its subcommands. Prior to 8.3, we used the dashboard command to view statistics at the cluster node level. The problem with dashboard is that it reports cluster-level statistics, so it can be difficult to isolate problems caused by a single object. The advantage of the QOS commands is that we now have the ability to target specific objects in a very granular fashion.
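For example, here is the kind of thing you can run from the cDOT clustershell. This is only a minimal sketch; the SVM and volume names below are hypothetical, and you should check the ONTAP 8.3 man pages for the full list of subcommands and parameters.

To get a rolling, cluster-wide view of IOPS, throughput, and latency by workload:

    cluster1::> qos statistics performance show -iterations 5

To target a single volume:

    cluster1::> qos statistics volume performance show -vserver svm1 -volume vol_sql01

And to see where that volume’s latency is actually being spent (network, data, disk, and so on):

    cluster1::> qos statistics volume latency show -vserver svm1 -volume vol_sql01 -iterations 10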

An Unexpected Finding at NetApp Insight 2014

By Brad Voogel | Cloud Computing, NetApp, Strategy

Another year of NetApp Insight has come and gone, and I would like to share some very exciting news regarding the many useful updates to Data ONTAP 8.3. However, I will have to wait a few more weeks until NetApp lifts the press embargo. Instead, I want to take some time and share with you something I found extremely interesting from NetApp partner Fujitsu.

NetApp has partnerships with many players in tech, but this story, presented at one of the general sessions by Fujitsu’s Bob Pryor, Head of Americas and Corporate Vice President, about how Japanese farmers are using SaaS in the cloud, really had a profound effect on me. Not only because cloud computing and farming seem like such an unlikely pairing, but because, having grown up on a dairy farm, I’m always interested in how farmers are using technology to drive efficiency in their daily lives. For example, robots and GPS devices did not exist in the agricultural space 30 years ago, right when I needed them most to help me with my chores.

How Can Cloud Computing and Farming Work Together?

Cloud and farming? On paper, the two could hardly seem more unrelated. After all, farmers use sweat and brawn, machinery, and long hours to accomplish their tasks, but they’re also running a business and need to collect data on crops, commodity prices, livestock, and weather. Millions of potential data points for analysis could open up new paths to higher yields, healthier livestock, and ultimately, greater profits. Kind of sounds like a “Big Data” opportunity to me. I encourage you to take a look at Akisai, Fujitsu’s SaaS platform aiding Japanese farmers today.

“Fujitsu’s new service is the first of its kind worldwide that has been designed to provide comprehensive support to all aspects of agricultural management, such as for administration, production, and sales in open field cultivation of rice and vegetables, horticulture, and in animal husbandry. With the on-site utilization of ICT as a starting point, the service aims to connect distributors, agricultural regions, and consumers through an enhanced value chain.” – Fujitsu

As more of us move into large cities and as third-world countries continue to evolve from agrarian to manufacturing- and services-based economies, it’s more important now than ever to understand where our food comes from, how it’s produced, and how it affects us as consumers. If technology can play a more dominant role in feeding the world with less land, fewer resources, and less time, and can provide better economies of scale to the farmer, then I believe Fujitsu is onto something here.

Please visit Fujitsu’s website for further information.


Taking an Active Role in Your Personal Data Management and How It Affects You

By Brad Voogel | Personal Data Management, Security

In light of the recent hacking scandals at large national retailers and the exploit attacks on celebrity iCloud accounts, taking an active role in personal data security is more relevant than ever. Due diligence over the integrity of our personal data is ultimately our responsibility as end users.

Especially so, as retailers continue to lobby Washington against upgrading the magnetic stripe and the infrastructure that supports the fifty-year-old technology. If you have ever traveled abroad, you may have noticed that credit cards there have a small chip embedded in the top corner. That chip provides a platform for encrypted data transmission and PIN authentication—two-factor authentication: swipe, then confirm your PIN upon purchase.

Why has this technology not been adopted in America as of yet?

(Lack Of) Adoption

Well, for the reason stated above. Each embedded-chip card costs around $25, and upgrading every point-of-sale device and the infrastructure to support this technology is going to cost retailers billions of dollars. So you can understand the resistance. And if people are not demanding action from Congress, the status quo will continue.

“It’s important to realize that there is no silver bullet solution to having your personal data compromised.”

Even with no change in sight for the near term, there are steps you can take to protect yourself. However, it’s important to realize that there is no silver bullet solution to having your personal data compromised. We live in a fallible time and technological environment where the bad guys seem to be always a step ahead.

Taking Matters Into Your Own Hands

The good thing is, if you have ever used a VPN and a token to log into your work systems, you are already familiar with two-factor authentication, and adopting these methods in your personal life should be relatively painless.

Yes, taking an extra 30 seconds to log into your bank account, Gmail, iCloud, Facebook, or using a PIN to enter your smartphone may seem annoying at first, but it’s one of the many zero-cost things you can do to adopt an active role in securing your personal data. Also, asking retailers and banks for additional verbal passwords when conducting business over the phone is a great way to prevent social engineering.

Practicing proactive data security will never totally eliminate the chance of being hacked or becoming a victim of identity theft, but it dramatically lowers your attack surface. Most of the apps hackers use are tuned to find data using lowest-common-denominator tactics. If you are using two-factor authentication, it becomes more effort than it’s worth for such hackers to dig deeper on an individual level when they are running millions of queries. These apps are all about quantity and speed—not quality.

“Practicing proactive data security will never totally eliminate the chance of being hacked or becoming a victim of identity theft, but it dramatically lowers your attack surface.”

I would not expect any movement from Congress or regulators to force retailers to adopt the embedded chip standard any time soon. When providing a safe retail experience is trumped by the prospect of billions of dollars in capital expenditures for infrastructure upgrades, retailers are going to slow-roll this situation as long as they can.

The embedded chip is a good technology that has been adopted globally except in the United States (much like the metric system). With that wide adoption base, the platform has a life cycle and a history, and there is really no reason it can’t evolve and be improved upon for years to come. But while there is apathy, stalling, and ignorance, there are always those who will look to use this moment in history as a crossroads for innovation.

A Software-Defined Future

Technology companies like Apple, PayPal, and Google are developing software-defined systems that use your smartphone, in combination with biometrics and a PIN, to act as a proxy between you and your bank, facilitating an environment where your data is not even shared with retailers. This adds a third element of authentication, effectively enabling three-factor authentication.

Software-based authentication methods have the potential to eclipse the embedded chip and harness the already very powerful hardware in your smartphone. With buy-in from the banks and credit card companies already in place, software-defined payment is moving forward with Apple Pay. It’s a win for the American consumer, it’s a win for Apple as it provides them with another revenue stream, and ultimately, it gets retailers off the hook from spending billions on uprooting their existing infrastructure.

It will be interesting to see how broader adoption of Apple Pay plays out, as Google has already offered these features for a few years with Google Wallet on the Android platform.

Photo credits via Flickr: shuttercat7

Insights From The Pony Express: Why The Five-Year Tech Refresh May End Up Costing You More

By | Strategy | No Comments

Prior to 1860, it could take several weeks to months for correspondence from the East Coast to reach the Pacific states. As settlement of the West exploded due to the promise of free land from the Homestead Act and the discovery of gold, a faster line of communication was needed. Especially with war looming, the ability to deliver mail faster became critical to every facet of society, enterprise, and government.

To undertake this monumental feat (remember, this was before the advent of the transcontinental railroad), William Russell, Alexander Majors, and William Waddell—who already ran successful freight and drayage businesses—joined forces to devise a network of stations roughly 10 miles apart that stretched from St. Joseph, MO to San Francisco, CA.

Riders, who had to be lightweight, travel light, and run their specially-selected smaller horses at a gallop between stations, would often ride around 100 miles per day. (The horses were burdened with only 165 pounds on their backs, including the rider.)

With this relay method of riding fast, light, and taking on fresh horses at every stop, a letter could make it across the country in 10 days, an accomplishment that was previously thought impossible.

“A letter could make it across the country in 10 days, an accomplishment that was previously thought impossible.”

So let’s summarize the technology and methodology of the day, and what made delivery of mail to the West Coast in 10 days possible:

Tech

  • Small riders, often children (orphans were preferred because of the occupational dangers that accompanied the job of the rider)
  • Small horses (the happy medium between speed and overhead; larger horses eat more and can’t run as fast over distance)
  • The Mochilla pouch (fit over the saddle and carried locked mail compartments, water, and a Bible)

Method

Stations 10 miles apart helped overcome:

  • Finite running capacity of the horses
  • Weather
  • Terrain
  • Danger from crossing into Indian lands that were hostile

Operating for a period of just 18 months, the Pony Express was one of the most ambitious endeavors in American history. By October 1861, it was over, displaced by the more cost-effective technological advancement of the telegraph.

Let’s take a look at the known costs of running the Pony Express from a CAPEX and OPEX standpoint. We don’t have historical records on all the details, so I will take some liberties with guesstimation.

CAPEX

  • 400 horses at $200 each
  • 184 stations at $600
  • Tack (saddles, horseshoes, Mochilla) at $20 per horse

Total: approximately $200K (400 x $200 = $80,000 for horses; 184 x $600 = $110,400 for stations; 400 x $20 = $8,000 for tack)
Value today: approximately $76 Million

OPEX

  • 80 riders at $100 per month
  • 400 staff at $20 per month
  • Feed at $5 per horse per month

Total: approximately $18K per month (80 x $100 = $8,000 for riders; 400 x $20 = $8,000 for staff; 400 x $5 = $2,000 for feed)
Value today: approximately $7 Million

[Photo: an old telegraph key]

The reason I use this example is to show that disruptive technologies throughout history have led to monumental changes in the way we conduct commerce. The feat of a letter traveling with Pony Express riders across the country in 10 days, once thought impossible, was eclipsed by the laying of telegraph cable and Morse code. The telegraph rendered the enterprise of the Pony Express not only irrelevant but cost-prohibitive, especially at $1 per ½ ounce.

It would have taken years to recoup the startup capital and turn a profit. We can also look to the advancement of the telegraph as seeding other life-changing technologies, such as the telephone and the transcontinental railroad (which followed the telegraph lines across known safe passages into the West).

“Do we really want to put ourselves in a position that stifles our ability to ride the ebb and flow of advancement … because we are locked into rigid maintenance contracts with original equipment manufacturers?”

I guess the point I’m trying to articulate is this: the technology that supports and drives business today can change even faster than it did in 1861. Do we really want to put ourselves in a position that stifles our ability to ride the ebb and flow of advancement, and blocks our ability to drive up efficiency and drive down costs, because we are locked into rigid maintenance contracts with original equipment manufacturers?

I say no.

Keep your business agile. Be prepared for the next big thing that poses relevant change in information technology. Keep your maintenance schedules to three years. Don’t let history repeat itself at the expense of you and your enterprise.

Photo credits via Flickr, in order of appearance: bombeador; digitaltrails.

InfoSight Review: How Nimble Storage Is Turning The OEM Support Model On Its Head

By Brad Voogel | Nimble, Review, Storage

All storage OEMs have some kind of “call home” feature. Upon a hardware or software failure, an alert is usually sent simultaneously to the OEM and the customer. A support ticket is logged, and either a part or an engineer is dispatched to fulfill the SLA.

Most OEMs also collect performance statistics at weekly intervals and provide either a portal or a reporting mechanism to view historical data, see trending, and so on. Customers can then correlate that data and use it to drive forecasting around the future needs of their IT organization.

“How about an easy-to-read, easy-to-interpret dashboard view vs. a raw data dump to a text file?”

What if this data were available in real time? How would that affect my organization? What if I didn’t have to rely on my internal resources to interpret that data? How about an easy-to-read, easy-to-interpret dashboard view vs. a raw data dump to a text file? How much time could I return to the business? Can this really boost the productivity of my staff?

The answer is an emphatic yes.

Let me explain why all of the above is so beneficial for today’s enterprise and why it’s such a departure from what I consider the “status quo” of traditional support models found elsewhere.

Let’s face it, IT operators are expected to do more. Time is the most valuable resource. The day of the one-trick pony is drawing to a close. The trend I see with my customers is that they are responsible for more than one platform and are also expected to complete their expanded duties in the same amount of time. This doesn’t leave a lot of time to develop deep expertise in one skill set, let alone two or three.

Nimble InfoSight: The Benefits

Nimble steps in here by providing:

  • Easy-to-read, easy-to-interpret graphical dashboards that are cloud-based, using a web front end. No Java!
  • Real-time performance monitoring and reporting. A daily summary is a huge value-add here, as most admins only log in to the controllers for care-and-feeding tasks (i.e., provisioning storage).
  • Predictive upgrade modeling based on real-time analytics of performance data.
  • Executive summaries, capacity planning, trending, and forecasting. Did I mention that this is a web front end and not bolt-on software added to the standard management interface?

The bottom line is that Nimble’s InfoSight is an all-encompassing, holistic reporting and forecasting engine that is a zero-cost add-on to your Nimble storage array.

Most other OEMs charge extra for software built on the same foundational idea of “reporting,” yet it does not equal what I’d call a Web 2.0-caliber solution. I would argue, from a business perspective, that InfoSight offers more overall value to the enterprise from the top down than increasing the speed of the xyz application does.

“From a business perspective, InfoSight offers more overall value to the enterprise from the top down than increasing the speed of the xyz application.”

Although discussing where the technology fits best or why it’s faster has its place, I find it can be a somewhat one-dimensional conversation. I believe that overall value should be viewed through a larger lens of how the entire solution benefits your organization as a whole.

Think big. Make your workplace better, faster, stronger! InfoSight keeps you working smarter, not harder.

Photo credit: Phil Hilfiker via Flickr

Adventures In cDOT Migrations: Part Two

By Brad Voogel | How To, NetApp

Before we start: for those just joining the adventures, here’s Part One.

Part Two: Insights From The Field

When it comes to 7-Mode to cDOT transitions, we are seeing host-based migration continue to be king for databases and virtual environments. However, for those customers who are using SnapMirror and for whom re-seeding their primary and secondary volume relationships is not an option due to WAN limitations, the 7MTT (7-Mode Transition Tool) is becoming the workhorse of our transition engagements.

It’s critical going into this process to understand the capabilities and limitations of the tool. Let’s take a look at some of the technical terms around the 7MTT.

  • A Project is a logical container that allows you to set up and manage the transition of a group of volumes.
  • A Subproject contains all of the configuration data around volume transitions, i.e., SVM mapping, volume mapping, and the SnapMirror schedule.
  • A Transition Peer Relationship is the authorization mechanism for managing the SnapMirror relationships between 7-Mode and cDOT systems.

One of the limitations of the 7MTT is that only twenty volumes can be managed inside a project container, so there is typically some planning and strategy around grouping volumes together, either by use case or by RPO/RTO. The look and feel of the transition is very SnapMirror-like: it follows a baseline, incremental, and cutover format. There is also a CLI, but using the GUI is the recommended approach.
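Once a project’s baseline transfers are running, it can be handy to sanity-check what the tool has set up from the cDOT clustershell. This is only a rough sketch: the 7MTT creates these objects for you, and the exact fields available may vary by release.

To list the transition peer relationships between the 7-Mode system and the cluster:

    cluster1::> vserver peer transition show

To check the state and lag of the transition (type TDP) SnapMirror relationships the tool is driving:

    cluster1::> snapmirror show -type TDP -fields state,status,lag-time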

As with any services engagement, due diligence leads to success. These 7-Mode to cDOT transitions require careful planning and collaboration, as they can take weeks to months to complete, depending on the size of the environment.

“These 7-Mode to cDOT transitions require careful planning and collaboration as they can take weeks to months”

The 7-Mode Transition Tool 1.2 Data and Configuration Transition Guide For Transitioning to Clustered Data ONTAP® can be found on the NetApp Support site with a NOW account login.

Important note: You should be aware of the versions of Data ONTAP operating in 7-Mode that are supported for transitioning to clustered Data ONTAP version 8.2.0 or 8.2.1. You can transition volumes from systems running Data ONTAP 7.3.3 and later. For the updated list of Data ONTAP versions supported for transition by the 7-Mode Transition Tool, see the Interoperability Matrix.

Photo credit: thompsonrivers via Flickr

Adventures in cDOT Migrations: Part One

By Brad Voogel | NetApp, Storage

On paper, a 7-Mode to clustered Data ONTAP (“cDOT”) migration can seem fairly straightforward. In this series, I will discuss some scenarios in terms of what can be very easy vs. what can be extremely difficult. (By “difficult” I’m mostly referring to the logistical and replication challenges that arise in large enterprise environments.)

The Easy First!

Tech refresh in one site:

Bob from XYZ corp is refreshing his 7-Mode system and has decided to take advantage of the seamless scalability, non-disruptive operations and the proven efficiencies of 7-Mode by moving to the cDOT platform. Hurray Bob! Your IT director is going to double your bonus this year because of the new uptime standard you’re going to deliver to the business.

Bob doesn’t use SnapMirror today because he only has one site; he does NDMP dumps to his tape library via Symantec’s Replication Director. Plus 10 points to Bob. Managing snapshot backups without a catalog can be tricky. Which Daily.0 do I pick? Yikes! Especially if he gets hit by a bus and the new admin has to restore the CEO’s latest PowerPoint file because Jenny in accounting opened up a strange email from a Nigerian prince asking for financial assistance. Bad move, Jenny! Viruses tank productivity.

Anyway …

Bob’s got a pair of shiny new FAS8040s in a switchless cluster, the pride of NetApp’s new mid-range fleet. He’s ready to begin the journey that is cDOT. Bob is running NFS in his VMware environment, running CIFS for his file shares, and hosting about 20 iSCSI LUNs for his SQL DBA. Bob also has 10G switching and servers from one of the big OEMs. So no converged network yet, but he’ll get there with next year’s budget, funded by all of the money he’s going to save the business with the lack of downtime this year! Thanks, cDOT.

Approach

So what’s the plan of attack? After the new system is up and running, at a high level it would look something like this:

1. Analyze the storage environment

a. Detail volume and LUN sizes (Excel spreadsheets work well for this)
b. Lay out a migration schedule
c. Consult the NetApp Interoperability Matrix to check Fibre Channel switch, HBA firmware, and host operating system compatibility
d. Build the corresponding volumes on the new cDOT system (see the sketch after this list)
e. Install the 7-Mode Transition Tool on a Windows 2008 host
f. Use the tool to move all file-based volumes
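As a rough sketch of step (d), pre-building a CIFS volume and an iSCSI volume/LUN on the new cluster might look something like this from the clustershell. The SVM, aggregate, volume, and LUN names and sizes below are hypothetical; size everything from the spreadsheet you built in step (a).

    cluster1::> volume create -vserver svm_nas -volume vol_cifs01 -aggregate aggr1_node01 -size 2TB -junction-path /vol_cifs01 -security-style ntfs

    cluster1::> volume create -vserver svm_san -volume vol_sql01 -aggregate aggr1_node02 -size 500GB -space-guarantee none

    cluster1::> lun create -vserver svm_san -path /vol/vol_sql01/lun_sql01 -size 400GB -ostype windows_2008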

That wasn’t so hard. On paper, this scenario may seem somewhat trivial, but I can assure you it really is this straightforward. Next time, we are going to crank up the difficulty level a bit. We will add in multiple sites and a Solaris environment (or insert any other esoteric block OS; HP-UX, anyone?), as well as the usual NAS-based subjects.

See you next time for Part Two.

Photo credit: thompsonrivers via Flickr
