BYOD (Bring Your Own Device) is a buzzword we have heard in IT and security circles for years. It speaks to questions that every business leader and IT executive must ask and answer: how do we secure and protect the growing number of mobile technologies (personal or company issued) employees want to use at work? How do we give a mobile, tech-centric workforce what it needs to succeed without putting our data and company at risk?
Last week, IDS hosted a dinner featuring a presentation about current cloud trends and how IDS is helping our customers create their cloud strategy. The topic generated a lot of very interesting conversation, and one of the biggest discussions was about cloud contracts. If you're going to trust a partner to host your data, it is critical to have a contract in place that protects the interests of both parties. Many large cloud providers have a "take it or leave it" approach to contracts, while others will work to customize to your needs. Regardless of the flexibility, it is imperative that you understand all aspects of your cloud contract and what it means to your business. Since this was a highly discussed item at our dinner, I decided to create a list of the top five contract considerations when evaluating cloud providers.
Many storage architects start their workday sipping coffee, reading email, and checking the status of various things in the environment. Much of the day goes to the repetitive and often boring tasks of provisioning, monitoring, and maintaining the arrays, switches, servers, and other pieces that make up the infrastructure, plus fighting any fires that have flared up recently. Work seems to alternate between incredibly boring and incredibly stressful at a moment's notice.
On rare occasions, an email or phone call from management contains the words that are dear to the heart of technologists everywhere: we need some new stuff; figure out what we need, and let's bring it in and test it out! Getting to test-drive the shiny new, ultra-fast, mega-big, leading-edge tech is often the reason IT folks got into their jobs in the first place.
Disaster Recovery Planning (DRP) has gotten much attention in the wake of natural and man-made disasters in recent years. But executives continue to doubt the ability of IT to restore business IT infrastructure after a serious disaster, and that is before accounting for the increasing number of security breaches worldwide. By many reports, the confidence level in IT recovery processes is less than 30%, calling into question the vast amounts of investment poured into recovery practices and recovery products. Clearly, backup vendors are busy; see the compiled list of backup products and services at the end of this article (errors and omissions regretted).
NetApp included some very powerful troubleshooting commands with the 8.3 update that I'd like to bring to your attention: the qos statistics command and its subcommands. Prior to 8.3, we used the dashboard command to view statistics at the cluster node level. The problem with dashboard is that it reports cluster-level statistics, so it can be difficult to isolate problems caused by a single object. The advantage of the qos statistics command is that we now have the ability to target specific objects in a very granular fashion.
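As a quick sketch of what that granularity looks like (the cluster, vserver, and volume names below are hypothetical; the subcommand names follow the clustered Data ONTAP 8.3 CLI):

```
cluster1::> qos statistics volume latency show -vserver vs1 -volume vol1 -iterations 5
cluster1::> qos statistics workload performance show -iterations 5
```

The first command breaks out latency for a single volume by cluster component; the second shows IOPS, throughput, and latency per workload, refreshing the display over five iterations.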
This is the second of a two-part series on Technology Roadmaps. Previously we explained “The Concepts behind a Technology Roadmap,” and here we explain how to develop one.
Technology roadmaps begin with a “handshake” between IT and the business. Knowing future business plans allows IT to determine the focus area(s). As businesses evolve and new technologies emerge, IT is challenged with constant change. Developing roadmaps helps IT to be prepared for the change and manage the associated risks.
How Do You Create a Technology Roadmap?
- Collect Data. Take the time to gather preliminary information about products, people and processes. Understand current implementations and directions.
- Hold Interviews. Identify key stakeholders and gain different perspectives. Meet individually or in groups, and be sure to cover topics like resources, costs, risk, compliance, growth, skills, support and management.
- Create technology baselines. Document the essentials and highlight the constraints. Stay sufficiently high-level, but acknowledge the details around recent changes.
- Analyze focus areas. Use a structured method for the analysis. One of the most widely used frameworks in business analysis is the SWOT (Strengths, Weaknesses, Opportunities, Threats) model. Since opportunities and threats relate to the industry at large, it is important to have subject matter experts (SMEs) provide input at this stage.
- Construct technology roadmaps. This is a collaborative exercise that incorporates emerging technologies over several years. The result does not always have to be a chart or a graph; it can be as simple as an enumeration of important technology adoptions in stages. For best results, use a backward sweep starting from the end objectives, and then a forward sweep showing how adopting a technology at each stage leads toward those objectives. Iterate between the two sweeps until the stages line up.
- Present recommendations. Knowing the roadmaps enables you to enumerate the IT projects that need attention in the coming months. There should also be clarity on the investment needed in terms of budget, time and resources.
- Host a workshop. Facilitate a workshop where key stakeholders meet again to review the results. This is a necessary touch point to discuss the project-based initiatives and make any final adjustments to the course.
How effective are Technology Roadmaps?
It all depends on the people and the effort put into the exercise. As indicated in the first part of this two-part series, technology roadmaps bring consensus and improved planning, budgeting, and coordination. It is critical that organizations treat this as a project in itself and provide the necessary funds and resources.
While an internal committee may be established to execute such a project, the benefits of technology roadmaps multiply when an external partner, like IDS, is involved. IDS brings a proven process, expert methodology, and key insight into the final deliverable. A partner like IDS can pre-empt much of the struggle by bringing SMEs to the table along with a fresh external perspective.
And remember: As businesses and technologies evolve, so will the roadmaps. So, review them often.
Learn more by reading the first part of this two-part series, “The Concepts Behind a Technology Roadmap.”
In an industry where technology development and advancement moves incredibly fast, even top CIOs may feel like it's impossible to keep up. While they may feel like their organization is falling behind, how do they determine whether it really is? What counts as "up to date" in our constantly evolving IT landscape, and is that even good enough? It's easy to let Data Centers get out of control, and unfortunately it's a risky business to do so. To help cut through the confusion, we've compiled the top five signs that your Data Center needs some investment. See one or more items on this list that sound eerily familiar? It may be time for a Data Center upgrade.
Five Signs Your Data Center Needs an Upgrade
- Your data center feels like a desert. If you're carrying around a personal fan while walking through your Data Center, you're definitely losing the Data Center cooling battle. Some recommend a Computational Fluid Dynamics (CFD) analysis to guide cooling system arrangements and hot- and cold-aisle containment. If your Data Center continuously suffers from heat stroke, it's probably not operating at its full capacity.
- You skipped spring-cleaning the last 10 years. While it's easy to let gear pile up, it's vital to complete some fundamental analysis of the hardware in your Data Center. Equipment that no longer adds value, or is simply not being used, should be discarded or donated to a non-profit organization such as a school. Clearing out old equipment can have countless benefits, including increased power capacity and reclaimed valuable space.
- Your server lifecycle was up three cycles ago. There are multiple reasons why a server lifecycle may come to a close. Because server lifecycles vary greatly, determining usable life can be difficult, particularly with legacy applications and operating systems in the mix. We follow a general rule of thumb: if a server can no longer meet your required needs after three years, replacement or an alternative solution will likely make more sense than simple upgrades. Replacing old servers, or incorporating technologies like virtualization, cloud-based services, and converged infrastructure, can help consolidate and optimize the Data Center. In turn, consolidating the Data Center can reduce cabling, management, heating, cooling and ongoing maintenance costs.
- Your cabling looks like a rat’s nest. Cabling can easily consume a Data Center if it’s not managed properly. If you’re not labeling, tying down and properly organizing your Data Center cabling, you need a serious revamp of this vital part of the Data Center. This type of disorganization can even lead to human error that can cause downtime to business-critical applications. If a wrongly placed elbow could take your retail business offline for multiple days, it’s time to rethink your cabling strategy. In addition to organization, converged technologies can greatly decrease the cabling in your Data Center.
- People are walking around your data center and you don’t know who they are. If you’re finding strangers meandering through your Data Center, it’s probably time to consider the physical security and current measures in place to protect your valuable applications and data. While you may not need a full-time guard dog, your organization may consider implementing key card access, security cameras and a sign in, sign out process with regular audits. Keep in mind, the biggest threat can often come from within your organization, so checks and balances are critical. Moving your infrastructure services to the Cloud or colocation facilities can allow you to leverage enterprise-class security and controls without massive capital investment upfront.
Even with the tips above, determining when and how to update your Data Center can be a difficult decision. It’s often a good idea to bring in a third party for a Data Center assessment consultation to make sure you’re receiving unbiased feedback. Taking the time to properly assess your current Data Center infrastructure and plan an integrative upgrade will help deter hasty decisions, and ultimately save critical capital.
Recently there's been a lot of buzz in the technology marketplace about Mobile Device Management, or MDM. It is certainly one of the hottest topics in Information Technology management today. MDM has become even more relevant with the heavy adoption of Bring Your Own Device (BYOD) models by many organizations. With all the information surrounding MDM to digest, the big question is: what is the right MDM product for your organization?
There are a lot of factors to consider when investigating any new technology solution and it’s easy to get bogged down in the process. Today I’d like to walk you through the right steps to take when determining the best Mobile Device Management solution for your company.
5 Steps to Choosing the Right MDM Solution for Your Organization
- Check in with the Gartner Magic Quadrant. I always start with the Gartner Magic Quadrant, as it has long been a trusted resource for qualifying technologies. Gartner's 2014 leaders in enterprise mobility management (EMM) are AirWatch, MobileIron, Citrix, Good Technology and IBM.
- Consider existing technologies. Take the time to consider existing technologies in your environment. For example, if you have Citrix XenDesktop or XenApp, then you probably use Netscaler devices to access these environments from the Internet. Depending on your existing technologies and considerations for hardware, licenses and optimization, you can determine which solutions may be a good fit.
- Determine requirements. Start asking questions related to the requirements you need to meet. One example of many questions you should ask might be if you only want to provide company resources on these devices while users are on premise. If that’s the case, you will need to determine whether the product has Geo-fencing capabilities.
- Analyze the options. Once you have narrowed your focus to two or three choices, it's time to do some additional homework. Ask trusted solution advisors you have worked with in the past, and look at industry postings about other organizations' experiences with the products.
- Complete a proof of concept. When you are ready to make a decision and have your ideal solution in mind, it’s time to do a proof of concept. For this type of project, a POC is highly recommended to ensure that the product will work in your environment, meet all of your requirements and perform to expectation. From there, you can expand into a larger pilot group, and finally roll into production.
It’s important to ensure that you make proper business decisions when it comes to your strategy for managing and controlling devices accessing company resources. In addition to the steps listed above, you can also help prevent mistakes by running business decisions by a focus group. This way you know how receptive the user community will be when you implement products that manage personal and company owned devices. Ultimately this strategy helps you communicate properly with the user group and gives you the opportunity to get them excited about new technologies.
Halloween is a great reminder for everyone to take some time and enjoy their favorite spooky activities, but what about the really scary stuff? In honor of Halloween we are breaking down the top four truly scary mistakes that technology departments are making today. If you see a frightening mistake that sounds familiar, take a look at our tips for getting back on track.
1. Allowing Miscommunication Between Business and IT
Business and IT units understand projects in inherently different ways. Business and IT professionals are trained differently, understand processes with contrasting views and ultimately communicate in a way that best suits their own team. Lack of communication and understanding between the two units can lead to problems like unrealistic deadlines, confusion on project scope and general lack of clarity. These types of issues can delay or halt projects altogether, creating an inefficient work environment for both types of business units.
How to Improve Communication Between Business and IT Units
- Get both sides involved in development.
- Be realistic about workload management.
- Strategize reworks and changes together.
2. Ignoring Infrastructure and Storage Resources
Too many technology departments are putting focus on building out internal solutions and infrastructure when there are readily available resources they can take advantage of that are much more cost and time efficient. By using existing commercial options, IT teams can spend less time on infrastructure and maintenance, and more time on other projects. Choosing to partner with a Data Center Technology Integrator and Cloud Service Provider, such as IDS, allows IT teams to access best-of-breed infrastructure and offload some of the day-to-day management so they can spend time focusing on more strategic areas.
How to Use Commercial Infrastructure and Storage Resources
- Realize department limits and prioritize time.
- Research cost effective commercial solutions.
- Contact IDS for a custom assessment.
3. Always Saying “Yes”
Technology departments need to have realistic expectations about bandwidth. Agreeing to take on every project and meet every deadline will undoubtedly create a lot of stress and less-than-impressive outcomes. Maryfran Johnson, who wrote "A CIO Survival Guide to Saying No" for CIO.com, suggests that it's getting even harder for IT departments to say no because of their increased involvement in business activities that directly impact revenue. Johnson recommends leaving emotions behind and discussing the matter at hand using fact-based reasoning as often as possible.
How to Strategically Say “No”
- Leave emotions off the table.
- Explain project scopes in a universal way.
- Use fact-based reasoning.
4. Forgetting About the Users
Technology teams should always have the end user in mind. No matter the project or end deliverable, the user needs to be successful at whatever it is they are doing. Oftentimes IT departments are so overloaded with completing projects that they don't have time to collect and implement user feedback during development. Taking the extra time to strategically analyze feedback from the users can drastically change the success of technology outcomes.
How to Keep Users on the Radar
- Schedule regular user feedback collections.
- Create action plans from reliable feedback.
- Test products and updates aggressively.
Have other mistakes you think technology departments should avoid? Comment below and tell us about them!
Last week, I compared Sneakernet vs. WAN. And I didn’t really compare the two with any WAN optimization products—just a conservative compression ratio of around 2x, which can be had with any run-of-the-mill storage replication technology or something as simple as WinZip.
But today, I want to show the benefits of putting a nice piece of technology in between the two locations over the WAN to see how much better our data transfer becomes.
When WAN Opt Is Useful
When choosing between a person’s time or using technology, I like the tech route. But even if it’s faster, how much faster does it need to be to offset the expense, hassle, and opportunity cost of installing a WAN Opt product? The only true way to know is to buy the product, install it, and run your real-world tests; however, I’m one for asking around.
I reached out to my friends over at Silver Peak, and they pointed me to this handy online calculator.
It turns out, WAN optimization products aren't useful in every situation. If you have ample bandwidth with very low latency, the investment might not be worth it. But even marginal latency across any distance, or data that is repetitive (or compresses and deduplicates well), can benefit from WAN optimization. And if you have business RPOs and RTOs to meet, you may very well require WAN optimization in between.
I took the example from last week: the 100 Mbit connection, figuring in 7ms of latency to simulate the equivalent of 50% utilization on the line with 2x compression. If you recall, the file transfer of 10TB of data moved in 10 days can translate to 370TB of data in the same time frame with a Silver Peak appliance at both ends. Much of that efficiency is due to the way WAN optimization works, which is to say that data doesn't just get compressed and sent using multiple streams. The best WAN optimization products also don't send duplicate and redundant data. So a transfer that would normally take a week or a day could be completed in as little as 4.5 hours or 40 minutes, respectively.
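The back-of-the-envelope math behind those figures can be sketched in a few lines of Python (the link speed, utilization, and data-reduction ratios are the assumptions from the example above, not measurements):

```python
def transfer_days(data_tb, link_mbit, utilization, reduction_ratio):
    """Days to move data_tb terabytes over a link, given the fraction of
    raw bandwidth actually usable and the effective data-reduction ratio
    (compression plus deduplication)."""
    effective_bytes_per_sec = link_mbit * 1e6 / 8 * utilization * reduction_ratio
    return data_tb * 1e12 / effective_bytes_per_sec / 86400

# 100 Mbit line at 50% utilization with 2x compression:
# roughly 10 days to move 10 TB, as in last week's example
baseline = transfer_days(10, 100, 0.5, 2)

# Moving 370 TB in the same window implies about a 37x improvement
# in effective reduction on the optimized link
optimized = transfer_days(370, 100, 0.5, 2 * 37)
```

Plugging in your own circuit size and a conservative reduction ratio is a quick sanity check before running a vendor's calculator.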
The effort to install, in reality, is not that significant. Silver Peak appliances come in physical and virtual form, with the virtual machines being a lot quicker to spin up and a little cheaper to acquire. Just make sure your routers are on a relatively recent IOS release that supports WCCP, and you can quickly deploy the virtual appliance in both locations.
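As a rough illustration of the router side (the service-group numbers and interface names here are assumptions for the sketch, not vendor-documented defaults; check your appliance's deployment guide), WCCP redirection on a Cisco IOS router looks something like this:

```
! Define two WCCP service groups, one per traffic direction
ip wccp 61
ip wccp 62

! Redirect traffic arriving from the LAN toward the appliance
interface GigabitEthernet0/0
 ip wccp 61 redirect in

! Redirect traffic arriving from the WAN toward the appliance
interface GigabitEthernet0/1
 ip wccp 62 redirect in
```

With redirection in place, the appliances intercept the traffic transparently; no endpoint reconfiguration is needed.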
Aside from moving data quickly, there are other benefits, such as improved voice calls (UDP packets that arrive out of order can be reassembled in the correct order), faster response times on applications over the wire, and gains for pretty much any type of TCP/IP traffic. If it were me, I would simply compare the cost of expanding the performance of the circuit versus adding a WAN optimization product in between. For most locations in the United States, circuits are expensive and bandwidth is limited, so you're likely better off with a Silver Peak appliance at both ends to save both time and cost.
Of course, don’t just take my word for it. Run a POC on any network that you’re having problems with, and you’ll find out soon enough if WAN Optimization is the way to go.
Photo credit via Flickr: Tom Raftery