The hardware and software to make an all-flash data center a reality are on the market and ready to go. IT leaders and data center managers know that flash makes a world of difference in application performance, but the one thing stopping enterprises from adopting this approach is a business case that demonstrates the overall price and total cost of ownership of an all-flash environment. What they don’t know is whether their enterprises need to, and should, make the capital expenditures so that every single application (big or small, mission critical or not) achieves superior levels of performance. Read More
1 Path & 7 Benefits to Getting Hyper Cloud Connected
Ever feel like today’s migration to cloud services is an exclusive party? Does it seem as though big enterprises and global powerhouses with vast resources are the only ones invited? Well, get off the proverbial couch and get ready to party like it’s 2016, because when it comes to the cloud (and all the efficiency and opportunity it offers businesses), everyone is invited. Unless you’ve decided you’d rather be disrupted than be a disruptor, staying home is not an option. Read More
Hello Cloud, Goodbye Constant Configuration
I have to admit that when I log into a Linux box and realize that I have some technical chops left, I get a deep feeling of satisfaction. I am also in the habit of spinning up a Windows Server in order to test network routes/ACLs in the cloud since I like using the Windows version of tools like Wireshark. Despite my love for being logged into a server, I do see the writing on the wall. Logging into a server to do installs or make configuration changes is fast becoming a thing of the past. Given the number of mistakes we humans make, it’s probably about time. Read More
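The shift the author describes, verifying things remotely instead of logging into a box, can be sketched in a few lines. Below is a minimal Python example that tests whether network routes/ACLs permit a TCP connection, without spinning up a Windows Server and Wireshark. The hosts and ports are hypothetical placeholders, not from the original article.

```python
import socket

def check_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    A successful connect implies the route exists and no ACL blocks the port;
    a timeout or refusal suggests a routing or firewall problem to investigate.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical endpoints; substitute addresses from your own environment.
    endpoints = [("10.0.1.10", 443), ("10.0.2.20", 1433)]
    for host, port in endpoints:
        status = "open" if check_reachable(host, port, timeout=1.0) else "blocked/unreachable"
        print(f"{host}:{port} -> {status}")
```

Run from a jump host or CI pipeline, a script like this turns an interactive troubleshooting session into a repeatable check.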
As we discussed in my previous article, “Data Retention: In IT We Trust,” data retention is a necessary component of a comprehensive information lifecycle management (ILM) strategy. In this article we will turn our attention to data classification.
Data classification is an essential step toward ILM. It helps organizations know what data they have, where the data is located, and how (if at all) they can access it. This becomes increasingly important as the uncontrolled growth of unstructured data pushes infrastructure limits to new heights with every technological advance. Read More
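As a rough illustration of the "know what data you have" step above, here is a minimal Python sketch that buckets files into hot/warm/cold tiers by last-access time. The thresholds are assumptions chosen for the example, not recommendations; a real ILM policy comes from business and compliance requirements, and would typically also weigh file type, owner, and sensitivity.

```python
import time
from pathlib import Path

# Hypothetical tiering policy for illustration only.
HOT_DAYS = 30      # accessed within 30 days  -> keep on fast storage
WARM_DAYS = 365    # accessed within a year   -> candidate for a cheaper tier

def classify(path: Path, now: float = None) -> str:
    """Bucket a single file into hot/warm/cold by last-access time."""
    now = now or time.time()
    age_days = (now - path.stat().st_atime) / 86400
    if age_days <= HOT_DAYS:
        return "hot"
    if age_days <= WARM_DAYS:
        return "warm"
    return "cold"

def inventory(root: str) -> dict:
    """Count files per tier under root: the 'know what data you have' step."""
    counts = {"hot": 0, "warm": 0, "cold": 0}
    for p in Path(root).rglob("*"):
        if p.is_file():
            counts[classify(p)] += 1
    return counts
```

Even a crude inventory like this gives the per-tier counts needed to start sizing where data should live.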
A few weeks ago I wrote “Evaluating All-Flash Storage Part 1: Mapping out the process,” discussing the excitement around getting to test a new product, specifically an All-Flash Array.
Getting to open boxes, break out new equipment, install it, and put it through its paces is fun. Unfortunately, the business side of things rears its ugly and less entertaining head, and the temporarily excited IT staff has to put their noses back to the grindstone. Testing is being requested for a reason, and the results of the tests will play an important role in determining not only the product that will be purchased, but quite possibly in future directions for the company and the testers involved. Read More
Many storage architects start off their workday sipping coffee, reading emails, and checking on the status of various things in the environment. Most of the day goes to the repetitive, often boring tasks of provisioning, monitoring, and maintaining the arrays, switches, servers, and other pieces that make up the infrastructure, plus fighting any fires that may have flared up recently. Work seems to alternate between incredibly boring and incredibly stressful at a moment’s notice.
On rare occasions an email or phone call from management will contain the words that are dear to the heart of technologists everywhere: We need some new stuff, figure out what we need and let’s bring it in and test it out! Getting to test drive the shiny new, ultra-fast, mega-big, leading edge tech is often the reason IT folks got into their jobs in the first place. Read More
Like most organizations, you are probably hosting your unstructured data on traditional NAS platforms. The days of storing this data on these legacy systems are coming to an end. Let’s look at some of the drawbacks that plague traditional NAS:
- Expensive to scale
- Proprietary data protection – third-party backup software is needed to catalog and index
- Inability to federate your data between disparate storage platforms onsite or in the cloud
- High file counts, which can cripple performance, lengthen backup windows, and require additional flash technology for metadata management
- File count limitations
- High “per-TB” cost
- Some platforms are complex to administer
- High maintenance costs after Year 3
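The last two items in the list, high per-TB cost and maintenance that jumps after year 3, can be made concrete with a rough cumulative-cost model. The figures below are hypothetical placeholders for illustration; plug in quotes from your own vendors.

```python
def cumulative_cost(capacity_tb: float, price_per_tb: float,
                    annual_maint_rate: float, post_year3_rate: float,
                    years: int) -> list:
    """Cumulative TCO per year: upfront capacity cost plus yearly maintenance,
    with the maintenance rate stepping up after year 3 (a common renewal
    pattern on legacy NAS support contracts)."""
    upfront = capacity_tb * price_per_tb
    total = upfront
    yearly_totals = []
    for year in range(1, years + 1):
        rate = annual_maint_rate if year <= 3 else post_year3_rate
        total += upfront * rate
        yearly_totals.append(round(total, 2))
    return yearly_totals

if __name__ == "__main__":
    # Hypothetical figures: 100 TB at $1,000/TB, 10% maintenance
    # for years 1-3, stepping up to 25% from year 4 on.
    print(cumulative_cost(capacity_tb=100, price_per_tb=1000,
                          annual_maint_rate=0.10, post_year3_rate=0.25,
                          years=5))
```

Charting the output for a legacy array against a scale-out alternative is often the quickest way to show where the year-3 maintenance cliff lands.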
As many companies make the investment in Citrix XenApp and XenDesktop, they want a fast, reliable, and secure solution. In this article, we will focus on WAN optimization and performance using Citrix CloudBridge technologies.
Citrix CloudBridge is a unified platform used to accelerate applications across public and private networks, improving performance and the user experience. CloudBridge offers ICA protocol acceleration, QoS, optimization, and security for XenApp and XenDesktop. This is an optimal solution for remote or branch offices that have WAN performance issues. CloudBridge also offers extensive monitoring and reporting features, which help IT staff performance-tune any Citrix environment.
When considering a transition from current 7-Mode systems to Clustered Data ONTAP (cDOT), it’s important to understand the limitations, timing and complexity. At IDS, we help our customers navigate and understand how this process impacts their production environment. We understand every customer’s architecture is different, but we have compiled some questions that continue to trend in our conversations. Read More
We recently came across one of Gartner’s newest reports, “Major Myths About Big Data’s Impact on Information Infrastructure,” which explores nine big data myths. An abundance of marketplace hype and news about big data has caused a series of misconceptions surrounding the hot topic. To help straighten out the confusion, Gartner completed valuable research around nine major areas. We read through the lengthy report and compiled the biggest takeaways for you.
If you’d prefer to wade through the report on your own, check out the full Gartner research report instead.
Nine Big Data Takeaways for 2015
- Not everyone has already adopted big data. Many organizations exist in a state of fear that they will be the last to develop and implement big data strategies. Thanks to Gartner’s research, we know that in reality only 13% of organizations have actually fully adopted big data solutions. On the flip side, the research also reveals that close to 25% of organizations have no plans to invest in big data at all.
- Quantity doesn’t trump quality. Organizations are becoming so obsessed with collecting big data, they’ve forgotten about the importance of quality. Gartner explains that many organizations think that larger data sets mean flaws have less of an impact. In actuality, the number of flaws grows as the data does, meaning the overall impact of errors is unchanged. IT organizations need to put just as much effort, if not more, into big data quality strategy.
- Big data won’t eliminate data integration. According to Gartner, using new technologies to replace some data integration steps won’t eliminate the need for integration; it will just change the design. Gartner recommends assessing the existing architecture to pinpoint areas for improvement as top priorities for new engineering approaches.
- Data warehouses will still be relevant. While a lot of hype surrounding big data seems to suggest there’s no longer a need for data warehouses, Gartner reports that’s simply not true. According to Gartner, organizations don’t necessarily need to utilize data warehouses during the experimentation phase, but they should still implement them afterwards. Gartner points out that curated data leads to the best scoring and risk models, keeping data warehouses relevant.
- Data lakes will not replace data warehouses. Critics are also arguing that data lakes will replace data warehouses. Gartner responds by suggesting that data lakes aren’t quite as easy as they seem. They recommend capitalizing on already-successful data warehouses instead of grappling with the lack of analytical and data-manipulation skills necessary for data lakes.
- Hadoop will not replace data warehouses. With a third theory, critics suggest that Hadoop will replace data warehouses, but Gartner dispels this one too. Gartner reports that fewer than 5% of organizations are actually planning to replace their data warehouses with Hadoop, and that number is decreasing.
- Big data has not fully matured. As specified earlier, only 13% of organizations have actually deployed big data solutions. While big data is growing, it’s not a mature market, and there is still an incredible amount that is unknown, making the risks very real.
- Big data is not the end-all, be-all. While big data seems big, it’s hardly the first major shift IT has had to absorb. Gartner’s research shows that processing capacity doubles every 22 to 28 months, memory capacity doubles each year, and storage efficiency improves every five years, among other advancements. Instead of reacting in a panic, Gartner suggests that organizations should calm down and develop a logical plan that works for them.
- You need to keep data governance in mind. Because big data is just like any other data, only larger, organizations still need to employ data governance initiatives. Gartner mentions that as organizations implement big data, they should re-evaluate their current data governance structure and make sure it is flexible enough to meet the new requirements.