
A Clear(er) Definition of Cloud Computing

By IDS Engineer | Cloud Computing

What is the Cloud? I get asked this all the time. It comes up in many client meetings. My relatives ask me when we get together. My friends ask me. Heck, even my wife asked me at one point. It is probably the most common question I get asked in my life right now. It’s a little disheartening, because answering it is my job.

I am the Director of Cloud Services, and questions like this sometimes come across as “What exactly do you do?” But the confusion is understandable. The media have taken the term Cloud and made it their latest craze, threatening to bury it in a sea of hype. To make matters worse, there is no strict definition. So, inevitably, my answer varies depending on who is asking. Here is my attempt to define what the “Cloud” is.

Well, let’s start with our favorite place, Wikipedia.

<blockquote>
Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user’s data, software and computation.
</blockquote>

Okay! Now we are getting somewhere!

But there’s a problem. If we read this, isn’t an Exchange server at your work Cloud Computing? Well, by this definition, yes. It is software that you, as a user, consume as a service delivered over a network, and sometimes the Internet. So wait … everything is the Cloud? Well, yes. Sort of.

What about when I buy a book from Amazon, is that the Cloud? Well, let’s see: you’re not using a computing resource, so it’s not Cloud Computing—the difference here being that second word. Remember, we are looking for the definition of the Cloud and if you remove the word Computing from the definition of Cloud Computing, you get a pretty accurate definition of the Cloud.

Cloud is the use of resources that are delivered as a service over a network (typically the Internet).

So, book buying on Amazon is using the Cloud. It’s a Cloud-based reseller. Gmail is a Cloud-based email provider. Netflix is a Cloud-based media provider. Progressive is a Cloud-based insurance provider, and your bank has Cloud banking services. Even this blog is a Cloud information source. It’s all Cloud. By strict definition, if you’re using a service and it isn’t running on your PC, like the Word application I am typing in, you’re using the Cloud.

It might be a private Cloud, like your work email, or a Public Cloud, like Gmail, but it is all Cloud. There are even hybrid Clouds where some features are privately owned, and others run on public resources.

Ironically, the next realization is that the Cloud is not a new idea, just a new term. The idea of internet and network-based resources has been around since … the internet and network-based resources. That is important to remember when thinking about leveraging the Cloud for your business. It is not a new idea. It is, in fact, over 25 years old as Public Cloud and 40 or more as Private Cloud. It is almost as old as the PC.

So, apparently the Cloud is not so mysterious after all. It’s an old concept (in computer years, anyway). It’s a common concept, and it is one we already readily embrace. Now if only I could get my mother-in-law to read this.

Photo credits: niamor and thekellyscope


Your Go-To Guide For IT Optimization & Cloud Readiness, Part II

By IDS Engineer | Cloud Computing, How To, Networking, Storage, Virtualization

[Note: This post is the second in a series about the maturity curve of IT as it moves toward cloud readiness. Read the first post here about standardizing and virtualizing.]

I’ve met with many clients over the last several months who have reaped the rewards of standardizing and virtualizing their data center infrastructure. Footprints have shrunk from rows to racks. Power and cooling costs have been significantly reduced, while capacity, uptime, and availability have increased.

Organizations that made these improvements made concerted efforts to standardize, as this is the first step toward IT optimization. It’s far easier to provision VMs and manage storage and networking from a single platform, and the hypervisor is an awesome tool for doing more with less hardware.

So now that you are standardized and highly virtualized, what’s next?

My thought on the topic is that after you’ve virtualized your Tier 1 applications like e-mail, ERP, and databases, the next step is to work toward building out a converged infrastructure. Much like cloud, convergence is a hyped technology term that means something different to every person who talks about it.

So to me, a converged infrastructure is defined as a technology system where compute, storage, and network resources are provisioned and managed as a single entity.

[Figure: IT optimization pyramid]

Sounds obvious and easy, right?! Well, there are real benefits that can be gained; yet, there are also some issues to be aware of. The benefits I see companies achieving include:

→ Reducing time to market to deploy new applications

  • Improves business unit satisfaction with IT, with the department now proactively serving the business’s leaders, instead of reacting to their needs
  • IT is seen as improving organizational profitability

→ Increased agility to handle mergers, acquisitions, and divestitures

  • Adding capacity for growth can be done in a scalable, modular fashion within the framework of a converged infrastructure
  • When workloads are no longer required (as in a divestiture), the previously required capacity is easily repurposed into a general pool that can be re-provisioned for a new workload

→ Better ability to perform ongoing capacity planning

 

  • With trending and analytics to understand resource consumption, it’s possible to get ahead of capacity shortfalls by understanding several months in advance when they will occur (see the sketch after this list)
  • Modular upgrades (no forklift required) afford the ability to add capacity on demand, with little to no downtime
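
To make the trending-and-analytics point concrete, here is a minimal sketch of the arithmetic a capacity-planning tool performs: fit a line to recent consumption and project when the pool fills up. All of the consumption figures and the pool size are invented for illustration, and the linear model is a simplifying assumption.

```python
# Illustrative capacity-forecast sketch (example numbers only).
usage_tb = [42.0, 44.5, 46.8, 49.9, 52.5, 55.1]  # last six months of consumption, oldest first
capacity_tb = 80.0                               # total usable capacity of the pool

months = list(range(len(usage_tb)))

# Ordinary least-squares fit: usage ~ slope * month + intercept
n = len(months)
mean_x = sum(months) / n
mean_y = sum(usage_tb) / n
slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, usage_tb))
slope_den = sum((x - mean_x) ** 2 for x in months)
slope = slope_num / slope_den

# Months until the trend line crosses the capacity ceiling
months_until_full = (capacity_tb - usage_tb[-1]) / slope

print(f"Growth rate: {slope:.1f} TB/month")
print(f"Estimated months until the pool is full: {months_until_full:.1f}")
```

Real analytics engines layer seasonality and per-tier breakdowns on top of this, but the underlying projection works the same way.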

Those are strong advantages when considering convergence as the next step beyond standardizing and virtualizing. However, there are definite issues that can quickly derail a convergence project. Watch out for the following:

→ New thinking is required about traditional roles of server, storage and network systems admins

  • If you’re managing your infrastructure as a holistic system, it’s overly redundant to have admins with a singular focus on a particular infrastructure silo
  • This typically means cross-training sysadmins to understand additional technologies beyond their current scope

→ Managing compute, storage, and network together adds new complexities

  • Firmware and patches/updates must be tested for interoperability across the stack
  • Investment is required either in a true converged infrastructure platform (like Vblock or Exadata) or in a tool that provides software-defined data center functionality (such as vCloud Director)

In part three of IT Optimization and Cloud Readiness, we will examine the OEM and software players in the infrastructure space and explore the benefits and shortcomings of converged infrastructure products, reference architectures, and build-your-own type solutions.

Photo credit: loxea on Flickr

Faster and Easier: Cloud-based Disaster Recovery Using Zerto

By IDS Engineer | Cloud Computing, Disaster Recovery, How To, Replication, Virtualization

Is your Disaster Recovery/Business Continuity plan ready for the cloud? Remember the days when implementing DR/BC meant having identical storage infrastructure at the remote site? The capital costs were outrageous! Plus, the products could be complex and time-consuming to set up.

Virtualization has changed the way we view DR/BC. Today, it’s faster and easier than ever to set up. Zerto allows us to implement replication at the hypervisor layer. It is purpose-built for virtual environments. The best part: it’s a software-only solution that is array agnostic and enterprise class. What does that mean? Gone are the days of having an identical storage infrastructure at the DR site. Instead, you replicate to your favorite storage—it doesn’t matter what you have. That allows you to reduce hardware costs by leveraging existing or lower-cost storage at the replication site.

[Figure: Zerto replication diagram]

How does it work? You install the Zerto Virtual Manager on a Windows server at the primary and remote sites. Once installed, the rest of the configuration is completed through the Zerto tab in VMware vCenter. Simply select the Virtual Machines you want to protect and that’s about it. It supports fully automated failover and failback and the ability to test failover, while still protecting the production environment. Customers are able to achieve RTOs of minutes and RPOs of seconds through continuous replication and journal-based, point-in-time recovery.
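
As a rough illustration of how journal-based, point-in-time recovery relates to capacity, here is a back-of-envelope sizing sketch. This is not Zerto’s official sizing method; the protected capacity, change rate, and history window are all assumptions you would replace with real figures.

```python
# Back-of-envelope journal sizing sketch (illustrative only).
protected_tb = 5.0          # size of the protected VMs, in TB (assumed)
daily_change_rate = 0.05    # assume ~5% of the data changes per day
journal_history_hours = 24  # how far back point-in-time recovery should reach

# The journal must hold roughly the data written during the retention window.
changed_tb_per_day = protected_tb * daily_change_rate
journal_tb = changed_tb_per_day * (journal_history_hours / 24)

print(f"Estimated journal space: {journal_tb:.2f} TB "
      f"for {journal_history_hours}h of point-in-time history")
```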

Not only does Zerto protect your data, it also provides complete application protection and recovery through virtual protection groups.

Application protection:

  • Fully supports VMware VMotion, Storage VMotion, DRS, and HA
  • Journal-based point-in-time protection
  • Group policy and configuration
  • VSS Support

Don’t have a replication site? No problem. You can easily replicate your VMs to a cloud provider and spin them up in the event of a disaster.

Photo credit: josephacote on Flickr


Your Go-To Guide For IT Optimization & Cloud Readiness, Part I

By IDS Engineer | Cloud Computing, How To, Networking, Virtualization

As a Senior IT Engineer, I spend a lot of time in the field talking with current and potential clients. Over the last two years I have seen a trend in the questions company decision makers ask, and it revolves around developing and executing the right cloud strategy for their organization.

Across all the companies I’ve worked with, there are three major areas that C-level folks routinely inquire about: reducing cost, improving operations, and reducing risk. Over the years I’ve learned that an accurate assessment of the organization is imperative, as it is key to understanding the current state of the company’s IT infrastructure, people, and processes. From those discoveries, I’ve refined the following framework to help decision makers effectively become cloud ready.

Essentially, IT infrastructure optimization and cloud readiness adhere to the same maturity curve, moving upstream from standardized to virtualized/consolidated and then converged. From there, the remaining journey is about automation and orchestration. Where an organization currently resides on that curve dictates my recommendations for tactical next steps toward more strategic goals.

Standardization is the first topic that needs to be explored, as it is the base of all business operations and direction. The main driver for standardizing is reducing the number of server and storage platforms in the data center.

The more operating systems and hardware management consoles your administrators need to know, the less efficient they become. There’s little use for Windows Server 2003 expertise in 2013, and it is important to find a way to port the application to your current standard. The fewer standards your organization has to maintain, the fewer variables exist when troubleshooting issues. Ultimately, fewer standards allow IT to focus on initiatives essential to the business. Implementing asset life-cycle policies can limit costly maintenance on out-of-warranty equipment and ensure your organization is always taking advantage of advances in technology.

After implementing a higher degree of standardization, organizations are better equipped to take the next step by moving to a highly virtualized state and by greatly reducing the amount of physical infrastructure that’s required to serve the business.  By now most everyone has at least leveraged virtualization to some degree.  The ability to consolidate multiple physical servers onto a single physical host dramatically reduces IT cost as an organization can provide all required compute resources on far fewer physical servers.

I know this because I’ve worked with several organizations that have experienced consolidation ratios of 20:1 or greater. One client I’ve worked with has dramatically reduced their data center footprint, migrating 1,200 physical servers onto 55 virtual hosts. While the virtual hosts tend to be much more robust than the typical physical application server, the cost avoidance is undeniable. The power savings from decommissioning 1,145 servers at their primary data center came to over $1M in the first year alone.

It is also important to factor in cooling and a three-year refresh cycle that would have required 1,100+ servers to be purchased; the savings add up quickly. In addition to the hard-dollar cost savings, virtualization produces additional operational benefits. Business continuity and disaster recovery exposure can be mitigated by using the high availability and off-site replication functionality embedded in today’s hypervisors. Agility to the business can increase as well, as the time required to provision a virtual server on an existing host is typically weeks to months faster than what’s required to purchase, receive, rack, power, and configure a physical server.
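
For readers who want to sanity-check figures like the one above, here is the kind of back-of-envelope arithmetic behind a power-savings estimate. The server count comes from the example in this post; the wattage, PUE, and electricity rate are assumptions you would replace with your own measurements.

```python
# Rough power-savings arithmetic for a consolidation project (illustrative).
servers_removed = 1145
avg_watts_per_server = 500   # assumed average draw per decommissioned server
pue = 1.8                    # assumed power usage effectiveness (adds cooling overhead)
cost_per_kwh = 0.10          # assumed electricity rate in $/kWh
hours_per_year = 24 * 365

it_load_kw = servers_removed * avg_watts_per_server / 1000.0
facility_load_kw = it_load_kw * pue        # IT load plus cooling/power distribution
annual_kwh = facility_load_kw * hours_per_year
annual_savings = annual_kwh * cost_per_kwh

print(f"Facility load removed: {facility_load_kw:.0f} kW")
print(f"First-year power and cooling savings: ${annual_savings:,.0f}")
```

With these assumed inputs the estimate lands in the same neighborhood as the figure cited above; your own rates and server profiles will move it up or down.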

Please look for Part II of “Your Go-To Guide For IT Optimization & Cloud Readiness,” in which Mr. Rosenblum breaks down Convergence and Automation.

Photo credit: reway2007


Avamar And Data Domain: The Two Best Deduplication Software Appliances

By IDS Engineer | Avamar, Deduplication, Networking, Storage

For years, backup-to-disk technologies have evolved toward ingesting large amounts of data very quickly, especially when compared to the newest tape options. This evolution has pushed backup applications to leverage disk targets for equally fast restores, even down to the file level.

Essentially, this means that today clients can integrate disk-based backup solutions to:
  • Mitigate the risk of traditional tape failures
  • Reduce the amount of time it takes to perform large backup jobs
  • Reduce the capacity required at the backup target by roughly 10-20X compared to tape (see the sketch after this list)
  • Reduce the amount of data traversing the network during a backup job (with Avamar or similar “source-based” technologies)
  • Lower the total cost of ownership in comparison to tape
  • Automate the “off-site” requirement for tape by replicating one disk system to the next over long distances
  • Lower RTO and RPO based on custom policies
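
The 10-20X capacity claim is easier to see with a quick worked example. The numbers below (backup size, retention, change rate, compression) are assumptions chosen only to illustrate the math, not measurements from any particular environment.

```python
# Quick worked example behind the 10-20X capacity claim (assumed figures).
full_backup_tb = 20.0        # size of one full backup
weekly_fulls_retained = 12   # keep 12 weekly fulls

# Tape (or plain disk) stores every retained full in its entirety.
raw_capacity_tb = full_backup_tb * weekly_fulls_retained

# A deduplicating target stores one baseline plus only the unique new blocks
# from each later full, then compresses what it keeps.
weekly_unique_change = 0.05  # assume ~5% unique new data per weekly full
local_compression = 2.0      # assume ~2:1 compression of stored data
stored_tb = full_backup_tb * (1 + weekly_unique_change * (weekly_fulls_retained - 1))
dedupe_capacity_tb = stored_tb / local_compression

print(f"Raw retention footprint:  {raw_capacity_tb:.1f} TB")
print(f"Deduplicated footprint:   {dedupe_capacity_tb:.1f} TB")
print(f"Effective reduction:      {raw_capacity_tb / dedupe_capacity_tb:.1f}x")
```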

Data Domain’s deduplication methods are useless without backup software in place. By leveraging Data Domain’s OST functionality (DDBoost), we can now combine Data Domain’s deep compression ability with the superior archiving abilities of Avamar.

Through source-based deduplication on the host side, Avamar enables environments with lower bandwidth and longer backup windows to complete backups much faster. Also, after the initial backup, this strategy results in less data on disk, which is good for everyone.


Where Data Domain shines the most is in its ability to compress the already-deduplicated data up to 10X more than Avamar. This integration allows Avamar to send weekend, month-end, and year-end backups to the Data Domain, allowing for much longer retention. This expands Avamar’s reach into extended retention cycles on disk, which is one of the faster restore methods.

Data Domain’s “target-based” deduplication technology means the backup/deduplication process happens at the DD appliance itself: Data Domain is the target, and it is there that the deduplication takes place.

All data has to go over the network to the target when leveraging Data Domain. If there is a need to back up 10TB, then 10TB needs to traverse the network to the DD appliance. When leveraging Avamar, I may only need to send 2TB over the network, because the data has been deduplicated at the source before being pushed to the target.
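
Here is a small sketch of what that difference means in wall-clock terms. The link speed and efficiency factor are assumptions; only the 10TB-versus-2TB comparison comes from the example above.

```python
# Transfer-time comparison for the 10TB-vs-2TB example (illustrative assumptions).
def transfer_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough wall-clock hours to move data_tb over a link_gbps link."""
    data_bits = data_tb * 8 * 1e12             # TB -> bits (decimal units)
    usable_bps = link_gbps * 1e9 * efficiency  # account for protocol/backup overhead
    return data_bits / usable_bps / 3600

link_gbps = 1.0  # assumed 1 Gb/s backup network

target_side = transfer_hours(10.0, link_gbps)  # full 10TB traverses the network (target-based dedupe)
source_side = transfer_hours(2.0, link_gbps)   # only ~2TB after source-side dedupe (Avamar-style)

print(f"Target-side dedupe: ~{target_side:.1f} h on a {link_gbps:g} Gb/s link")
print(f"Source-side dedupe: ~{source_side:.1f} h on a {link_gbps:g} Gb/s link")
```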

Taking Data Domain even further, Avamar can replicate backups to another Data Domain off site.

Allowing Avamar to control the replication enables it to keep the catalogues and track the location of the backup. This ability gives the end user ease of management when a request is made to restore. The prerequisites for DDBoost are both the license enabler for DDBoost and the Replicator on Data Domain. Overall this integration of the two “Best Deduplication Appliances” allows the end user a much wider spectrum of performance, use and compliance.

For a deeper dive into deduplication strategies, read the article from IDS CTO Justin Mescher about Data Domain vs EMC Avamar: which deduplication technology is better.


Being Successful, My Journey to Stay Sharp

By IDS Engineer | Cisco, How To, VMware

Learning is the one word that comes to mind when I think about being successful in the world of technology. In previous years I bought into the traditional method of learning by taking vendor training and follow-up exams. After failing an exam last year I began to understand that I had to develop a new methodology of learning. I wanted to pass IT exams on the first attempt and retain the required knowledge. I had to adapt my style of training.

Traditionally, companies understand that in order to keep their IT employees from leaving they have to offer incentives beyond money; most employees want to learn. The majority of companies I have worked for seem to follow a similar approach to training.

1. Determine the technical proficiency required
2. Train and learn the material deemed important by the vendor
A. Attend an authorized training course
B. Read books or PDF’s related to the subject
3. Take the exam
A. Re-take the exam if needed


I found that learning had a profound ripple effect well beyond my personal advancement. The company I worked for benefited from vendor partnerships, as certain accreditations provided access to different markets and lead generation. When consulting with potential customers on the front end (sales) or the back end (implementation), the opportunity for additional business with the customer grows substantially. This happens because the customer feels confident that you are a subject matter expert. You become their trusted adviser.

When I was hired as a Post-Sales Engineer at Integrated Data Storage (IDS), I was informed about the training curriculum and introduced to the company’s learning methodology. The major issue I encountered with the learning cycle was how much there was to learn.

I recall the pain and aggravation of re-taking exams for EMC ISM, VMware VCP 4.1, and VMware VCA4-DT. Even though I spent considerable time studying the content, I was devastated when I failed these exams. It was my goal to pass exams on the first attempt; I was determined to diagnose the problem and change it.

One year ago I reviewed my approach to studying and quickly discovered that my habits resembled the traditional learning method: take a course, then take a test. This structure was not working for me, so I began to create my own roadmap for success. I created a list of tools and resources that became indispensable, such as books, PDFs, computer-based training (CBTs), home labs, specialized learning centers, vendor-specific training, blogs, and knowledge base articles. I was immersed in training and embraced my new learning methodology.

In February 2012 I put my new study methods to the test. The results were immediate and positive. By combining multiple study strategies, I took and passed the VCP5, VCP5-DT, NetApp NCDA, and Citrix XenDesktop exams on my first attempt. Through a restructured training curriculum, I achieved my goal of passing these exams the first time.

While revamping my studying habits I found several training secrets which contributed to my success.

TrainSignal is a Chicago-based company with CBTs that I loaded on my tablet for offline viewing. The instant online access interface is intuitive and easy to use and they offer transcender practice exams with select courses. The trainers at TrainSignal are some of the most respected, certified, talented and personable individuals in the industry. I was able to follow each of them on Twitter and ask questions through social media. The bonus for me was that TrainSignal offers a majority of their individual training courses for around $400.

Current Technologies Computer Learning Center (CTCLC) is a Portage, Indiana, learning center maintained by a team of certified instructors. CTCLC is authorized by vendors across many different technologies, which allows easy access to exams and certifications. By being devoted to this local learning center, I was able to get extra stick time with valuable classroom hardware. Another great benefit of CTCLC is their flexibility in rescheduling courses: when an emergency at work required my immediate attention, the staff at CTCLC was kind enough to help reschedule my courses.

Benchmark Learning is an authorized learning center that specializes in technologies for specific vendors. I used Benchmark Learning for my Citrix XenDesktop certification as I was very impressed with their style and outline. Benchmark Learning kept their training status up-to-date on Citrix’s website. They were very responsive and accommodating to my request for scheduling.

Vendors provided additional training, which helped me obtain additional time learning specific solutions and technologies. Aside from the three companies mentioned, vendors like Nutanix, VMware, Citrix and EMC provided in-depth knowledge through partner related training videos, PDFs and white papers.


Home labs provided actual hands-on experience for my training. Combined with the theory-based knowledge gained in classes, CBT videos, and online material, I was able to solidify my knowledge of specific solutions and technologies by having these items available at my house. After checking eBay and Craigslist, I found a VMware vSphere-compatible server and began building my lab. My home lab now consists of several Dell servers, a free iSCSI SAN using OpenFiler, a WYSE P20 zero client, an HP laptop as a thin client, an iPad, a Mac Mini, and a handful of trial licenses for VMware, Microsoft, Citrix, VEEAM, Liquidware Labs, TrendMicro, and Quantum.

2013 is here and my vision for this year is to rebuild my home lab with even more hardware. My goal is to provide real design examples built on VMware and Citrix technologies to continue to take my learning to the next level.


Embracing The Cloud To Survive Change

By IDS Engineer | Cloud Computing, Storage

I am deeply involved in technology. I find it fascinating to see how technology rapidly changes our world like some freewheeling bulldozer plowing an uncaring path right through the center of society.

I was sitting in an airport over the Holidays waiting for my plane to arrive and I began thinking about the death of the travel agent.

I will admit that I am old enough to remember when travel involved travel agencies. You could call the airlines for a reservation, but it took forever and the prices you received were usually worse than the prices from a travel agent. Your best bet was to call a travel agent who would coordinate everything for you.

<blockquote>Businesses are beginning to realize that they can save a lot of money, improve flexibility, agility and capability by embracing the cloud.</blockquote>

Ironically, when you called a travel agent, the reason they could get you those great deals was that they had a computer with special access to airline reservation systems. They could see all the flights and available hotel rooms. They had special pricing based on what they could see and how much they sold. The very thing they were leveraging to be profitable, in truth to exist at all, would be the same thing that would show up ten years later and demolish their business. The one doing the bulldozing is the cloud.

While the term cloud is relatively new, the concept isn’t. Companies like Travelocity and Orbitz are just cloud-based travel agents. Today, these cloud services are what we use to book travel ourselves. We have become our own travel agents, and the job of the professional travel agent has been bulldozed.

That is not to say travel agents don’t exist; they do. But now they are boutique shops servicing special needs like exotic foreign travel. There are also travel agents inside companies to control costs and increase convenience. But those agents largely leverage the same tools you and I have access to. They are large-scale cloud users.


So here I am in Austin Airport, surrounded by people whose lives, other than the former Travel Agents, have been made fundamentally better, more flexible, more agile and more cost effective all because of the cloud. This got me thinking about IDS and its cloud offerings. While on the surface they don’t seem related, underneath they are exactly the same. IT is becoming a commodity.

Businesses are beginning to realize that they can save a lot of money, improve flexibility, agility and capability by embracing the cloud. There will always be the special scenarios, yet the majority of IT is not unique. The bulldozer is revving up and it’s coming after traditional IT services.

IDS is already there in the cloud, ready to help companies leverage this new service and move them from becoming victims, in front of the oncoming blade, to being in the driver’s seat, shaping the future.

[Additional reading: “My Personal Journey To The Cloud” written by IDS CTO Justin Mescher.]

Photos by: @ExtraMedium and @Salicia

Why, Oh Why To Do VDI?

By IDS Engineer | Cloud Computing, Security, Storage, View, Virtualization, VMware

I recently became a Twit on Twitter, and have been tweeting about my IT experiences with several new connections. In doing so, I came across a tweet about a contest to win some free training, specifically VMware View 5 Essentials from @TrainSignal – sweet!

Below is a screen capture of the tweet:

[Screen capture of the tweet]

A jump over to the link provided in the tweet explains that, in order to win, one or all of the questions below should be answered in a comment on that blog post. Instead of commenting there, why not address ALL of the questions in my own blog article at IDS?! Without further ado, let’s jump right into the questions:

Why are Virtual Desktop technologies important nowadays, in your opinion?

Are you kidding me?!

If you are using a desktop computer, a workstation at work, or a laptop at home or work, you are well aware that technology moves so fast that updated versions are released as soon as you buy a “new” one. Not to mention the fact that laptops usually come configured with what the vendor or manufacturer thinks you should be using, not what is best, most efficient, or fastest. More often than not, you are provided with what someone else thinks is best for the user. The reality is that only you – the user – know what you need, and if no one bothers to ask you, there can be a feeling of being trapped, having no options, or resignation, all of which tend to lead to the dreaded “buyer’s remorse.”

When you get the chance to use a virtual desktop, you finally get a “tuned-in” desktop experience similar to or better than the user experience that you have on the desktop or laptop from Dell, HP, IBM, Lenovo, Gateway, Fujitsu, Acer and so on.

Virtual desktops offer a “tuned” experience because architects design the infrastructure and solution end to end: from the operating system in the virtual desktop (be it Windows XP, Windows 7, or soon Windows 8) to the right number of virtual CPUs (vCPUs), the amount of guest memory, disk IOPS, network IOPS, and everything else you wouldn’t want to dive into the details of. A talented VDI architect will consider every single component when designing a virtual desktop solution, because the user experience matters – there is no selling them on the experience “next time.” Chances are, if you have a negative experience the first time, you will never use a virtual desktop again, nor will you have anything good to say when the topic comes up at your neighborhood barbecue or pool party.
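
To give a feel for the kind of math a VDI architect runs, here is a minimal sizing sketch. Every per-desktop and per-host figure is an assumption for illustration only, not a recommendation for any particular product or workload.

```python
# Minimal VDI sizing sketch (all figures are illustrative assumptions).
import math

desktops = 500

# Assumed per-desktop averages
vcpus_per_desktop = 2
ram_gb_per_desktop = 2
steady_state_iops = 10      # per-desktop steady-state disk IOPS

# Assumed per-host capacity
cores_per_host = 16
vcpu_overcommit = 6         # vCPUs per physical core tolerated for desktop workloads
ram_gb_per_host = 192

hosts_for_cpu = math.ceil(desktops * vcpus_per_desktop / (cores_per_host * vcpu_overcommit))
hosts_for_ram = math.ceil(desktops * ram_gb_per_desktop / ram_gb_per_host)
hosts_needed = max(hosts_for_cpu, hosts_for_ram) + 1   # +1 host for failover headroom

total_iops = desktops * steady_state_iops

print(f"Hosts needed (CPU-bound): {hosts_for_cpu}, (RAM-bound): {hosts_for_ram}")
print(f"Hosts to deploy (with one spare): {hosts_needed}")
print(f"Aggregate steady-state IOPS the storage must deliver: {total_iops}")
```

The real exercise also accounts for boot and login storms, display protocol bandwidth, and profile storage, but the back-of-envelope structure is the same.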

The virtual desktop is imperative because it drives the adoption of heads-up displays (HUDs) in vehicles, at home, and in the workplace, as well as slimmer-interface tablet devices. Personally, when I think about the future of VDI, I envision expandable OLED flex screens with touch-based (scratch-resistant) interfaces that connect wirelessly to private or public cloud-based virtual desktops. The virtual desktop is the next frontier, leaving behind the antiquated desktop experience that has been dictated to the consumer by vendors and manufacturers and that simply does not give us what is needed the first time.

What are the most important features of VDI in your opinion?

Wow, the best features of VDI require a VIP membership in the exclusive VDI community. Seriously though, users and IT support staff are often the last to be asked about the most important features, yet they are the first to be impacted when a solution is architected, because those two groups are the most in lock-step with the desktop user experience.

The most effective way for me to leave a lasting impression is to lay out the most important features in a couple of bullet statements:

  • Build a desktop in under 10 minutes – how about 3 minutes?
  • Save personal settings and recover personal desktop settings immediately after rebuilding a desktop.
  • Increased speed with which more CPU or RAM can be added to a virtual desktop.
  • Recovery from malware, spyware, junkware, adware, trojans, viruses, everything-ware – you save money by just rebuilding in less than 10 minutes.
  • Access to the desktop from anywhere, securely.
  • It just works, like your car’s windshield!

That last point brings me to the most important part of VDI, that when architected, implemented and configured properly, it just works. My mantra in technology is “Technology should just work, so you don’t have to think about technology, freeing you up to just do what you do best!”

What should be improved in VDI technologies that are now on the market?

The best architects, solution providers and companies are the best because they understand the current value of a solution, in this case VDI, as well as the caveats and ask themselves this exact question. VDI has very important and incredibly functional features, but there is a ton of room for improvement.

So, let me answer this one question with two different hats on – one hat being a VDI Architect and the other hat being a VDI User. My improvement comments are based on the solution provided by VMware as I am most familiar with VMware View.  In my opinion, there is no other vendor in the current VDI market who can match the functionality, ease of management and speed that VMware has with the VMware View solution.

As a VDI Architect, I am looking for VMware to improve their VMware View product by addressing the below items:

  • Separate VMware View Composer from being on the VMware vCenter Server.
  • Make ALL of the VMware View infrastructure applications, appliances and components 64-bit.
  • Figure out and support Linux-based linked-clones. (The Ubuntu distribution is my preference.)
  • Get rid of the VMware View Client application – this is 2012.
  • Provide a fully functional web-based or even .hta based access to the VMware View virtual desktop that is secure and simple.
  • Build database compatibility with MySQL, so there is a robust FREE alternative to use.
  • Build Ruby-on-Rails access to manage the VMware View solution and database. Flash doesn’t work on my iPad!

As a VDI User, I am looking for VMware to improve:

  • Access to my virtual desktop – I hate installing another application that requires “administrator” rights.
  • Fix ThinPrint and peripheral compatibility or provide a clearer guide for what is supported in USB redirection.
  • Support USB 3.0 – I don’t care that my network or Internet connection cannot handle the speed – I want the sticker that says that the solution is USB 3.0 compatible and that I could get those speeds if I use a private cloud based VDI solution.
  • Tell me that you will be supporting the Thunderbolt interface and follow through within a year.
  • Support web-cams, I don’t want to know about why it is difficult, I just want it to work.
  • Support Ubuntu Linux-based virtual desktops.

In summary, you never know what you will find when using social media. The smallest of tweets or the longest of blog articles can elicit a thought that will provoke either a transformation in process or action in piloting a solution. If you are looking to pilot a VDI solution, look no further… shoot me an email or contact Integrated Data Storage to schedule a time to sit down and talk about how we can make technology “just work” in your datacenter!  Trust me when I say, your users will love you after you implement a VDI solution.

Photo Credit: colinkinner

The Future Of Cloud: Managing Your Data Without Managing Your Data

By IDS Engineer | Backup, Cloud Computing, Disaster Recovery, How To

The catch phrase of the last few years has been “The Cloud.” What REALLY is the cloud? By the consumer’s definition, it is when I buy a video on Amazon and magically it is available to me anywhere I go. The video is then up in the ambiguous cloud. I don’t know what the hardware or software is, or even if the data is in the same country as me. I just know it’s there, and I sleep at night knowing that my investment is protected (I buy a lot of movies). There’s so much more to it than that, and it is time that businesses begin to leverage the power of the cloud.

How can the cloud be applied to the business? In tough economic times the common saying is “Do more with less.” Let’s face it: even in the best of times no one is going to walk up to the IT Director or CIO and say, “Here you go, more money!” Instead it is a constant battle of doing more with less, and in many instances we in the field are just trying to keep our heads above water. CEOs and department heads want all of their data protected, available, and accessible at any time, usually on a budget that frankly cannot cover all of the expenses. To plan a normal disaster recovery, a number of factors have to be considered:

  1. Where will the datacenter be?
  2. How much will rack space, power, and cooling cost?
  3. How many and what products do we need to install?
  4. How will we manage it?
  5. How will we connect to it and maintain redundancy?
  6. Who will manage it?
  7. Do we need to hire extra staff to manage it?

That’s just a sample of the questions needed to even begin the project. It will also take months, maybe a year, to design and implement the solution, and it will be very costly. This is where the cloud comes in. All of the resources you need are already available, protected, and scalable. Need more data storage? No problem. Need more compute power? We have that ready too. All it takes is an email. Really, who wants to manage physical servers anyway? It’s time to start looking at data, memory, and computing as simply resources and less like a capital investment.
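
As a rough illustration of the “do more with less” argument, here is a simple build-versus-consume comparison. Every figure is an invented placeholder; substitute real quotes, salaries, and service pricing before drawing any conclusions.

```python
# Rough "build vs. consume" comparison for a DR site (illustrative placeholders only).
years = 3

# Do-it-yourself DR site
hardware_capex = 400_000   # servers, storage, network for the recovery site (assumed)
colo_per_month = 5_000     # rack space, power, cooling (assumed)
admin_fraction = 0.5       # half of one FTE to manage it (assumed)
admin_salary = 100_000     # fully loaded annual cost (assumed)

diy_total = (hardware_capex
             + colo_per_month * 12 * years
             + admin_fraction * admin_salary * years)

# Managed cloud DR, paid as a monthly service
cloud_per_month = 12_000   # assumed service fee

cloud_total = cloud_per_month * 12 * years

print(f"DIY DR site over {years} years:      ${diy_total:,.0f}")
print(f"Managed cloud DR over {years} years: ${cloud_total:,.0f}")
```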

Beyond this, what is to stop you from running your entire infrastructure in the cloud? Why not pay for your infrastructure the same way you pay the company phone bill? Here is where managed cloud services come into play: rather than importing more costs into your datacenter, you export that time and effort to a managed services provider at a fraction of the cost. IDS is ready, willing, and able – just a click away.

Photo Credit: Fractal Artist

Do You Learn From Data Breaches And Disasters Or Observe Them?

By IDS Engineer | Backup, Disaster Recovery, Security

How many articles or blog posts have you read that talked about the “lessons we learned” from 9/11, the Japanese earthquake/tsunami, the Joplin tornado, Hurricane Katrina, or <insert disastrous event here>? I see them all the time, and after reading a very interesting article in the Winter issue of the Disaster Recovery Journal (you may have to register to view the full article), I got to thinking about this concept.

What is the indication that we have learned something? The word learn has several definitions, but my favorite (thanks to dictionary.com) is this:

to gain (a habit, mannerism, etc.) by experience, exposure to example, or the like; acquire …

If you learn something, you gain a new habit or mannerism; in other words, you change something.

What does it mean to observe? Again, from dictionary.com

to regard with attention, especially so as to see or learn something …

Just notice the difference. Learning means to take action; observing means to watch so you can learn. This really hits home with me and how I talk to my customers, because we talk A LOT about all of the lessons we have learned from various disasters. I don’t think it’s just me, either. Do a Google search on the phrase “lessons learned from hurricane Katrina” and you get 495,000 hits. Do a search on “lessons learned from Japanese tsunami” and you get 2.64 million hits. This gets talked about A LOT.

But how much are we really learning? After Katrina, how many of you proactively, objectively assessed or had someone assess your ability to maintain a revenue stream if a debilitating disaster struck your center of operations, whatever your business is? How many of you looked at what happened in Japan, or in Joplin, MO, and said: if that happened to us, we’d be able to sustain our business and we aren’t just fooling ourselves?

Let’s put this in a less dramatic and more regularly occurring context. How many of you saw the completely insane events surrounding the breach of HBGary and actually DID SOMETHING to change behavior, or build new habits, to ensure you didn’t suffer a similar fate? Many of us observed the event and were aghast at its simplicity of execution and the thoroughness with which information was exposed, but how many people actually changed the way their security is addressed and learned from the event? Have you looked at the ten-year breach at Nortel, or the data breach at Symantec, and set in motion a course of events that will do everything possible to prevent similar issues in your own organization?

These problems are not going away. They are becoming more and more prevalent, and they are not solely the problem of global Fortune 500 companies. Any organization that does any type of business has data that could potentially be useful for nefarious purposes in the wrong hands. It is our responsibility as stewards of the data to learn the lessons and take action to secure and protect our data as though it were our money — because it is.

Photo Credit: Cherice
