Category

Data Loss Prevention


Is Your Data Protection Risky or Right-on?

By | Cybersecurity, Data Breach, Data Loss Prevention, Disaster Recovery, Personal Data Management, Security | No Comments

A Three-step Assessment Guide

Data is your company’s most valuable asset. Without it, you don’t have a company at all. Even so, many enterprises do not dedicate the resources needed to ensure their data protection strategy and solutions are covering them effectively. More often than not, I see considerable lag time between when enterprises invest in new technology and when they invest in an appropriate solution to protect it. This gap is a perilous window during which data is ripe for theft, corruption and/or loss. Read More


Gone Phishing

By | Cybersecurity, Data Breach, Data Center, Data Loss Prevention, Phishing, Security, SPAM Filters, User Behavior Analytics | No Comments

One of the Many Ways Cyber Criminals are After Your Data

Phishing sounds innocent enough, right? Echoing a relaxed pastime, phishing even has a name designed to put your guard down. Ironically, that is exactly how cyber phishing works—it's a ploy that tricks users into relaxing their guard so criminals can gain access to valuable personal and business data. So how serious is the risk? It's serious and very costly. Read More


The Best Next Thing in BYOD: VMI

By | Data Loss Prevention, How To, Personal Data Management, Security, Virtualization | No Comments

BYOD (Bring Your Own Device) is a buzzword we have heard in IT and security circles for years. It speaks to questions that every business leader and IT executive must ask and answer: how do we secure and protect the growing number of mobile technologies (personal or company issued) employees want to use at work? How do we give a mobile, tech-centric workforce what it needs to succeed without putting our data and company at risk? Read More


Advice from the Expert, Best Practices in Utilizing Storage Pools

By | Backup, Cisco, Data Loss Prevention, EMC, How To, Log Management, Networking, Storage, VMware | No Comments

Storage Pools for the CX4 and VNX have been around for a while now, but I still see a lot of people doing things that go against best practices. First, let's start out by talking about RAID Groups.

Traditionally, to present storage to a host you would create a RAID Group consisting of up to 16 disks; the most commonly used RAID Group types were R1/0, R5, R6, and Hot Spare. After creating your RAID Group, you would create a LUN on it to present to the host.

Let's say you have (50) 600GB 15K disks on which you want to create RAID Groups; you could create (10) R5 4+1 RAID Groups. If you wanted (10) 1TB LUNs for your hosts, you could create a 1TB LUN on each RAID Group. Each LUN would then have the guaranteed performance of (5) 15K disks behind it, but at the same time, each LUN is capped at the performance of those (5) 15K disks.
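
To make that arithmetic concrete, here is a minimal Python sketch of the traditional layout; the disk count, the R5 4+1 geometry and the 1TB LUN size come from the example above, and the capacities are simplified raw numbers rather than formatted usable space.

# Sketch of the traditional RAID Group layout from the example above.
# Capacities are raw (unformatted) for simplicity.
DISKS = 50
DISK_SIZE_GB = 600
RAID5_GROUP = 5                               # 4 data + 1 parity (R5 4+1)

raid_groups = DISKS // RAID5_GROUP            # (10) RAID Groups
usable_per_group_gb = (RAID5_GROUP - 1) * DISK_SIZE_GB   # ~2400 GB raw per group
lun_size_gb = 1024                            # one 1TB LUN per RAID Group

print(f"{raid_groups} R5 4+1 RAID Groups, ~{usable_per_group_gb} GB usable each")
print(f"{raid_groups} LUNs of {lun_size_gb} GB, each backed by exactly {RAID5_GROUP} spindles")
# Each LUN is guaranteed -- and limited to -- the performance of (5) 15K drives.
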
[framed_box bgColor="#F0F0F0" rounded="true"] What if your LUNs require even more performance?

1. Create metaLUNs to keep this simple and effective (a rough sketch of the resulting layout follows this list).

2. Make (10) 102.4GB LUNs on each RAID Group, totaling (100) 102.4GB LUNs across your (10) RAID Groups.

3. Select the meta head from one RAID Group and expand it by striping it with (9) other LUNs, one from each of the other RAID Groups.

4. For each of the other metaLUNs, select the meta head from a different RAID Group and then expand it with LUNs from the remaining RAID Groups.

5. Each LUN then has the performance of all (50) 15K drives shared between them.

6. Once your LUNs are created, you also have the option of turning FAST Cache (if configured) on or off at the LUN level.
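
As a rough illustration of steps 1 through 5, the sketch below lays out the metaLUN striping in plain Python. The component LUN size and counts come from the steps above; treating the component from RAID Group n as metaLUN n's meta head is an assumption made purely to keep the example simple.

# Hypothetical sketch of the striped metaLUN layout described in steps 1-5.
COMPONENT_GB = 102.4
RAID_GROUPS = 10
LUNS_PER_GROUP = 10

# (100) component LUNs, identified as (raid_group, index_within_group)
components = [(rg, i) for rg in range(RAID_GROUPS) for i in range(LUNS_PER_GROUP)]
print(f"{len(components)} component LUNs of {COMPONENT_GB} GB")

# Build (10) metaLUNs: metaLUN n uses component n from every RAID Group,
# with the component in RAID Group n acting as the meta head (assumed).
for n in range(RAID_GROUPS):
    members = [(rg, n) for rg in range(RAID_GROUPS)]
    size_gb = len(members) * COMPONENT_GB
    print(f"metaLUN {n}: {size_gb:.0f} GB striped across {len(members)} RAID Groups")

# Every metaLUN now spans all (50) spindles, so the (10) LUNs share the
# performance of the whole disk set instead of (5) drives each.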

Depending on your performance requirement, things can quickly get complicated using traditional RAID Groups.

This is where CX4 and VNX Pools come into play.
[/framed_box]

EMC took the typical RAID Group types – R1/0, R5, and R6 – and made them available for use in Storage Pools. The chart below shows the different options for Storage Pools. The asterisks note that the 8+1 option for R5 and the 14+2 option for R6 are only available in the VNX OE 32 release.

Now, on top of that, you can have a Homogeneous Storage Pool – a Pool with only like drives, either all Flash, SAS, or NLSAS (SATA on CX4) – or a Heterogeneous Storage Pool – a Storage Pool with more than one tier of storage.

If we take our example of (50) 15K disks using R5 for RAID Groups and apply them to pools, we could simply create (1) R5 4+1 Storage Pool with all (50) drives in it. This would leave us with a Homogeneous Storage Pool, visualized below.

The chart to the right shows what happens underneath the Pool: it creates the same structure as the traditional RAID Groups. We would end up with a Pool containing (10) R5 4+1 RAID Groups underneath that you never see; you only see the (1) Pool with the combined storage of the (50) drives. From there you would create your (10) 1TB LUNs on the Pool, and it will automatically spread the LUNs across all of the RAID Groups underneath. It does this by creating 1GB chunks and spreading them evenly across the hidden RAID Groups. You can also turn FAST Cache on or off at the Storage Pool level (if configured).
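
Here is a hedged sketch of that chunking behavior. The 1GB slice size and the (10) private RAID Groups come from the description above; the strict round-robin placement is a simplification of the pool's real allocation logic.

# Simplified model of how a Storage Pool spreads a LUN across its
# private RAID Groups in 1 GB chunks (real allocation is more involved).
from collections import Counter

PRIVATE_RAID_GROUPS = 10
CHUNK_GB = 1
LUN_SIZE_GB = 1024

placement = Counter()
for chunk in range(LUN_SIZE_GB // CHUNK_GB):
    placement[chunk % PRIVATE_RAID_GROUPS] += 1   # round-robin across hidden RGs

for rg, chunks in sorted(placement.items()):
    print(f"private RAID Group {rg}: {chunks} x {CHUNK_GB} GB chunks")
# Each 1TB pool LUN ends up with ~102 GB on every hidden 4+1 group,
# so all (50) spindles service it without any manual metaLUN work.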

Another advantage of using a Storage Pool is the ability to create a Heterogeneous Storage Pool, which gives you multiple tiers where 'hot' data moves up to the faster drives and 'cold' data moves down to the slower drives.

You can also create thin LUNs in a Storage Pool. The only real advantage of thin LUNs is the ability to over-provision the Storage Pool. For example, if your Storage Pool has 10TB of space available, you could create 30TB worth of LUNs; your hosts would think they have 30TB available to them, when in reality you only have 10TB worth of disk.

The problem comes when the hosts think they have more space than they really do and the Storage Pool starts to get full: there is the potential to run out of space and have hosts crash. They may not actually crash, but it's safer to assume that they will crash or that data will become corrupt, because when a host tries to write data it believes it has space for and that space isn't really there, something bad will happen.
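
The risk is easy to quantify. A minimal sketch, using the 10TB pool and 30TB of thin LUNs from the example (the consumed figure is a made-up illustration):

# Oversubscription math for the thin-provisioning example above.
pool_capacity_tb = 10.0          # physical disk behind the pool
thin_luns_tb = 30.0              # capacity the hosts believe they have
consumed_tb = 8.5                # hypothetical: space actually written so far

oversubscription = thin_luns_tb / pool_capacity_tb      # 3:1 in this example
percent_full = consumed_tb / pool_capacity_tb * 100

print(f"Oversubscription ratio: {oversubscription:.1f}:1")
print(f"Pool is {percent_full:.0f}% full")
if percent_full >= 80:
    print("WARNING: hosts still see free space, but the pool is nearly out of disk")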

In my experience, people typically want to use thin LUNs only for VMware, yet they will also make the Virtual Machine disks thin. There is no real point in doing this: creating a thin VM on a thin LUN grants no additional space savings, just additional performance overhead, since thin LUNs already carry a performance hit.

After the long intro to how Storage Pools work (and it was just a basic introduction – I left out quite a bit of detail), we get to what to do and what not to do.

Creating Storage Pools

Choose the correct RAID type for your tiers. At a high level, R1/0 is for write-intensive applications, R5 is for read-heavy workloads, and R6 is typically used on large NLSAS or SATA drives – and is highly recommended for those drive types due to their long rebuild times.

Use the number of drives in the preferred drive count options. This isn't an absolute rule – there are ways to manipulate how the RAID Groups underneath are created – but as a best practice, stick to those drive counts.

Keep in mind the size of your Storage Pool. If you have FAST Cache turned on for a very large Storage Pool but only a small amount of FAST Cache, it can be spread too thin to be effective.

Remember too that the larger your Storage Pool, the more data you can lose in a disaster – for example, if one of the RAID Groups underneath suffers a double drive fault in R5, a triple drive fault in R6, or loses both disks of a mirrored pair in R1/0.

Expanding Storage Pools

Use the number of drives in the preferred drive count options. On a CX4, or a VNX running a release earlier than VNX OE 32, the best practice is to expand by the same number of drives already in the tier you are expanding, because data will not relocate within a tier. On a VNX running at least OE 32, you don't need to double the size of the pool, because the Storage Pool can relocate data within the same tier of storage, not just up and down between tiers.

Be sure to use the same drive speed and size for the tier you are expanding. For example, if you have a Storage Pool with 15K 600GB SAS drives, you don't want to expand it with 10K 600GB SAS drives; they will land in the same tier and you won't get consistent performance across that tier. The same applies when creating Storage Pools. A quick sanity check along these lines is sketched below.
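
To keep those two rules honest before an expansion, something like the sketch below can help. The preferred drive counts reflect the options discussed above (R5 4+1 and 8+1, R6 6+2 and 14+2); the R1/0 4+4 entry and the exact thresholds are assumptions to verify against your array's release notes.

# Hedged sanity check for Storage Pool expansion, based on the best
# practices above. Preferred drive counts are assumptions to confirm
# against your FLARE/VNX OE release notes.
PREFERRED_COUNTS = {
    "R5":  (5, 9),    # 4+1, and 8+1 (8+1 requires VNX OE 32)
    "R6":  (8, 16),   # 6+2, and 14+2 (14+2 requires VNX OE 32)
    "R10": (8,),      # 4+4 (assumed preferred count)
}

def check_expansion(raid_type, new_drives, new_speed_rpm, new_size_gb,
                    tier_speed_rpm, tier_size_gb):
    """Warn about expansions that break the guidance in this post."""
    problems = []
    counts = PREFERRED_COUNTS[raid_type]
    if not any(new_drives % c == 0 for c in counts):
        problems.append(f"{new_drives} drives is not a multiple of a preferred count {counts}")
    if (new_speed_rpm, new_size_gb) != (tier_speed_rpm, tier_size_gb):
        problems.append("drive speed/size does not match the existing tier")
    return problems or ["looks consistent with best practice"]

# Example: expanding a 15K 600GB SAS tier with 10K drives of the same size.
for msg in check_expansion("R5", 10, 10_000, 600, 15_000, 600):
    print(msg)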

Graphics by EMC

To Snapshot Or Not To Snapshot? That Is The Question When Leveraging VNX Unified File Systems

By | Backup, Data Loss Prevention, Disaster Recovery, How To, Replication, Security, VMware | No Comments

For those of you who are leveraging VNX Unified File systems, were you aware that you have the ability to checkpoint your file systems?

If you don't know what checkpoints are, they are point-in-time copies of your file system. The VNX gives you the ability to automate the checkpoint process: checkpoints can run every hour, or at any designated interval, and can be kept for whatever length of time is necessary (assuming, of course, that your data center has enough space available in the file system).

Checkpoints by default are read-only and are used to revert files, directories and/or the entire file system to a single point in time.  However, you can create writable checkpoints which allow you to snap an FS, export it, and test actual production data without affecting front-end production. 

VNX checkpoints also leverage Microsoft VSS, allowing users to restore their files to previous points in time created by the VNX. With this integration, users can restore their own files, which avoids the usual calls from users who have accidentally corrupted or deleted their files. Still, there are some concerns about how big snapshots can get. The VNX will dynamically grow the checkpoints based on how long you need them and how many you take each day. Typically, the most a snapshot will consume is about 20% of the file system size, and even that percentage depends on how much data you have and how frequently the data changes.
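
As a rough way to budget for that overhead, the sketch below estimates checkpoint space from the file system size, an assumed daily change rate and the retention period, and compares it with the 20% rule of thumb mentioned above.

# Rough checkpoint space estimate for a VNX file system.
# The 20% ceiling is the rule of thumb from this post; the change rate
# below is a placeholder, not a measured value.
fs_size_tb = 8.0
daily_change_rate = 0.02        # assume 2% of the file system changes per day
retention_days = 7

estimated_tb = fs_size_tb * daily_change_rate * retention_days
rule_of_thumb_tb = fs_size_tb * 0.20

print(f"Estimated checkpoint space: {estimated_tb:.2f} TB over {retention_days} days")
print(f"20% rule-of-thumb ceiling:  {rule_of_thumb_tb:.2f} TB")
if estimated_tb > rule_of_thumb_tb:
    print("Change rate is high enough that checkpoints may exceed the typical 20%")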

For file systems larger than 16TB, completing successful backups can be a difficult task. With NDMP (Network Data Management Protocol) integration, you can back up the checkpoints and store just the changes instead of the entire file system.
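
For a sense of the savings, here is a quick comparison of a full dump versus checkpoint-based incrementals; the file system size and change rate are placeholder figures, not measurements.

# Placeholder comparison of full versus checkpoint-based incremental backups.
fs_size_tb = 20.0            # a file system beyond the 16TB comfort zone
daily_change_rate = 0.01     # assume 1% of the data changes per day
days_in_cycle = 7

full_backup_tb = fs_size_tb
incrementals_tb = fs_size_tb * daily_change_rate * days_in_cycle

print(f"Weekly full:             {full_backup_tb:.1f} TB moved")
print(f"Checkpoint incrementals: {incrementals_tb:.1f} TB moved over {days_in_cycle} days")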

Take note that replicating file systems to other VNX arrays will carry your checkpoints over, giving you an off-site copy of the checkpoints made on the production FS. Backups of larger file systems can become an extremely difficult and time-consuming job – by leveraging VNX Replicator and checkpoints, you gain the ability to manage the availability of your data from any point in time you choose.

Photo Credit: Irargerich

Top 3 Security Resolutions For 2012: Moving Forward From “The Year Of The Breach”

By | Backup, Data Loss Prevention, Disaster Recovery, How To, Security | No Comments
I always feel a sense of renewal with the turn of the calendar. Many people use this time to set new goals for the new year and take the opportunity to get re-grounded. As I reflect on the security landscape of 2011, aptly named "The Year of the Breach," it seems like the perfect time to make some resolutions for 2012 that everyone with data to protect can benefit from.

 

1. Focus More on Security and Not Just on Compliance

On a day-to-day basis I speak with a wide range of companies, and I often see organizations so concerned about checking the box for compliance that they lose sight of actually minimizing risk and protecting data. Regardless of which regulation in the long list of alphabet soup applies (SOX, GLBA, PCI, HIPAA), maintaining compliance is a daunting task.
 
As a security practitioner, limiting the business's exposure has always been my key concern: how can I enable the business while also minimizing risk? With this mindset, compliance helps ensure that I am doing my due diligence, and that all of my documentation is in order to prove it, keeping our customers and stakeholders happy and protected.
 
2. Ready Yourself for Mobile Device Explosion
 
The iPad is a pretty cool device. I'm no Apple fanboy by any stretch, but this tablet perfectly bridges the gap between my smartphone and my laptop. And I'm not the only one seeing these devices become more prevalent in the workforce. People are using them to take notes in meetings and give presentations, yet it isn't users driving the business to formally support these devices; many organizations are simply allowing their employees to purchase their own devices and use them on corporate networks.
 
If employees can work remotely and be happier and more efficient with these devices, security admins can't and shouldn't stand in the way. We must focus on protecting these endpoints to ensure they don't get infected with malware. We also have to protect the data on these devices to ensure that corporate data isn't misused or stolen when it is spread across so many different devices.
 
3. Play Offense, Not Defense
 
I've worked in IT security for a long time, and unfortunately along the way I've seen and heard a lot of things that I wish I hadn't. Yet I can't afford to have my head in the sand regarding security. I need to have my finger on the pulse of the organization and understand what's happening in the business. It's important that I also understand how data is being used and why. Once this happens, I can put controls in place and be in a better position to recognize when something is abnormal. With the prevalence of botnets and other malware, it is taking organizations 4-16 weeks to even realize they have been compromised. Once this surfaces, they have to play catch-up to assess the damage, clean the infection and plug the holes that were found. Breaches can be stopped before they start if the company and/or security admin are adamant about being on offense.
 
These are my top three resolutions to focus on for 2012 – what is your list? I invite you to share your security resolutions in the comment section below; I'd love to know what your organization is focused on!
 
Photo Credit: simplyla
 
 

Following “The Year of the Breach” IT Security Spending Is On The Rise

By | Backup, Data Loss Prevention, Disaster Recovery, RSA, Security, Virtualization | No Comments

In IT circles, the year 2011 is now known as "The Year of the Breach". Major companies such as RSA, Sony, Epsilon, PBS and Citigroup experienced serious, high-profile attacks. That raises the question: if major players like these huge multi-million dollar companies are being breached, what does that mean for my company? How can I take adequate precautions to ensure that I'm protecting my organization's data?

If you’ve asked yourself these questions, you’re in good company. A recent study released by TheInfoPro states that:
37% of information security professionals are planning to increase their security spending in 2012.
In light of the recent security breaches, as well as the increased prevalence of mobile devices within the workplace, IT security is currently top of mind for many organizations. In fact, at most of the companies IDS is working with, I'm also seeing executives take more of an interest in IT security. CEOs and CIOs are gaining a better understanding of technology and what is necessary to improve the company's security posture in the future. This is a huge win for security practitioners and administrators because they are now able to get the top-level buy-in needed to make important investments in infrastructure. IT security is fast becoming part of the conversation when making business decisions.
 
I expect IT infrastructure to continue to change rapidly as virtualization grows and cloud-based infrastructures mature. We're also dealing with an increasingly mobile workforce, where employees use their own laptops, smartphones and tablets instead of those issued by the company. Protection of these assets becomes even more important as compliance regulations grow increasingly strict and true enforcement begins.
 
Some of the technologies that grew in 2011, and which I foresee growing further in 2012, include Data Loss Prevention, Application-aware Firewalls and Enterprise Governance, Risk and Compliance. Each of these technologies focuses on protecting sensitive information and ensuring that authorized individuals are using it responsibly. Moving into 2012, my security crystal ball tells me that everyone, from the top level down, will increase not only their security spend but, most importantly, their awareness of IT security and of just how much their organization's data is worth protecting.
 
Photo Credit: Don Hankins
 

What Happens When You Poke A Large Bear (NetApp SnapMirror) And An Aggressive Wolf (EMC RecoverPoint)?

By | Backup, Clariion, Data Loss Prevention, Deduplication, Disaster Recovery, EMC, NetApp, Replication, Security, Storage | No Comments

This month I will take an objective look at two competitive data replication technologies – NetApp SnapMirror and EMC RecoverPoint. My intent is not to create a technology war, but I do realize that I am poking a rather large bear and an aggressive wolf with a sharp stick.

A quick review of both technologies:

SnapMirror

  • NetApp’s controller based replication technology.
  • Leverages the snapshot technology that is fundamentally part of the WAFL file system.
  • Establishes a baseline image, copies it to a remote (or partner local) filer and then updates it incrementally in a semi-synchronous or asynchronous (scheduled) fashion.

RecoverPoint

  • EMC’s heterogeneous fabric layer journaled replication technology.
  • Leverages a splitter driver at the array controller, fabric switch, and/or host layer to split writes from a LUN or group of LUNs to a replication appliance cluster.
  • The split writes are written to a journal and then applied to the target volume(s) while preserving write order fidelity.

SnapMirror consistency is based on the volume or qtree being replicated. If the volume contains multiple qtrees or LUNs, those will be replicated in a consistent fashion. To get multiple volumes replicated consistently, you need to quiesce the applications or hosts accessing each of the volumes, take snapshots of all the volumes, and then SnapMirror those snapshots. An effective way to automate this process is to leverage SnapManager.

After the initial synchronization, SnapMirror targets are accessible as read-only. This provides an effective source volume for backups to disk (SnapVault) or tape. The targets are not read/write accessible, though, unless the SnapMirror relationship is broken or FlexClone is used to make a read/write copy of the target. The granularity of replication and recovery is based either on a schedule (standard SnapMirror) or on semi-synchronous continuous replication.

When failing over, the SnapMirror relationship is simply broken and the volume is brought online. This makes DR failover testing, and even site-to-site migrations, a fairly simple task. I've found that many people use this functionality as much for migration as for data protection or Disaster Recovery. Failing back to a production site is simply a matter of off-lining the original source, reversing the replication, and then failing it back once complete.

In terms of interface, SnapMirror is traditionally managed through configuration files and the CLI. However, the latest version of OnCommand System Manager includes an intuitive, easy-to-use interface for setting up and managing SnapMirror connections and relationships.

RecoverPoint is like TiVo® for block storage. It continuously records incoming write changes to individual LUNs or groups of LUNs in a logical container aptly called a consistency group. The writes are tracked by a splitter driver that can exist on the source host, in the fabric switch, or on a Clariion (VNX) or Symmetrix (VMAXe only today) array. The host splitter driver enables replication between non-EMC and EMC arrays (check the ESM for the latest support notes).

The split write I/O with RecoverPoint is sent to a cluster of appliances that package, compress and de-duplicate the data, then send it over a WAN IP link or a local Fibre Channel link. The target RecoverPoint Appliance then writes the data to the journal. The journaled writes are applied to the target volume as time and system resources permit, and they are retained as long as there is capacity in the journal volume, so the LUN(s) in the consistency group can be rewound to any retained point in time.
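
A back-of-the-envelope way to relate journal capacity to how far back you can rewind is sketched below; the write rate and protection window are hypothetical inputs, and real RecoverPoint sizing also accounts for compression, image-access reserve and burst rates.

# Back-of-the-envelope journal sizing for a journaled replication setup.
# Inputs are hypothetical; real sizing must account for compression,
# reserved space, and burst write rates.
write_rate_mb_s = 40.0            # sustained change rate into the consistency group
protection_window_hours = 24      # how far back you want to be able to rewind
overhead = 1.2                    # assumed journal metadata/reserve overhead

journal_gb = write_rate_mb_s * 3600 * protection_window_hours * overhead / 1024

print(f"Approximate journal needed for a {protection_window_hours}h window: {journal_gb:.0f} GB")
# If the journal is smaller than this, older points in time roll off sooner
# and the rewind window shrinks accordingly.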

In addition to remote replication, RecoverPoint can also replicate to local storage. This option is available as a standalone feature or in conjunction with remote replication.

RecoverPoint has a standalone Java application that can be used to manage all of the configuration and operational features. There is also integration for management of consistency groups by Microsoft Cluster Services and VMware Site Recovery Manager. For application-consistent "snapshots" (RecoverPoint calls them "bookmarks"), EMC Replication Manager or the KVSS command line utilities can be leveraged. Recently, a "light" version of the management tool has been integrated into the Clariion/VNX Unisphere management suite.

So, sharpening up the stick… NetApp SnapMirror is a simple-to-use tool that leverages the strengths of the WAFL architecture to replicate NetApp volumes (file systems) and update them either continuously or on a scheduled basis using the built-in snapshot technology. Recent enhancements to System Manager have made it much simpler to use, but it is limited to NetApp controllers. It can replicate SAN volumes (iSCSI or FC LUNs) in NetApp environments, as they are essentially single files within a volume or qtree.

RecoverPoint is a block-based SAN replication tool that splits writes and can recover to any point in time that exists in the journal volume. It is not built into the array; it is a separate appliance that sits in the fabric and leverages array-, fabric- or host-based splitters. I would make the case that RecoverPoint is a much more sophisticated block-based replication tool that provides a finer level of recoverable granularity, at the expense of being more complicated.

 Photo Credit: madcowk

Why Data Loss Prevention (DLP) Matters, Compliance Regulations or Not

By | Data Loss Prevention, RSA, Security | No Comments

Having worked in IT for as long as I have, I find that the general public often assumes I have magical powers as I excitedly speak technological jargon while their eyes glaze over. I'm sure everyone in this industry has had similar experiences. However, it's our job to translate our "techno geek mumbo jumbo" into broad terms everyone can understand. Security practitioners are responsible for giving business leaders the information they need to make decisions that drive and enable their business. CEOs, HR Directors and Finance Managers don't care about bits, files and unstructured data.

They do care, however, if confidential, non-public information about the organization makes it into the public eye.

What most people don’t understand is that data loss is often accidental and businesses need to implement processes and procedures for educating their employees about acceptable best practices. As much as we’d like to, we can’t stand over everyone’s shoulder to instruct them on when they can copy data to a USB device or email a document to their personal Gmail account so they can work on it from home.

This is where Data Loss Prevention (DLP) technology comes into play.

DLP is used to monitor, identify and protect sensitive and/or confidential data. It’s used to proactively monitor and protect data as it:

  1. Moves through the network (Data in Motion)
  2. Becomes stored data (Data at Rest)
  3. And as it’s being used (Data in Use)

The system not only discovers and classifies sensitive data, but also educates users on how to use company data properly. Plus, it helps to identify potential theft and misuse.
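
To make "discover and classify" concrete, here is a toy, regex-based detector of the kind a data-at-rest scan performs. It is purely illustrative and not how any particular DLP product (RSA's included) is implemented.

# Toy data-at-rest classifier: flags text that looks like SSNs or credit
# card numbers. Purely illustrative -- commercial DLP adds validation,
# fingerprinting, context analysis, and policy workflow on top of this.
import re

PATTERNS = {
    "SSN":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text):
    """Return the sensitive-data categories found in a block of text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

sample = "Employee 123-45-6789 paid with card 4111 1111 1111 1111."
print(classify(sample))   # ['SSN', 'Credit card']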

Many companies that I talk to think they only need to consider DLP in their environment if they have compliance regulations to adhere to. Compliance is certainly an important driver for implementing DLP, but it isn’t the only driver.

All businesses have information that gives them some sort of competitive advantage. How much is this information worth? How much damage would be done if it got into the hands of a competitor? I usually find out pretty quickly when we do an evaluation and the IT director and CIO see what's actually leaving their network.

Here's a 3-minute video I made with our marketing team on behalf of a customer, who showed it at his organization's annual company-wide meeting in order to explain and demo DLP technology.

Cisco UCS Test Drive With IDS

By | Cisco, Data Loss Prevention, Storage, VMware | No Comments

Last week, the folks here at IDS and the knowledgeable experts at Cisco teamed up for a test drive of their latest rollout: the UCS blade server. The feedback we received was overwhelming, in response both to the capabilities of the UCS and to the care and attention to detail that went into the development of the server (which makes sense, since Cisco is at the top of the industry in spending on research and development). That detail-oriented mindset during development is what differentiates the UCS from any other server on the market to date.

[image title=”cisco” height=”333″ width=”500″ align=”center”]http://www.integrateddatastorage.com/wp-content/uploads/2011/05/Cisco-Blog.jpg[/image]

We began our tour at Cisco's brand-spanking-new Rosemont offices, where we were in a state of awe as soon as we stepped over the threshold and were greeted by their demo data center. Everything about the layout of their offices is centered on the customer experience. Touring the facility offered a look at their awesome array of demo rooms, product displays and classrooms for product demonstrations. Distractions abounded as we moseyed into their UCS demo classroom.

[image align=”center” width=”500″ height=”400″]http://www.integrateddatastorage.com/wp-content/uploads/2011/05/Viewing-the-Individual-Blade.jpg[/image]

We proceeded into an informational session about the UCS, where the following points were emphasized about the blade server:

1. Embedded management.
2. Unified fabric computing.
3. Expanded memory.
4. Virtualized adaptor.
5. Stateless servers and service profiles.

Per our tour moderator, Cisco Consulting Systems Engineer Jon Ebmeier, the UCS handles more traffic per blade than any other server; as a prime example, a workload that needs 38 HP blades can run on only 19 UCS blades. Another point highlighted was the chassis' flexibility in working with the existing software in your data center. This flexibility also speaks to how well the UCS functions within a fully virtualized environment (check out our upcoming event that revolves around 100% virtualization).

[image align=”center” width=”500″ height=”333″]http://www.integrateddatastorage.com/wp-content/uploads/2011/05/cicso-engineer-among-servers.jpg[/image]

While we viewed the actual blade server, I received a great deal of feedback from customers and engineers. Below are some of the highlights from actual IT managers and IDS engineers:

  • “The organization of this interface is the best I’ve ever seen.”
  • “This would make my data center exponentially easier to manage.”
  • “From a cost and real estate/space perspective the UCS can handle a lot more with less.”
  • “The Catalina chip within the blade makes 4 memory slots appear as 1 to the server, thereby cutting down on the amount of physical servers needed at any time.”
  • “In the customer example we heard from Cisco that 308 concurrent VMs were running on one server, this is unbelievable and amazing, I’d love to see what the UCS could do for my data center.”
  • “Huge network traffic ability.”
  • “Flexibility in losing a blade and still being able to move data while not going offline.”

Overall, we had an amazing experience at Cisco. Learning about the specifics of the UCS server was definitely beneficial for everyone involved. I invite you to check out the in-the-field interview I conducted post-tour with our engineer, David Langley:

 

Photo Credits: idsdata
