Navigating Cloud Backup with the Data Positioning Compass

Your first step changes everything. It can even change the hemisphere you find yourself in.

The same applies to your data’s journey into the cloud, because the first few decisions you make affect everything else related to command and control. So which way will you go?

  • Will your first step be to the East, where all of your backups converge as point-in-time snapshots monitored by an administrator; or to the West, where data decisions are made by your users and files are constantly synchronized (copied) to multiple devices?
  • As for that second step, will your data go South into a vendor’s pre-built data stores, or North into data store(s) that you personally create, move, and delete with the cloud vendor(s) you choose?

Before continuing, it is important to note that every point of the Data Positioning Compass (DPC) leads to a viable destination. At issue is not the decision itself, but the alignment of those decisions with the desired outcomes.

For instance, if you want your employees to decide which files go into the cloud, and you are comfortable with a copy of those files distributed to every participant’s device(s), then a sync process (West) is a good choice.

Just keep in mind that sync products started out as workgroup productivity enhancers, so think carefully before using them as your organization’s sole backup: file writes and deletions are mirrored to every copy in near real time, so a mistake propagates everywhere.

Backup is different from sync. Backup data is no longer “in motion” once it is moved into an authoritative repository with controlled ingress and egress. The term backup also implies point-in-time snapshots of files and, just as important, of groups of files that belong to the same generation.
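To make the distinction concrete, here is a minimal sketch of the two models; the paths and helper names are hypothetical and do not describe any particular product. Sync mirrors the latest state of each file as it changes, while backup captures a timestamped generation of the whole file set.

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations, for illustration only.
SOURCE = Path("working")        # where users edit files
MIRROR = Path("mirror")         # a sync target: always reflects the latest state
SNAPSHOTS = Path("snapshots")   # a backup repository: accumulates generations

def sync_file(relative_path: str) -> None:
    """Sync semantics: copy the current file over the mirror copy.
    A deletion or corruption in SOURCE would propagate here just as quickly."""
    src = SOURCE / relative_path
    dst = MIRROR / relative_path
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)

def take_snapshot() -> Path:
    """Backup semantics: capture the whole file set as one point-in-time
    generation. Earlier generations remain untouched in the repository."""
    generation = SNAPSHOTS / time.strftime("%Y-%m-%dT%H-%M-%S")
    shutil.copytree(SOURCE, generation)
    return generation
```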

If that is what you want, then your first step should be to go East, with software that permits your administrator(s) to decide what is backed up and, critically, how long it is retained. Most organizations will reach a point where they do NOT want to simply keep everything forever and will want to impose retention policies and control distribution and access.
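As a sketch of what administrator-controlled retention might look like, assuming the one-directory-per-generation layout from the snapshot sketch above (the numbers are illustrative, not a recommendation):

```python
import shutil
from pathlib import Path

def apply_retention(snapshot_root: Path, keep_last: int = 30) -> None:
    """Keep only the newest `keep_last` generations and delete the rest.
    Real policies are usually richer (daily/weekly/monthly tiers), but the
    point is the same: the administrator, not the user, decides what survives."""
    generations = sorted(p for p in snapshot_root.iterdir() if p.is_dir())
    for old in generations[:-keep_last]:
        shutil.rmtree(old)
```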

Whether you have decided to sync or to back up, the next step is deciding whether you want to go North and create your own cloud data store or go South and use the data store bundled with your software.

The least expensive combination – and cost is a very important factor – is usually going to be in the South West quadrant, with sync software and the data store provided by the same vendor. If you are OK with automatically copying all of your files to a mirror site but are concerned about the implications of being in a “cloud crowd”, then you might prefer the North West quadrant and run sync software into a cloud data store that you control. That way there are no other organizations in your storage pool, so you will not be affected by their activities.

Backup can also go into a software and data store combination from a single vendor in the South East quadrant, or you can pick the software and choose where you want to put your cloud data store(s) in the North East quadrant.

The primary benefit of using a single vendor for both the backup software and the data store comes at the outset, when you are just getting started, because the two are tightly coupled.

Separating the software and the data store results in a different set of benefits, primarily in the areas of flexibility and integration with the rest of your cloud operations. If you are going to be deploying production systems into your selected cloud(s) then it definitely makes sense to incorporate backup “Nodes” into your cloud computing pool so you can programmatically distribute data to your systems and the content networks.
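As a rough illustration of that kind of programmatic distribution, here is a sketch that pushes a snapshot from a backup Node to peer nodes on the same virtual subnet. The addresses are hypothetical, and it assumes key-based SSH access and rsync on each node.

```python
import subprocess
from pathlib import Path

# Hypothetical peer nodes on the same virtual subnet; replace with your own.
APP_NODES = ["10.0.1.11", "10.0.1.12"]

def distribute(snapshot_dir: Path, remote_path: str = "/var/restore/") -> None:
    """Push a snapshot from the backup Node to each application node.
    rsync only transfers changed files on subsequent runs."""
    for host in APP_NODES:
        subprocess.run(
            ["rsync", "-a", f"{snapshot_dir}/", f"{host}:{remote_path}"],
            check=True,
        )
```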

Of course there are many other benefit and feature trade-offs between these options but this post is simply about identifying those first two critical steps and recognizing that they result in very different destinations.

So, to recap, the East-West decision is about the contents of the data store and how files are transmitted whereas the North-South decision is about the location and nature of the data store itself.
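One way to see that the two decisions are independent is to model them as two separate choices. This is purely an illustrative data structure, not part of any product:

```python
from dataclasses import dataclass
from enum import Enum

class Method(Enum):
    EAST_BACKUP = "admin-controlled, point-in-time snapshots"
    WEST_SYNC = "user-driven, continuously mirrored copies"

class DataStore(Enum):
    NORTH_SELF_MANAGED = "data store you create and control at the vendor(s) you choose"
    SOUTH_VENDOR_BUNDLED = "pre-built data store bundled with the software"

@dataclass
class DataPosition:
    method: Method      # the East-West decision: what goes in and how it moves
    store: DataStore    # the North-South decision: where the data store lives and who controls it

# Example: the least expensive combination mentioned above.
south_west = DataPosition(Method.WEST_SYNC, DataStore.SOUTH_VENDOR_BUNDLED)
```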

When viewed from a command-and-control standpoint, decoupling the method (software) from the repository yields higher command, while structured backups of specific data sets governed by policies grant more control than simple synchronization.

These benefits are valued differently by each organization. The key is to understand that there is wide variation between “cloud backup” solutions and to start off in the best direction to meet all of your objectives.

Having said that, there are some real differences between the real world and the virtual world. Most notably, in the latter you can be in two places at one time. This means that if your organization is deriving value from sync products, those products can themselves be backed up using a structured method. More on that next time.

Yet Another Reason To Run With the IaaS Cloud Crowd

The first thing to “consider” is that SLAs (Service Level Agreements) are not written for customers; rather, they are written by and for service providers. So unless you are a Fortune 1000 customer with unlimited legal resources, don’t ever expect to be compensated for data loss. You will be lucky to get a credit for the affected month.

The next factor is the cognizant-party nature embedded in SLAs. Specifically, there are two types of service providers: those which have multiple legal risk surfaces and those which have effectively only one (1).

For instance, a cloud backup vendor providing both the infrastructure AND a service has at least two (2) risk surfaces because they are involved with the actual assets of their customer(s). If that same vendor is providing the infrastructure and the service THROUGH a reseller channel then it jumps to at least three (3) surfaces.

Typically the SLAs for a vendor who is providing the actual service AND the infrastructure upon which that service runs are one-offs. In other words, there is an implicit individual SLA with each customer and it is architected to protect the provider by limiting responsibility because the vendor – or vendors if you include the channel – is/are technically “capable” of accessing the data. This means that the SLA must be designed to “thread the needle” and mitigate exposure on single accounts.

Keep in mind also that vendors with individual SLAs can triage which of their customers will receive immediate attention in times of crisis and which will wait their turn. This means the SLA is actually part of a larger profit/loss calculation at various points in your relationship.

Contrast this with an IaaS vendor who is not making any special arrangements based on customer identity and has absolutely no idea what is going on in any given cloud computer, and therefore need only address a single risk surface.

Since an IaaS vendor is only providing the infrastructure, with no influence over or visibility into the services running on that infrastructure, they can operate on a generic, one-size-fits-all basis. This usually manifests as a terms-and-conditions checkbox on the credit card form you fill out.

This way the vendor’s exposure is limited to nominal penalties for service outages and reminders that you will get nothing if your data is lost. On the surface this seems less attractive, but in reality it is actually better for most organizations, for a couple of reasons:

  • First, because the SLA is stretched over a large number of undifferentiated customers there is a very real incentive to keep the “crowd” from becoming dissatisfied and talking about it.
  • Second, because the risk surface is much broader and less specific, third party out-of-band requests are harder to satisfy because the IaaS provider, by design, does not have ready access to the customer’s data.

The point being that ownership is much clearer with IaaS than with SaaS, or even PaaS.

Using the cloud backup example again, if you run your own software then the ownership is clearly delineated. In fact, when you attach a drive to one of your cloud computers, it is bound by the same limitations as a normal drive. In other words, while it is attached to your cloud computer it cannot be attached to another computer, just like in your data center.
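For example, with an EC2-style block store the attach operation is explicit and exclusive; the identifiers below are hypothetical placeholders rather than working values.

```python
import boto3

ec2 = boto3.client("ec2")

# While the volume is attached to this instance it cannot be attached to another;
# a second attach call for the same volume fails until the volume is detached.
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```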

Of course there is lots more to this subject, but the basic premise is that sometimes being part of a crowd has its benefits.

The SMB Disaster Recovery ROI Myth

The data processing industry regularly laments the abysmally low adoption rate for Disaster Recovery among SMBs. We have all seen the figures claiming that most SMBs who lose their data are pretty much guaranteed to go out of business within five (5) years. That may be true, but it obscures the fact that historically most offsite data protection solutions have been ridiculously expensive, and to date the ROI has been so poor that it made no sense to spend precious cash to offset such a low-probability occurrence.

So the “myth” in this case is that SMBs have made bad decisions, when in fact, based on the numbers alone, they correctly sensed that it was not yet time to invest in offsite backup until just recently, with the advent of cloud computing and cloud backup. Just imagine if the fear, uncertainty, and doubt (FUD) gang had convinced every single small business to stand up an offsite position. That would have represented billions, possibly trillions, of dollars in wasted investment.

For instance, if a typical small business had somehow managed to “replicate their production environment” at a remote site or with a service provider five (5) years ago, they would now be stuck with, essentially, boat anchors. Unless they experienced a major outage during that period, they never benefited from the tens of thousands of dollars spent. Too many of those kinds of expenditures and you will definitely go out of business.

SMBs are street smart, and they can tell when something is in their best interest and when it is a good time to invest. With the advent of hypervisors and cloud computing, the ROI on cloud backup is finally moving into an appropriate range. For instance, if there is a 2% chance your business location will be compromised in a given year, then you can justify spending up to 2% of your profit on an alternate “data position”. So $100K of annual profit suggests break-even at a $2K annual investment, at which point the ROI is finally on your side.

Also, keep in mind that ROI calculations for disaster recovery are separate from those for local backup. This means that if you figure there is a 2% chance that you will delete a file you want back in the coming year (what are the chances?), then that is an additional 2% of justifiable spend for data protection, on top of the 2% chance that your physical plant will be compromised.
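Written out as arithmetic, using the illustrative 2% probabilities and $100K profit figure from above (examples, not recommendations), the two exposures simply layer on top of each other:

```python
annual_profit = 100_000   # illustrative figure from the example above

p_site_loss = 0.02        # chance the physical location is compromised in a year
p_file_loss = 0.02        # chance you delete a file you later want back

# Treat each risk as worth up to its share of profit; the justifiable
# annual spend is the sum of the two exposures.
site_budget = p_site_loss * annual_profit   # $2,000
file_budget = p_file_loss * annual_profit   # $2,000
total_budget = site_budget + file_budget    # $4,000 break-even point

print(f"Break-even annual spend: ${total_budget:,.0f}")
```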

Ideally you will deploy a single solution with linear costs and exponential benefits, so you can dramatically improve survivability simply by adding more of the exact same kind of Nodes that you run locally into your cloud backup network. Moreover, the costs of cloud computing are now so attractive that it actually makes sense to deploy your own software into your public cloud computer cluster rather than isolating your backup data in a completely separate hosted position.

Deploying a cloud backup Node within your community of cloud computers, on the same virtual subnet, keeps the ROI layering going because it adds points of competitive advantage. Examples include managed file transfer and content delivery, which can easily add another 1% to 3% to your ROI calculation, making the investment really easy to justify.

In other words, now is a great time, finally, to ensure your organization is both hard to kill and hard to beat.
