Is Your Cloud Backup Provider Fail-Proof? Risks You Should Know

In the realm of data security, cloud storage provides an appealing backup solution. The appeal is straightforward: delegate the complexity and expense of backup infrastructure to a service provider, and rest easier knowing that at least one replica of your backup data lives off-site. Several backup providers also offer immutable backups, designed to prevent attackers from erasing, encrypting, or tampering with your backups even if they gain administrative access to your backup system.

Although cloud backups are widely embraced today, it's crucial to recognize that they are not risk-free. The experiences of Carbonite and StorageCraft, two cloud backup vendors that lost some of their customers' backup data, illustrate the point.

In 2009, a multi-disk failure in Carbonite’s backup storage arrays resulted in the total loss of the most recent backups for 7,500 clients.

In 2014, human error during a cloud migration caused significant data loss at StorageCraft: a system administrator decommissioned and erased a server before it had been fully migrated to the cloud. The blunder destroyed metadata and triggered a scramble to help customers re-establish their backups. While the technical failures differ, the root cause in both incidents is the same: insufficient redundancy and resilience in how customer backup data was stored.

Carbonite's reliance on standalone RAID5 arrays exposed it to several failure scenarios. The most common is losing an entire array when a second drive fails before the rebuild of the first failed drive completes, which is exactly what happened to Carbonite. The risk grows with the number of drives in an array and with drive capacities, which are far larger than they used to be; and drives inevitably fail. RAID5 had been widely discouraged for years, yet that is what Carbonite used.
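To see why large RAID5 arrays are so fragile, a back-of-the-envelope calculation helps: during a rebuild, every surviving drive must be read in full, and a single unrecoverable read error (URE) during that read can abort the rebuild and lose the array. The sketch below uses illustrative numbers (an 8-drive array of 2 TB disks with a consumer-class URE rate of 10^-14 per bit), not Carbonite's actual hardware:

```python
# Hedged, illustrative estimate: probability that a RAID5 rebuild encounters
# at least one unrecoverable read error (URE) while reading every surviving
# drive in full. Parameters are assumptions for illustration only.

def rebuild_failure_probability(drives: int, drive_tb: float,
                                ure_per_bit: float = 1e-14) -> float:
    """P(at least one URE across a full read of the surviving drives)."""
    bits_to_read = (drives - 1) * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ure_per_bit) ** bits_to_read

# An 8-drive array of 2 TB disks: the rebuild reads ~14 TB of data.
p = rebuild_failure_probability(drives=8, drive_tb=2.0)
print(f"Chance the rebuild fails: {p:.0%}")
```

With these assumed parameters the rebuild fails roughly two times in three, which is why larger arrays and bigger drives push operators toward RAID6 or erasure coding with more parity.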

Concentrating the data of roughly 7,500 customers on a single array in a single location was a disaster waiting to happen. A fire, flood, or electrical fault at that site could have wiped out those customers' data outright, and data center fires are not hypothetical, as the 2021 blaze at OVHcloud's Strasbourg facility demonstrated. Storage providers should instead build on geo-redundant object storage such as Amazon S3, Azure Blob Storage, or Google Cloud Storage.

StorageCraft's design had a similar weakness. That decommissioning a single server destroyed all of the metadata points to a lack of geo-redundancy and fault tolerance in its backup storage design: one server held the only replica of the data needed to reassemble every customer backup. That server, too, could have been destroyed by a fire, flood, or electrical fault; in the event, a simple human error did the job. It is a reminder that human error and malfeasance rank first and second among the leading causes of backup data loss.

As someone well versed in data backup, I find such events disheartening. The 3-2-1 backup principle says to keep three copies of your data on two different types of media, with at least one copy off-site. A competent backup provider should design its cloud with multiple layers of redundancy, geo-replication, and fault isolation; anything less puts customer data at risk. The destruction of a single replica in any backup environment should never take the other replicas down with it.
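The 3-2-1 rule is mechanical enough to audit automatically. Below is a minimal sketch of such a check; the `Replica` record and the example layouts are hypothetical, not anyone's real inventory format:

```python
# Minimal sketch of an automated 3-2-1 audit: given where each replica of a
# backup set lives, decide whether the set satisfies the rule. The data
# model and examples are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Replica:
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool   # stored away from the primary site?

def satisfies_3_2_1(replicas: list[Replica]) -> bool:
    """Three copies, on at least two media types, at least one off-site."""
    return (len(replicas) >= 3
            and len({r.media for r in replicas}) >= 2
            and any(r.offsite for r in replicas))

# A single array in one data center, the Carbonite-style layout:
single_array = [Replica("disk", offsite=False)]
# Local disk + local tape + geo-replicated object storage:
layered = [Replica("disk", False), Replica("tape", False),
           Replica("object-storage", True)]

print(satisfies_3_2_1(single_array))  # False
print(satisfies_3_2_1(layered))       # True
```

Note that the check treats geo-replicated object storage as one off-site copy; a stricter audit might also require the replicas to live in different failure domains.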

When a misfortune of this sort occurs, how the company handles it is also scrutinized. Rather than admit its own mistakes, Carbonite sued its storage vendor, arguing that the fault lay not in Carbonite's system but in the vendor's storage arrays. The terms of the out-of-court settlement remain undisclosed. Carbonite's CEO also tried to downplay the incident publicly, asserting that only backup data was lost, not production data. That argument would have been cold comfort to the 54 companies that had lost production data and were depending on those backups to restore it.

StorageCraft, by contrast, responded more commendably. Its CEO owned up to the mistake and acknowledged the incident's serious implications, and the company went to great lengths to help customers rebuild their backups, including shipping physical drives to speed up the data transfer. This expedited process, known as seeding, is commonly used in place of pushing an initial full backup over the internet.

So, what can businesses planning to employ cloud backup services learn from this? Here are some pointers.

The cloud is a powerful tool for backup modernization and cost efficiency, but like every infrastructure decision it demands due diligence to distinguish genuinely enterprise-grade providers from those cutting corners. The Carbonite and StorageCraft incidents are eye-openers about what can go wrong, and a reminder to all of us in the backup industry to live up to our goal of being data protection superheroes. In short: trust, but verify.
