Table of Contents
- Report Methodology
- Design Criteria
- Additional Considerations
- Solution Profile
- Analyst’s Take
- About Enrico Signoretti
- About GigaOm
Many organizations are investing heavily in the cloud to improve their agility and optimize the total cost of ownership of their infrastructure. They move applications and data to the public cloud to take advantage of its flexibility, only to discover that, when not properly managed, public cloud costs can quickly spiral out of control.
Data storage and protection are among the biggest contributors to many cloud bills. Many of the services available in the public cloud need to be enhanced and hardened to deliver the reliability and availability of enterprise storage systems, and the tools that manage the protection of data saved in them need to go well beyond simple snapshot-based data protection.
Even though snapshots provide a good mechanism to protect data against basic operational incidents, they are not designed to meet enterprise needs and can be particularly expensive when managed without the proper tools and awareness of the environment. At the same time, traditional enterprise backup solutions are not optimal either: they do not provide the necessary speed and flexibility, and they add unnecessary complexity to the picture.
Cloud-native backup solutions are designed to add enterprise-class backup functionalities to the public cloud while improving data management processes and costs. Compared to traditional (agent-based) and snapshot backup solutions, cloud-native data protection offers several advantages and simplifies operations.
In this regard, users should take several important aspects into account:
- Speed: When properly integrated, cloud-native backup can take advantage of snapshots and other mechanisms available from the service provider to speed up backup and restore operations.
- Granularity: One of the biggest limitations of snapshots is the inability to restore single files and database records, one of the most common requirements. To do so, the user has to mount the snapshot on a new virtual machine instance, recover the necessary files, and then kill the instance. This process is both slow and error-prone.
- Air gap: Creating distance between the source and the backup target is at the foundation of every safety and security practice in data protection, especially with the increasing number of ransomware attacks. Snapshot management services in the cloud do not separate snapshots from the source storage system, exposing the data to potential attacks or to the risk of a major service failure.
- Operation scalability: Snapshots are good for making quick backup copies of data, but they tend to show their limits pretty quickly. Most of the services available in the market make it difficult to coordinate snapshot operations and guarantee application consistency. At the same time, managing a large number of snapshots can quickly become complicated and, while automation exists, it usually lacks the user-friendliness necessary to manage large-scale environments. Agent-based solutions have a different set of challenges, but the scalability of operations can easily become a problem as well: with agents, everything must be planned in advance, and the agent is yet another software component that has to be installed and managed over time.
- Cost and TCO: Snapshots are relatively cheap to create, but they become expensive to manage over time, creating hidden costs that are difficult to eliminate. With agent-based solutions, the user must also factor in the cost of the additional resources needed to run backup operations and manage the backup infrastructure.
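To make the hidden-cost point above concrete, the back-of-the-envelope sketch below models how retained daily snapshots accumulate storage. All figures (volume size, daily change rate, per-GB price) are hypothetical placeholders, not actual provider pricing:

```python
# Hypothetical illustration of snapshot cost growth. The volume size,
# change rate, and per-GB price below are made-up assumptions,
# not real cloud-provider pricing.

def snapshot_storage_gb(base_gb, daily_change_gb, retained_days):
    """Approximate GB stored by incremental snapshots: one full copy
    plus the changed blocks kept for each retained daily snapshot."""
    return base_gb + daily_change_gb * retained_days

def monthly_cost_usd(stored_gb, price_per_gb_month):
    """Monthly storage bill for the retained snapshot data."""
    return stored_gb * price_per_gb_month

# Assumed example: 1 TB volume, 2% daily change, 30 daily snapshots
# retained, $0.05/GB-month (placeholder figure).
stored = snapshot_storage_gb(1024, 1024 * 0.02, 30)
cost = monthly_cost_usd(stored, 0.05)
print(f"{stored:.0f} GB stored, ~${cost:.2f}/month")
```

Even this simplified model shows stored data growing well past the source volume's size as retention lengthens, before any management overhead is counted.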
The most efficient way to operate in the public cloud is to adopt solutions specifically designed in a cloud-native fashion. In this context, the best data protection solution is one that can take advantage of the services available from the cloud provider and operate with them to build a seamless user experience. This means having the ability to operate with snapshots, organize them efficiently, and have full visibility of data for recovery operations. At the same time, enterprise users expect to find features and functionalities similar to what they have on their traditional backup platforms, including application awareness, analytics, reporting, and so on.
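One concrete way to "organize snapshots efficiently" is a retention policy that thins older snapshots instead of keeping every one. The sketch below implements a minimal grandfather-father-son selection; the retention counts and the choice of Sunday/first-of-month markers are illustrative assumptions, not a specific product's policy:

```python
# Minimal sketch of a grandfather-father-son (GFS) retention policy:
# keep the most recent dailies, plus a few weekly (Sunday) and
# monthly (first-of-month) snapshots. Counts are illustrative defaults.
from datetime import date, timedelta

def select_retained(snapshot_dates, daily=7, weekly=4, monthly=3):
    """Return the subset of snapshot dates a GFS policy would keep."""
    ordered = sorted(snapshot_dates, reverse=True)  # newest first
    keep = set(ordered[:daily])                     # most recent dailies
    # Most recent Sundays (weekday() == 6) as weekly keepers.
    keep.update([d for d in ordered if d.weekday() == 6][:weekly])
    # Most recent first-of-month dates as monthly keepers.
    keep.update([d for d in ordered if d.day == 1][:monthly])
    return sorted(keep)

# Example: 90 consecutive daily snapshots ending on a fixed date.
today = date(2021, 6, 30)
snaps = [today - timedelta(days=i) for i in range(90)]
retained = select_retained(snaps)
print(f"{len(snaps)} snapshots -> keep {len(retained)}")
```

A policy like this keeps recovery points spread across time while cutting the number of retained snapshots dramatically, which is exactly the kind of organization that becomes unmanageable when done by hand at scale.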
About the GigaOm Use Case Scenario Report
This GigaOm report focuses on a specific use case scenario and on best practices for adopting new technology. It helps organizations of all sizes understand the technology and apply it efficiently to their needs. The report is organized into two sections:
Design criteria: A simple guide that describes the use case in all its aspects, including potential benefits, challenges, and risks during the adoption process. This section also includes information on common architectures, how to create an adoption timeline, and considerations about interactions with the rest of the infrastructure and processes in place.
Solution profile: A description of a solution that has a proven track record with the technology described in these pages and with this specific use case.