1. Executive Summary
With options like multicloud, hybrid cloud, or on-premises to choose from, we’ve evolved to the point where platform choice and flexibility can become an organization’s key differentiators. Multicloud, in particular, enables organizations to innovate by keeping data and applications mobile, and leveraging a cloud provider’s best-in-class services to achieve their business needs. Those needs could be growth, security compliance, or risk mitigation. Additionally, multicloud can be an appealing prospect to meet availability, business continuity, and disaster recovery requirements.
However, the mobility, flexibility, and availability benefits of multicloud solutions often get mired in complexity, and this is especially evident in cloud storage subsystems. Moving data between clouds efficiently and securely is no trivial task. Each cloud service provider has different management tools, features, and workflows. Often, short-staffed teams get forced to context switch between vastly different environments. Duplication of effort and re-work comes at the expense of innovation.
On the other hand, homogeneity in the infrastructure stack supports consistent workflows and automation, and eases manageability for teams. Better business outcomes often come from consistency and simplicity, and this can translate to time and money savings.
This report explores the capabilities and benefits that a homogeneous storage platform can bring to an organization running multicloud workloads. We evaluate NetApp Cloud Volumes ONTAP (CVO), an infrastructure-as-a-service solution with tie-ins to NetApp BlueXP, a central control plane.
To this end, we tested whether CVO could help organizations running multicloud workloads with mobility requirements, consistent and simplified workflows, and optimized cloud spend. We learned that:
- NetApp simplified moving data between clouds with drag-and-drop ease. When pairing CVO with Cloud Sync, the workflow for moving data between multiple clouds became much more straightforward and streamlined.
- NetApp supports many low-effort ways to save on cloud spend, like storage efficiencies and object tiering. The native offerings of Google Cloud and Amazon Web Services (AWS) often lacked feature parity with CVO in these areas.
This GigaOm field test report helped us assess how NetApp fared as a cohesive storage platform for IT teams managing data and applications in multicloud environments. We determined that NetApp CVO provides a measurable total cost of ownership (TCO) advantage for customers who need data mobility workflows.
Among the tested cost benefits, we found:
- 50% reduction in storage management labor costs
- 42% reduction in incident and operations labor costs
- Outage-free adjustment of storage properties with CVO vs. 100% outage rate with cloud native
Additionally, CVO brings IT teams simplicity and data agility benefits that transcend TCO. Complexity and context switching between clouds and workflows can lead to employee burnout and poor retention. These staffing problems are difficult to quantify.
As a result, CVO can be viewed as an essential element for enterprises that need the agility and mobility that multicloud workloads offer but don't want complexity to overburden their teams and muddy their strategic objectives. NetApp is building such a user experience by partnering with cloud providers and building a robust feature set across public, hybrid, and private clouds, as well as on-premises infrastructure.
2. What Drives Multicloud?
The public cloud foretold the death of on-premises data centers, or so it was said. In reality, organizations continue to rely on on-premises infrastructure and workloads for a wide range of operational, technical, and cost-driven reasons. Further, on-premises continues to thrive alongside cloud operations in the form of hybrid cloud implementations, which support workloads running across public cloud and on-premises (often private cloud) infrastructure.
Multicloud becomes a compelling solution for an organization when you factor in the following drivers:
- As public clouds increasingly differentiate, multicloud is a way for organizations to stay agile and take advantage of each cloud provider’s best-in-class offerings. Moving workloads to the optimal service can enable an organization to differentiate itself and maximize strategic outcomes from the value of its data and applications.
- Cloud and multicloud can become a strategy for business continuity and disaster recovery without procuring new hardware. Semiconductor shortages and supply chain problems can complicate IT infrastructure procurement.
- Leveraging multicloud allows for faster failover in high availability scenarios by distributing data across various cloud vendors. Additionally, this strategy can mitigate the risk of widespread outages of a single cloud solution, like the AWS outages in December 2021.
- By not relying solely on a single cloud, organizations lay the groundwork for protection against cloud lock-in. Ask any organization that’s gone through repatriation, and they’ll tell you that without pre-planning, changing sole cloud providers can be a Sisyphean task and a cost-prohibitive process.
Multicloud is a continuation of the shift to flexible operating models. As a result, organizations get less hung up on the either-or nature of the cloud and instead shift their focus to optimization, innovation, and driving business outcomes. These desired objectives can be security, compliance, growth, or whatever success criteria the organization has defined.
3. Challenges: When Multicloud Means Multi-Problems
For all the value enterprises hope to gain from adopting a multicloud strategy, there are trade-offs to weigh. The top considerations for organizations with a multicloud strategy include:
- How to avoid data silos and enable data mobility.
- Data mobility is about smashing silos. Without efficient and automatable workflows to move data and applications, however, multicloud can create a new silo in each cloud provider.
- Public cloud providers have no incentive to help an organization move data to other clouds. Getting data where it needs to be is often complex, slow, and error-prone. Time spent waiting for data or applications to move or sync is an opportunity wasted on furthering business outcomes.
- Managing data using cloud-specific tools and homegrown data replication has a higher risk of data loss, especially if each project group or DevOps team uses different processes or workflows.
- How to avoid rework or duplication of workflows.
- Every cloud service provider has different opinions and options when it comes to storage solutions, with differences in protocols, UIs/CLIs/APIs, workflows, performance, and ways to optimize. Accounting for the differences among clouds becomes a people and process problem.
- Divergent data management approaches increase costs for operations teams that must maintain operational dashboards and respond to outages.
- How to balance innovation and spend.
- Optimizing cloud spend can be challenging with a single cloud. More clouds mean different cloud spend optimization workflows with more ways to waste money on cloud consumption.
Creating approved enterprise application templates using NetApp's solutions can speed up deployments and increase feature velocity. Templates save development teams from the guesswork of sizing and performance planning. Additionally, operations teams can fix sizing or performance issues transparently to DevOps teams, with no impact on continuous deployments.
To minimize the cost and complexity trade-offs that multicloud often brings, IT teams benefit when there's reuse in their infrastructure stack (networking, compute, or storage). Further, a homogeneous solution can mean less context switching for teams. For example, AWS and Google Cloud are vastly different in terms of user experience, APIs, and terminology. Sameness in the stack can free teams to refine and optimize processes instead of duplicating effort on refactoring and rework. But real obstacles may block teams from achieving that goal.
First, let’s not forget the people aspect of multicloud. Moving to another cloud can mean time spent re-architecting solutions, designing new workflows, and learning new management tools. Switching between clouds isn’t time spent on innovation or adding value to the business. Likewise, using two different clouds at least doubles the change rate for DevOps teams, forcing them to stop work on business objectives to address cloud vendor changes.
Consistency in the storage stack can also help mitigate the creation of silos and reduce slow and costly data movement time. Of course, the problems are not unique to multicloud workloads. However, the challenges multiply in these use cases.
The storage stack is a good place for an enterprise to bring consistency to its environment. Also, because data is any organization’s most important asset, it makes sense to maximize value there.
With NetApp, there’s the option to keep the same data on-premises and sync it to wherever refreshed data is needed. NetApp provides multiple ways to replicate data or keep it synchronized across numerous locations. Making copies of data quickly for quality assurance, security, or non-production environments is a trivial task with NetApp.
This reusability and ubiquity, coupled with a consistent set of easily consumable services that run on multiple platforms, makes NetApp a simpler solution without the inherent complexity of juggling multiple cloud vendors' native offerings.
Enterprises make data and applications mobile so that they can run in the cloud that’s most convenient or most suitable. But data and application mobility confers other benefits:
- Reduce complexity and cloud spend
- Unburden overworked staff
- Move data and applications to where they need to be sooner
- Reduce training costs
- Reduce complexity in monitoring and situational awareness
4. Benefits of a Consistent Storage Platform
A consistent experience within a storage platform means homogeneity within the ever-shifting heterogeneity of multicloud, where applications and data need to be mobile enough to run anywhere. A cohesive storage platform will enable enterprises to move and migrate applications and data with ease and automation. Data mobility is more than replication or syncing. It also includes the ability to clone and take snapshots. There is no multicloud without data mobility.
Furthermore, teams benefit from the same user interfaces, APIs, and tooling. As a result, the entire data/application lifecycle becomes streamlined, and teams are better able to create automatable and repeatable workflows, as shown in Figure 1.
Figure 1. Streamlined Data/Application Lifecycle
Let’s explore one phase of the application lifecycle: optimization, where teams typically take a closer look at security and costs.
In terms of security, repeatable and cloud-independent workflows to secure workloads mitigate the risk of exposure or data loss. The entire enterprise benefits from needing to solve only one storage pattern for all its applications.
With a consistent NetApp storage layer, security teams have a repeatable process to test APIs and open-source software for security vulnerabilities. Additionally, security teams can fine-tune their security hardening processes against NetApp instead of creating new strategies across disparate storage solutions.
Each cloud also has its specific method to adjust, control, and calculate cloud spend, so the enterprise benefits from having a consistent process for monitoring and optimizing costs. Finally, the best cost optimization should be easy to implement and not distract teams from focusing on driving business outcomes.
By implementing a homogeneous platform with strategies that support flexible consumption models and data movement choice, solutions become simpler to configure, support, and optimize.
A cohesive storage platform enables teams to drive business outcomes throughout the application lifecycle process.
5. NetApp CVO: A Coherent Storage Platform
Complexity, silos, and costs threaten any real gains or strategic advantages an organization can realize with multicloud mobility for data and apps. With NetApp, teams end up with a uniform platform that can unify the heterogeneity of cloud provider experiences and tame multicloud chaos. This consistency lets teams reuse existing management and automation workflows while taking advantage of a public cloud’s best-in-class services.
NetApp has built a consistent and cohesive ecosystem around ONTAP. ONTAP is a unified storage platform that can host data across multiple on-premises and cloud environments. Supported protocols include NFS, SMB/CIFS, and iSCSI. The focus of this benchmark report, NetApp CVO, runs on virtual machine instances in AWS, Google Cloud, or Azure.
Because CVO is still ONTAP under the covers, it supports the same features as ONTAP, including multi-protocol SMB/NFS support for Windows and Linux clients, storage efficiencies, snapshots, and clones. Teams managing on-premises ONTAP don’t have any leaps to make when working with CVO.
Delivered as an IaaS offering such as CVO, ONTAP can be a consistent platform layer for teams battling the cloud’s constant change and heterogeneity. BlueXP is a centralized control plane that teams can use to deploy, manage, and optimize ONTAP in many environments. (Figure 2)
Figure 2. Consistency of NetApp’s Ecosystem with ONTAP
- NetApp® BlueXP™ is NetApp’s central control plane and single pane of glass to manage all of an organization’s ONTAP instances wherever they run. A single pane of glass simplifies administrative tasks, as does BlueXP’s drag-and-drop GUI. BlueXP is a hub to consume and access NetApp’s auxiliary service offerings like Cloud Sync, Cloud Backup Service, and more. To date, there are 19 different services offered. BlueXP can be reached at https://bluexp.netapp.com or the public IP of the ONTAP connector.
- Cloud Sync and SnapMirror, accessible from BlueXP, are consistent ways to synchronize and move data between working environments. SnapMirror will be familiar to on-premises ONTAP storage admins; its APIs are exposed through BlueXP and can be used to manage any ONTAP instance reachable from BlueXP.
Because optimizing cloud costs is the pragmatic part of a multicloud strategy, we’d be remiss if we didn’t mention some of the other key services or tools for saving on cloud spend.
- Spot by NetApp is a service offering in NetApp’s portfolio that integrates with CVO and other compute instances to automate deployments and optimize resource allocations over the application’s lifecycle, including financial management (FinOps) benefits.
- The NetApp TCO Calculator is a detailed app that takes instance types, tiering capacity, snapshots, and other variables as inputs to calculate whether CVO will deliver storage savings over native cloud block and file offerings.
6. Test Criteria
Simplicity and consistency are keys to any successful implementation, but this is especially true in multicloud environments. Below, we have a set of tests where we evaluate consistency in NetApp CVO on Google Cloud and AWS. We’ll also compare functionality to AWS and Google Cloud block and file offerings.
Challenge #1: Moving Data Between Clouds
To assess the data mobility features of Cloud Volumes ONTAP, we conducted two tests. The first involved cloning data and the second involved syncing files between clouds.
Challenge #1a: Move Data Between Google Cloud and AWS for Cloning or Disaster Recovery
Moving data through BlueXP and associated services can be done with drag-and-drop ease, using scheduling and automation workflows. SnapMirror is well known to existing NetApp customers; it’s the same SnapMirror that ONTAP customers have relied on to synchronize and move data for decades. The only difference is that the APIs can be called and managed from BlueXP’s centralized control plane.
AWS and Google Cloud don’t support cloning natively. To make data replicas, you need a third-party backup solution to assist the process. If you’ve ever been a backup administrator, you know how often restores happen.
BlueXP and SnapMirror support exact replicas and make cross-cloud cloning and DR possible. Additionally, because SnapMirror replicates data in its compressed and deduplicated form, data moves faster, and less network traffic is required to keep data replicas in sync.
In our tests, we replicated a volume from AWS CVO to Google Cloud CVO and back again. In BlueXP, we could drag and drop to enable the direction of replication. First, we created an NFS volume on AWS and replicated it to Google Cloud. Then, we mounted the volume on Google Cloud and confirmed that the files were an exact match. Table 1 shows the results.
Table 1. Data Movement Findings
| | Backup Solution w/o CVO | NetApp with CVO |
|---|---|---|
| Cost | High | Low |
| Automation | 2 | 3 |
| Ease of Use | 2 | 3 |
| Time Savings | 2 | 3 |

Source: GigaOm 2022
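BlueXP drives this workflow through ONTAP's REST APIs. As a rough illustration of what a scripted equivalent might look like, the sketch below builds (but does not send) the request for creating a SnapMirror relationship via the ONTAP 9 REST endpoint `/api/snapmirror/relationships`; the cluster address, SVM, and volume names are hypothetical placeholders.

```python
# Sketch: build the REST request that would create a SnapMirror
# relationship between a CVO volume in AWS and one in Google Cloud.
# Endpoint and payload shape follow the ONTAP 9 REST API; the cluster
# hostname, SVMs, and volume paths below are hypothetical.

def snapmirror_create_request(dst_cluster: str, src_path: str, dst_path: str) -> tuple[str, dict]:
    """Return the URL and JSON body for POST /api/snapmirror/relationships."""
    url = f"https://{dst_cluster}/api/snapmirror/relationships"
    payload = {
        "source": {"path": src_path},       # e.g. "svm_aws:vol_nfs1"
        "destination": {"path": dst_path},  # e.g. "svm_gcp:vol_nfs1_dst"
        "state": "snapmirrored",            # establish and initialize the mirror
    }
    return url, payload

url, body = snapmirror_create_request(
    "cvo-gcp.example.com", "svm_aws:vol_nfs1", "svm_gcp:vol_nfs1_dst"
)
# The request would then be POSTed with credentials, e.g.:
#   requests.post(url, json=body, auth=(user, password))
```

Reversing replication, as we did in the test, amounts to issuing the same call with source and destination paths swapped.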
Challenge #1b: Synchronize Files Between Clouds
The traditional solution is typically to set up VPNs and use third-party software like Beyond Compare or rsync. Additionally, AWS offers DataSync, but it supports fewer endpoints than Cloud Sync and is confined to the AWS control plane. Table 2 shows the results.
Table 2. Synchronizing Files
| | Third-Party Software w/o CVO | NetApp with CVO |
|---|---|---|
| Cost | High | Low |
| Automation | 1 | 3 |
| Ease of Use | 2 | 3 |
| Time to Move Data | 1 | 3 |

Source: GigaOm 2022
NetApp leverages BlueXP and Cloud Sync. Cloud Sync can sync SMB/NFS and object storage, and is hardware agnostic. In addition, Cloud Sync supports non-cloud environments and also non-NetApp solutions. Like SnapMirror in our previous test, configuring data movement relationships was done with drag-and-drop ease in BlueXP.
Cloud Sync requires a data broker instance with network connectivity to the source and destination. However, setting that up was not difficult.
One of the significant advantages of using a data broker for synchronization is that Cloud Sync can leverage parallelism and scale up data brokers when data needs to move more quickly. Additionally, Cloud Sync supports data-in-transit encryption. This type of encryption can protect against man-in-the-middle attacks.
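To make the parallelism idea concrete, here is a toy sketch (not NetApp's implementation) of a broker-style transfer: a thread pool copies a file tree concurrently, and raising the worker count is analogous to scaling up data brokers.

```python
# Toy illustration of broker-style parallelism: copy every file under
# src to dst using a pool of workers. This mimics the concept of
# scaling transfer workers; it is not NetApp's actual data broker.
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parallel_sync(src: Path, dst: Path, workers: int = 4) -> int:
    """Copy all files under src to dst concurrently; return file count."""
    files = [p for p in src.rglob("*") if p.is_file()]

    def copy_one(p: Path) -> None:
        target = dst / p.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(p, target)  # copy2 preserves timestamps, like a sync tool

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, files))  # drain the iterator to surface errors
    return len(files)
```

In a real broker, each worker would also compare checksums or modification times to skip unchanged files; the sketch copies unconditionally for brevity.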
We only looked at file syncs between AWS and Google Cloud for these tests. However, Cloud Sync supports many endpoints and can be used with non-NetApp systems. You can view the supported sync relationships here.
It’s one thing to move data but quite another to do it via simple drag and drop. Cloning applications and syncing data are often slow and manual processes, and it’s challenging to move between protocols. Cloud Sync and SnapMirror with BlueXP put NetApp ahead of third-party offerings for data movement between clouds.
Challenge #2: Reducing Cost via Efficiency and Tiering
Next, we explore optimization of cloud storage costs. There are three primary ways to achieve this:
- Data efficiencies like deduplication, compression, compaction, and thin provisioning
- Minimization of storage required for snapshots and cloning data
- Tiering or moving primary storage to lower-cost storage like object storage
For this challenge, we evaluate these three approaches to reduce cloud storage spend for CVO compared to options available to block and file offerings on AWS and Google Cloud. Specifically, we looked at AWS Elastic File System (EFS), Elastic Block Store (EBS), and Google Cloud Filestore and Persistent Disk.
Challenge #2a: Storage Efficiencies Support
In Table 3, we compare the maturity and completeness of each capability on a 0 to 3 point scale, as follows:

- 0: Capability is not supported.
- 1: Capability is limited, lacking maturity and completeness.
- 2: Capability is good enough for most use cases, with room for improvement.
- 3: Capability is exceptional and exceeds the competition.
Table 3. Storage Efficiencies Capabilities Compared
| | CVO | AWS | Google Cloud |
|---|---|---|---|
| Data Efficiencies | 3 | 0 | 0 |
| Snapshots and Clones | 3 | 2 | 2 |
| Tiering to Object | 3 | 1.5 | 0 |
| Aggregate Score | 9 | 3.5 | 2 |

Source: GigaOm 2022
Data efficiencies like deduplication, compression, and compaction (storing more small files in a block) can enable organizations to store more in a smaller storage footprint. In addition, thin-provisioning lets enterprises over-allocate storage capacity and align costs closer to the amount of storage used.
When we looked at these data efficiency features, only NetApp’s storage platform let customers enable features like deduplication, compression, and thin-provisioning. While AWS and Google Cloud support these features internally for their infrastructure operations, cloud providers bill customers for provisioned storage with no savings based on data efficiencies. Therefore, we found only CVO had the potential to lower the overall cost per GB for cloud storage with data efficiency features.
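To see why block-level deduplication shrinks the billable footprint, consider a toy model that hashes fixed-size blocks and stores only the unique ones. The 4 KiB block size and sample data are illustrative, not a claim about ONTAP's internals.

```python
# Toy model of block-level deduplication: split data into fixed-size
# blocks, hash each one, and keep only unique blocks. The savings
# ratio approximates how efficiencies cut the footprint you pay for.
import hashlib

BLOCK = 4096  # 4 KiB blocks, a common filesystem block size

def dedup_footprint(data: bytes) -> tuple[int, int]:
    """Return (logical_bytes, physical_bytes_after_dedup)."""
    unique = {
        hashlib.sha256(data[i:i + BLOCK]).digest()
        for i in range(0, len(data), BLOCK)
    }
    return len(data), len(unique) * BLOCK

# Example: 100 blocks of data with only 10 distinct patterns (think
# VM images or home directories full of repeated content).
data = b"".join(bytes([n % 10]) * BLOCK for n in range(100))
logical, physical = dedup_footprint(data)
print(logical, physical, f"{1 - physical / logical:.0%} saved")  # 90% saved
```

The point of the comparison above is who pockets this difference: with CVO the customer's provisioned capacity shrinks, while with native cloud block and file services the provider bills on the logical size regardless.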
Snapshots are ubiquitous and can be used for quick recoveries; both AWS and Google Cloud support them for block storage and charge a nominal fee for them, while CVO does not.
Additionally, snapshots and clones happen instantly on CVO. The snapshot process kicks off the creation of an index of all of the active data blocks on a volume at that particular point in time. No additional storage is consumed except for changes after the snapshot or clone creation. By contrast, snapshots get copied out to extra storage on both AWS and Google Cloud.
Clones, on the other hand, are writable snapshots used for testing or disaster recovery. In Google Cloud, snapshots can be copied to create clones; like any manual process, this is slow and error-prone. In AWS, snapshots get copied to S3 and can then be copied back to block storage. These processes in AWS and Google Cloud are cumbersome. With NetApp, clones are effectively writable snapshots: data is available instantly and does not require a manual copy.
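The snapshot-as-index behavior described above can be sketched with a minimal copy-on-write model: a snapshot merely records pointers to the current blocks, and new capacity is consumed only when blocks change afterward. This is a conceptual toy, not ONTAP's actual on-disk (WAFL) layout.

```python
# Minimal copy-on-write model: a volume maps block numbers to block
# IDs in a shared pool; a snapshot is a frozen copy of that map, and a
# clone is the same map made writable. Conceptual only.
class Volume:
    def __init__(self):
        self.pool = {}    # block_id -> data (shared storage pool)
        self.index = {}   # block_no -> block_id (active file system)
        self._next = 0

    def write(self, block_no: int, data: bytes) -> None:
        self.pool[self._next] = data       # new writes go to new blocks
        self.index[block_no] = self._next
        self._next += 1

    def snapshot(self) -> dict:
        return dict(self.index)            # instant: copy pointers, not data

    def clone(self, snap: dict) -> "Volume":
        c = Volume()
        c.pool = self.pool                 # share the block pool
        c.index = dict(snap)               # writable view of the snapshot
        c._next = self._next + 1_000_000   # illustrative ID range split
        return c

vol = Volume()
vol.write(0, b"alpha")
snap = vol.snapshot()   # consumes no extra block storage
vol.write(0, b"beta")   # the old block stays around for the snapshot
clone = vol.clone(snap) # clone still sees the pre-change data, instantly
```

Contrast this with the copy-out approach: making the clone usable in AWS or Google Cloud requires physically copying every block to new storage before any of it can be read.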
Challenge #2b: Adjust Tiering and Performance Characteristics Workflows
By tiering to object, or sending low-priority application data to less expensive storage, organizations have an easy way to save on cloud storage spend. We will delve into the workflows for adjusting performance characteristics and tiering data to object storage.
For these tests, we assess support for tiering data from more costly primary storage to economical object storage. To assess the process for enabling tiering on both block and file protocols, we broke down the chain of tasks involved and assigned each task a score based on the work required to complete it. The scores reflect the relative difficulty and time needed for each task, with a score of 1 reflecting the least effort required (as little as a button click) and a score of 5 the most demanding.
Note that we were unable to provide a comparative assessment of the task chain across providers, as neither AWS nor Google Cloud supports comparable data tiering. Our assessment of CVO shows that the tasks for enabling tiering on file and block protocols range from easiest (1) to moderate (3) in difficulty. Table 4 shows the analysis.
Table 4. Task Workloads for Tiering File and Block Protocols
| Platform | Work Required | Task Description/Notes |
|---|---|---|
| CVO (AWS & Google Cloud)* | 1 | From BlueXP, go to the canvas and select the CVO instance. |
| | 3 | Check prerequisites and adjust accordingly: tiering is enabled on the aggregate that contains the volume, and the required networking relationships are established. |
| | 1 | Select a volume and click Change Disk Type & Tiering Policy. |
| | 1 | Enable tiering for the selected volume. |
| Google Cloud | N/A | Not supported for block or file protocols. |
| AWS | N/A | Not supported for block protocols. Limited tiering support to infrequently accessed EFS for file protocols. |

Source: GigaOm 2022

* Also supported on Azure and on-premises ONTAP
NetApp customers can complete both workflows from the same starting point in BlueXP. However, only NetApp supports tiering to object storage. With CVO, implementing this tiering setting is as easy as logging into BlueXP and following four short steps.
Our tests found that AWS supports limited tiering for file storage but not tiering to object storage. For example, AWS EFS can tier to another class of infrequently accessed file storage. Google Cloud, on the other hand, does not support tiering to object storage, and moving data to object from Persistent Disk or Filestore is a manual process.
Another advantage of the NetApp solution is that any adjustments, like changing storage classes, required no downtime and did not affect volume operations. By contrast, AWS and Google Cloud solutions produced measurable downtime when changing performance characteristics. For instance, non-elastic EBS volumes must be detached and taken offline to change storage class.
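The economics behind tiering are simple to model: cold data moves to a cheaper tier and the bill shrinks proportionally. The sketch below estimates monthly savings from tiering cold data to object storage; the $/GB-month rates are placeholder figures for illustration, not any provider's actual prices.

```python
# Estimate monthly storage cost before and after tiering cold data to
# object storage. The rates are placeholder $/GB-month figures chosen
# for illustration, not actual AWS/Google Cloud/NetApp pricing.
def tiering_cost(total_gb: float, cold_fraction: float,
                 primary_rate: float = 0.10,
                 object_rate: float = 0.02) -> tuple[float, float]:
    """Return (cost_without_tiering, cost_with_tiering) per month."""
    before = total_gb * primary_rate
    cold = total_gb * cold_fraction
    after = (total_gb - cold) * primary_rate + cold * object_rate
    return before, after

# Example: 10 TB of volumes where 80% of the data is rarely accessed.
before, after = tiering_cost(10_000, cold_fraction=0.8)
print(f"${before:,.0f} -> ${after:,.0f} per month")  # $1,000 -> $360
```

Because CVO applies the tiering policy automatically once enabled, capturing this saving is a configuration choice rather than an ongoing manual migration effort.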
7. TCO
If you visit the NetApp CVO TCO Calculator for AWS, you’ll see that CVO doesn’t become cheaper on a per TB basis until an organization hits 25 TB of storage (see Appendix). On Google Cloud, on the other hand, CVO becomes the cheaper per TB alternative before hitting 1 TB in capacity.
However, calculating TCO per TB basis misses a big part of multicloud project spending: labor costs. To get a handle on the cost outlook for a NetApp Cloud Volumes deployment, we developed a table that sums up the relative cost and time on task for the different roles typically engaged in the care and feeding of running multicloud workloads. The figures are based on a Fortune 100 company managing 1,000 volumes in two public clouds.
Table 7 in the Appendix lists the roles, hourly labor rates, and total three-year hours required to manage day-to-day storage operations. Table 8 breaks down the labor incurred to remediate outage incidents over a three-year span. These costs are broken into labor required for managing and securing a single storage platform versus managing storage separately in two clouds. Totals are over one-year and three-year operating cycles.
Figure 3 presents the data from Table 7 to show how the storage management labor costs over three years differ between a multicloud deployment using CVO and a two-cloud deployment that does not use CVO.
Figure 3. Cost to Establish and Operate a Two-Cloud Deployment With and Without NetApp CVO
Next, Figure 4 draws on the data from Table 8 to show the labor costs related to incident response and remediation activities over a three-year span. Here, NetApp CVO produces three-year incident response costs of $331,755, compared to $624,210 for an enterprise managing two clouds independently. Combining these figures with the day-to-day management costs depicted above, we see in Figure 4 that NetApp CVO cuts labor costs nearly in half.
Figure 4. Combined Management and Incident Response Labor Costs Over 3 Years
Both charts illustrate how NetApp CVO reduces the expense of unplanned downtime by enabling ease of use, lower cost at scale, and fewer incidents.
8. Analyst’s Take
Doing multicloud well and not getting dragged down by complexity can challenge any organization. To be successful, organizations need sameness somewhere in the infrastructure stack. Unfortunately, complexity leaves teams overwhelmed, burned out, and struggling not to create more technical debt.
NetApp CVO can be that consistency in the storage stack for organizations. Moving data where it needs to be and creating consistent, repeatable, and automatable workflows and processes while optimizing cloud spend solves many multicloud problems. From our testing of CVO with BlueXP, we found a cohesive platform that seemed to do these things. Getting data where it needs to be is more than replication. It’s also snapshots and cloning that enable teams to rapidly spin up tests.
In our tests, BlueXP cut through the mishmash of cloud provider interfaces, APIs, and overall differing cloud experiences with simplicity. This simplicity can greatly benefit teams that often end up re-working solutions and workflows in multicloud environments. In addition, most of the work in BlueXP was accomplished with drag-and-drop ease.
Lastly, NetApp’s ecosystem paired with a single storage platform can meet organizations wherever they are in their innovation journey—from on-premises ONTAP to Cloud Volumes ONTAP to a native, fully managed storage service offering like Amazon FSx for NetApp ONTAP. Organizations can continue to grow and innovate while ONTAP becomes the constant in an IT landscape rife with change.
9. Appendix
Environment Specifications
Following are specifications of the AWS and Google Cloud environments that we configured to test Cloud Volumes ONTAP (CVO), the results of which were used to derive the findings we have presented in this report.
For each environment we configured three virtual machines as follows:
- Cloud Volumes ONTAP
- Data Broker
- Data Connector
AWS Environment
1. Cloud Volumes ONTAP
Region: US-West-2
Instance: AWS m5.xlarge
Table 5. AWS Cloud Volumes ONTAP Storage (EBS)
| Size (GiB) | Type | Provisioned IOPS | Provisioned Throughput (MB/s) | Encrypted |
|---|---|---|---|---|
| 47 | io1 | 1,250 | | Yes |
| 140 | gp3 | 3,000 | 125 | Yes |
| 540 | gp2 | 1,620 | | Yes |
| 500 | gp3 | 3,072 | 250 | Yes |
| 1,024 | gp2 | 3,072 | | Yes |

Source: GigaOm 2022
2. Data Broker
Region: US-West-2
Instance: m5n.xlarge
Storage: EBS
Size: 10
Type: gp2
Provisioned IOPS: 100
Provisioned Throughput: None
Encrypted: No
3. Data Connector
Region: US-West-2
Instance: t3.xlarge
Storage: EBS
Size: 100
Type: gp2
Provisioned IOPS: 300
Provisioned Throughput: None
Encrypted: No
Google Cloud Environment
1. Cloud Volumes ONTAP
Region: us-east1-c
Table 6. GCP Cloud Volumes ONTAP
| Size (GiB) | Type | Purpose |
|---|---|---|
| 10 | pd-ssd | boot |
| 315 | pd-standard | core |
| 500 | pd-ssd | nvram |
| 64 | pd-ssd | root |
| 500 | pd-ssd | datadisk1 |

Source: GigaOm 2022
2. Data Broker
NAME: gcp-ontap-databroker-1-xaa-data-broker
ZONE: us-east1-b
MACHINE_TYPE: n1-standard-4
3. Data Connector
NAME: ontap-gcp-conn-2
ZONE: us-east1-c
MACHINE_TYPE: n1-standard-4
Cloud Volumes ONTAP vs Native Cloud Labor Savings
Here we present staffing costs along with the stakeholder roles and activities required to deploy 1,000 volumes on two native cloud environments versus doing the same using Cloud Volumes ONTAP. The assumptions used in our calculations were as follows:
- A multicloud deployment assumes the following activities: time to read request ticket, configure, deploy, wait for OS admin to mount, and close ticket.
- 10% of cloud volumes either need performance, financial, or size optimization.
- 5% of volumes have an incident that has to be resolved by a storage admin.
It should be noted that CVO currently scales to a maximum of 500 volumes per node. However, scaling does not impact labor savings values.
Table 7: Storage Management Base Labor Costs
| Role | Rate per Hour (USD) | Hours per Solution Setup | Ongoing Hours per Year per Solution | 3-Year Total Hours |
|---|---|---|---|---|
| Cloud Architect | $115 | 1 | 1 | 4 |
| Incident Managers | $90 | 2 | 1 | 5 |
| Help Desk Level 1 | $35 | 20 | 8 | 44 |
| Help Desk Level 2 | $60 | 40 | 16 | 88 |
| ITSM Admin | $60 | 40 | 40 | 160 |
| Security Admin | $105 | 80 | 40 | 200 |
| FinOps Admin | $65 | 8 | 12 | 44 |
| Storage Admin | $85 | 40 | 40 | 160 |

Source: GigaOm 2022
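As a check on Table 7's arithmetic, the 3-year hours column follows setup + 3 × ongoing, and multiplying by the hourly rate yields each role's three-year labor cost per solution. The short sketch below reproduces that math from the table's figures.

```python
# Reproduce Table 7's arithmetic: 3-year hours = setup + 3 * ongoing,
# and 3-year cost = rate * 3-year hours, summed across roles.
ROLES = {
    # role: (rate_usd_per_hour, setup_hours, ongoing_hours_per_year)
    "Cloud Architect":   (115, 1, 1),
    "Incident Managers": (90, 2, 1),
    "Help Desk Level 1": (35, 20, 8),
    "Help Desk Level 2": (60, 40, 16),
    "ITSM Admin":        (60, 40, 40),
    "Security Admin":    (105, 80, 40),
    "FinOps Admin":      (65, 8, 12),
    "Storage Admin":     (85, 40, 40),
}

def three_year_cost(rate: int, setup: int, ongoing: int, years: int = 3) -> int:
    """Labor cost for one solution over the operating cycle."""
    return rate * (setup + years * ongoing)

for name, v in ROLES.items():
    print(f"{name:18} ${three_year_cost(*v):>7,}")
total = sum(three_year_cost(*v) for v in ROLES.values())
print(f"{'Total':18} ${total:>7,}")
```

Running the same formula once per storage platform makes the two-cloud penalty visible: managing storage separately in two clouds roughly doubles these base management hours, which is the effect Figure 3 depicts.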
Table 8: Incident Response Labor Time on Task
| Criteria | 2 Cloud Native | NetApp CVO |
|---|---|---|
| Create Cloud Services per Cloud (hours) | 2 | 1 |
| Create 1,000 Cloud Volumes on Demand Without Automation (hours) | 2,000 | 1,000 |
| Triggers to Optimize 1,000 Cloud Volumes per Year (10%) | 100 | 100 |
| Incidents per 1,000 Volumes per Year (5%) | 50 | 50 |
| Hours per 1,000 Volumes Spent Remediating | 230 | 50 |
| Hours per 1,000 Volumes Spent at Help Desk Level 1 | 10 | 10 |
| Hours per 1,000 Volumes Spent at Help Desk Level 2 | 20 | 10 |
| Hours per 1,000 Volumes Spent by Storage Admin | 200 | 100 |
| Hours per 1,000 Volumes Spent by Security Admin | 20 | 10 |
| Hours Used by Incident Managers | 50 | 25 |

Source: GigaOm 2022
10. About Becky Elliott
Becky Elliott has worked as an independent industry analyst. Currently, Becky works in the public sector and has held roles in Dev, Ops, and the areas in between. Over the last twenty years, these roles have increasingly focused on virtualization, data management, and security. Becky has been a regular Tech Field Day Delegate since 2017. She’s active in both the Tech Field Day and vExpert communities. She holds several industry certifications, including the Certified Information Systems Security Professional (CISSP).
11. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
12. Copyright
© Knowingly, Inc. 2022 "NetApp Cloud Volumes ONTAP: A GigaOm Benchmark Field Test" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.