This GigaOm Research Reprint Expires: Jul 28, 2023

GigaOm Radar for Cloud-Native Kubernetes Data Storage v3.0

Persistent Storage Solutions for Cloud-Native Applications

1. Summary

The adoption of cloud-native, container-based architectures and application modernization continues to fuel demand for persistent storage on Kubernetes platforms. Organizations understand that the benefits of cloud-native workloads in terms of performance, scalability, and portability are key enablers for achieving business goals.

Many enterprises are already running cloud-native workloads and understand the benefits of more agile and flexible architectures, including application portability that enables frictionless workload movement from the data center to the cloud, and even across clouds, providing greater flexibility and responsiveness to business requirements than legacy technologies do.

Data storage solutions for Kubernetes environments have evolved since our last report, especially in the realm of migration and mobility, as well as in maturing enterprise features for security, advanced data services, and enhanced developer experience.

A common pattern in the adoption of persistent storage solutions for Kubernetes is the reuse of existing enterprise storage solutions. This is considered a safe bet for the first couple of deployments, but it can’t cope with the sheer number of backend operations required by Kubernetes at scale. This limitation, together with the complexity involved in managing multicloud environments with traditional storage, encourages users to look for smarter and more efficient alternatives.

Compared to other types of storage systems, Kubernetes-native storage offers an environment that is more friendly to development operations (DevOps), helping to build a hardware stack that can be controlled by the operations team while enabling developers to allocate and monitor resources quickly, in a self-service fashion, when necessary. This is a major boon for enterprise IT organizations looking for the smartest way to evolve their processes and align them with the latest business and technology requirements.

Organizations can now consider more factors than ever before, including financial and business issues, when choosing where their applications and data should run—and they want the freedom to decide where that should be. The public cloud is known for its flexibility and agility, while on-premises infrastructures can still hold an edge in efficiency, cost, and reliability. With widespread adoption across cloud, edge, and on-premises environments, Kubernetes is instrumental in executing the vision of portable, flexible, and agile hybrid cloud strategies, making applications and their data both portable and cloud-agnostic—for the most part. Kubernetes needs the right integration with infrastructure layers—such as storage—to complement its still-maturing native support for stateful data storage.

It’s still a significant task to select and implement a Kubernetes storage solution for persistent data that makes the most of Kubernetes’s application mobility and data portability capabilities.

With Kubernetes now supporting business-critical applications and services, requirements have become more stringent. Scalability, performance, resilience, security, and other non-functional requirements are the order of the day, and Kubernetes needs to do it all to ensure a consistent level of throughput without service disruptions. These requirements drive the demand for enterprise-class stateful data services, solid security controls, mature multitenant performance management—like quality of service (QoS) and bandwidth throttling—and thorough alerting, reporting, and monitoring.

Lastly, enterprises do not want to be locked into any single vendor’s ecosystem as they reap the benefits of Kubernetes’s portable and agnostic promise, and they’re looking for a storage solution that works with feature parity across on-premises and cloud infrastructures.

This report focuses on cloud-native persistent storage solutions for Kubernetes. These are architectures specifically designed to address the needs of cloud-native applications without compromising on performance or scalability. They are usually not engineered to co-exist with other workload types, such as virtualization.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

2. Market Categories

In this report, we’re evaluating Kubernetes-native storage, referring to solutions built specifically to support stateful containers with scalable, distributed architectures. Typically, the storage system itself runs as a set of containers alongside the application workloads on a Kubernetes cluster, exposing storage to those workloads through the Container Storage Interface (CSI).
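The provisioning flow this implies revolves around two Kubernetes objects: a StorageClass that names the CSI driver and its policy, and a PersistentVolumeClaim that workloads bind to. The following is a minimal illustrative sketch, expressed as Python dicts standing in for YAML manifests; the driver name and parameters are hypothetical and not taken from any vendor in this report:

```python
# Sketch of the two objects behind CSI dynamic provisioning. In practice these
# would be YAML manifests applied to the cluster; dicts are used for illustration.

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast-replicated"},
    "provisioner": "csi.example.vendor.com",  # hypothetical CSI driver name
    "parameters": {"replicas": "3", "fsType": "ext4"},  # driver-specific policy
    "reclaimPolicy": "Delete",
    "volumeBindingMode": "WaitForFirstConsumer",  # place the volume near the pod
}

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data", "namespace": "prod"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": storage_class["metadata"]["name"],
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

print(pvc["spec"]["storageClassName"])  # fast-replicated
```

When the orchestrator schedules a pod that references the claim, the named CSI driver creates and attaches the volume automatically, which is the "invisible to the user" automation described above.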

These distributed storage solutions are tightly coupled with the container orchestrator and are container-aware so that when the orchestrator spins up or destroys a container, it also handles storage provisioning and deprovisioning operations. Storage operations are automated and invisible to the user.

These solutions are built to recognize and solve the challenges of Kubernetes storage and thus seamlessly integrate with the container ecosystem. The architectures have the tightest integration with the container environment; they closely follow—and implement—new technologies and protocols developed to extend Kubernetes storage capabilities. They also provide the best performance in day-to-day usage.

These solutions also scale more easily, adhering to the autoscaling rules of the cluster. If a cluster node is added or removed, the storage system automatically scales up and down as well. This automation makes this type of storage very flexible and dynamic, closely aligning with the application design paradigms it supports. Often, solutions in this category use storage policies to decouple workloads from the physical storage media, and they are hardware-agnostic to support a wide range of commodity servers and cloud services without any fundamental adjustments.

To better understand the market and vendor positioning (Table 1), we assess how well solutions for cloud-native Kubernetes data storage are positioned to serve specific market segments.

  • Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
  • Large enterprise: Here, offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category will have a strong focus on flexibility, performance, data services, and features to improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in heterogeneous environments, including on-premises and cloud. Finally, the developer experience piece is weighed in this category, as large enterprises often need self-service capabilities for their development teams.
  • Independent service provider/managed service provider (ISP/MSP): In this category, solutions that are suitable for ISPs and MSPs are assessed. These should include additional security and multitenancy capabilities and the ability to throttle performance per tenant.

Key to a successful deployment is a solution’s ability to go where the data goes. In other words, it’s important to determine whether the data storage solution can be deployed on-premises, in the cloud, at the edge, and at smaller independent service providers. Such flexibility not only takes the solution’s architecture into account but also indicates whether it can be deployed easily across the variety of environments organizations have to cope with.

Table 1. Vendor Positioning

Market Segment

SMB Large Enterprise ISP/MSP
Red Hat
3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

Note that GigaOm is publishing another Radar report on Kubernetes storage focused on general-purpose enterprise storage systems that support Kubernetes-based container environments. Enterprise Kubernetes storage allows organizations to leverage existing deployed storage platforms to deliver persistent storage capabilities without having to architect new solutions. These solutions are mostly suited to mixed-workload environments or large data centers with a sizable investment in storage infrastructure.

3. Key Criteria Comparison

Building on the findings from the GigaOm report “Key Criteria for Evaluating Kubernetes Data Storage”, Table 2 summarizes how each vendor included in this research performs in the areas that we consider differentiating and critical in this sector. Table 3 follows this summary with insight into each product’s evaluation metrics—the top-line characteristics that define the impact each will have on the organization.

The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the market landscape, and gauge the potential impact on the business.

Table 2. Key Criteria Comparison

Key Criteria

Advanced (CSI) Integrations Deployment Models Advanced Data Services Control Plane Architecture Data Footprint Optimization Developer Experience Visibility & Insights
DataCore 2 3 1 3 1 3 2
Diamanti 2 3 3 3 3 2 3
IBM 3 3 2 3 2 3 3
Ionir 3 2 3 3 2 2 2
NetApp 2 3 3 3 3 3 3
Ondat 2 3 2 3 2 2 2
Portworx 3 3 3 3 2 3 3
Red Hat 3 1 2 2 3 3 2 3 3 3 3 2 3 3
SUSE 2 3 2 3 1 2 2
VMware 2 1 2 2 3 3 2
3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

In each vendor write-up, we take special note of the deployment models a solution supports, including:

  • Physical appliance (storage-only or hyper-converged)
  • Software-only Kubernetes-native deployment (Operator, Helm chart, CRD, and so forth)
  • Public cloud image or marketplace
  • Virtual appliance
  • Managed service
  • Software-only
  • Cloud-adjacent physical appliance or service, directly connected to the cloud

Table 3. Evaluation Metrics Comparison

Evaluation Metrics

Architecture Scalability Flexibility Efficiency Manageability Performance
DataCore 3 3 2 1 2 3
Diamanti 3 3 2 3 3 3
IBM 3 3 2 2 3 2
Ionir 3 3 3 3 2 3
NetApp 3 3 2 3 3 3
Ondat 3 3 2 2 2 3
Portworx 3 3 3 2 3 3
Red Hat 2 3 3 3 3 3 3 3 3 2 3 3
SUSE 3 2 2 1 2 3
VMware 2 2 3 2 3 2
3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

By combining the information provided in the tables above, the reader can develop a clear understanding of the technical solutions available in the market.

4. GigaOm Radar

This report synthesizes the analysis of key criteria and their impact on evaluation metrics to inform the GigaOm Radar graphic in Figure 1. The resulting chart is a forward-looking perspective on all the vendors in this report based on their products’ technical capabilities and feature sets.

The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation and Feature Play versus Platform Play—while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.

Figure 1. GigaOm Radar for Cloud-Native Kubernetes Data Storage

As you can see in the Radar chart in Figure 1, the cloud-native Kubernetes data storage space is evolving rapidly, solutions are innovative, and the market response is dynamic. This scenario explains why there are, like last year, no vendors in the upper half of the Radar, as customer requirements and vendor features keep evolving rapidly.

However, we see a broad characterization of different approaches in the market, with some vendors making the jump between categories (compared to last year’s report) based on customer demand.

In the Innovation, Platform-Play quadrant at the bottom right are vendors that are building cloud-native storage platforms. These are vendors that see persistent storage as their unique differentiation and are building a product portfolio around it.

Common in this group is the coupling of a storage platform with a Kubernetes distribution and cluster management product, the combination of which creates a highly integrated turnkey solution for customers looking to make their entrance into the world of Kubernetes. In this group, we see the most complete feature sets from the three strongest contenders in this Radar, each with strong enterprise approaches, mature advanced data services, and well-executed developer experiences.

In the group trailing slightly behind the Leaders circle are four Challengers, each with a completely different approach ranging from a focus on continuous data protection to a CNCF-backed open-source project. While the foundations of these products are solid, they’re missing the mark on some enterprise-grade capabilities.

In the Innovation, Feature-Play quadrant at lower left, we see three vendors with a different approach to the market, oriented more toward specific features, deployment models, or niche use cases.

VMware’s and Red Hat’s persistent storage for Kubernetes solutions are available only to users of their respective larger product sets, Tanzu and Red Hat OpenShift. Counterintuitively, this coupling of the storage feature set to a larger platform means the storage products themselves lack the breadth of scope required to be positioned in the Platform-Play sector. Nevertheless, both Tanzu and OpenShift users will find a compelling storage solution bundled into the products.

Both vendors are leading in their own right, creating a fully integrated developer experience, including storage, Kubernetes cluster management, and development workflows. Their solutions offer the most convenient path to cloud-native Kubernetes storage for enterprises that are already invested in these platforms, and have strong enterprise-grade feature sets and management capabilities.

There are a few defining characteristics of your organization’s IT profile that can help you make the right purchase decision. You can pin them down by asking:

  • Do you already have a storage platform in place that can also support Kubernetes-based workloads?
  • Do you already have a cluster management solution that comes with, or prefers, a certain solution?
  • Do you prefer (or require) a commercial solution based on an open-source project, or even a completely open-source solution?
  • Where and how are you deploying Kubernetes clusters? On-premises, cloud, edge, and certain Kubernetes-based platforms (like OpenShift and Tanzu) all have an impact on which solution is best for you, and vendors in this report support varying levels of deployment flexibility.
  • What advanced data services, including synchronous and asynchronous replication, snapshots, deduplication, compression, and data protection features for disaster recovery and backup, do your workloads require?
  • Is there a storage team capable of managing the storage, or does it require developer self-service capabilities?

Inside the GigaOm Radar

The GigaOm Radar weighs each vendor’s execution, roadmap, and ability to innovate to plot solutions along two axes, each set as opposing pairs. On the Y axis, Maturity recognizes solution stability, strength of ecosystem, and a conservative stance, while Innovation highlights technical innovation and a more aggressive approach. On the X axis, Feature Play connotes a narrow focus on niche or cutting-edge functionality, while Platform Play displays a broader platform focus and commitment to a comprehensive feature set.

The closer to center a solution sits, the better its execution and value, with top performers occupying the inner Leaders circle. The centermost circle is almost always empty, reserved for highly mature and consolidated markets that lack space for further innovation.

The GigaOm Radar offers a forward-looking assessment, plotting the current and projected position of each solution over a 12- to 18-month window. Arrows indicate travel based on strategy and pace of innovation, with vendors designated as Forward Movers, Fast Movers, or Outperformers based on their rate of progression.

Note that the Radar excludes vendor market share as a metric. The focus is on forward-looking analysis that emphasizes the value of innovation and differentiation over incumbent market position.

5. Vendor Insights


DataCore

In late 2021, DataCore acquired MayaData, the developer of Mayastor. Bolt, DataCore’s first new product resulting from this acquisition, is a proprietary, enterprise-grade cloud-native storage solution for Kubernetes, with firm roots in the Mayastor code base but positioned and built as a turnkey product to overcome Mayastor’s inherent complexities, such as its plug-in system and its community-controlled roadmap.

Bolt’s differentiation from Mayastor is its ease of use, both in terms of deployment and operations. It’s aimed at developer and DevOps users, not storage admins, broadening its applicability compared to Mayastor. Note that OpenEBS, the open-source project at the base of Mayastor, also remains available.

Bolt’s hyper-converged, containerized architecture allows it to scale with the application and takes care of node resilience by replicating volume data across nodes in a cluster. Reads are spread across replicas for optimal performance. Its Intel SPDK-based architecture is very well suited to high-performance, low-latency stateful applications.

Bolt is software-only and runs on on-premises hardware, as well as on multiple cloud platforms.

However, Bolt is a new entrant to the market, and is missing some critical features. It does not support many data services, including asynchronous replicas or clones. It currently supports only full-copy backups. While DataCore is expected to add these crucial missing features, this gap does raise the question of whether customers should choose Bolt or stay with Mayastor for the time being to enjoy features like application-consistent snapshots, data-at-rest encryption, and data optimization.

Strengths: A turnkey, opinionated fork of Mayastor, Bolt has potential to be a performance-oriented solution for companies without dedicated storage admins.

Challenges: Bolt’s enterprise capabilities (such as data protection, replication, and footprint optimization) are very limited compared to the competition and will need substantial effort from DataCore to reach feature parity. While the solution is under active development and expected to reach feature parity with Mayastor (as noted in last year’s report) in the near future, the question remains whether Bolt can unshackle itself from Mayastor’s inherent limitations and history.


Diamanti

Diamanti offers solutions consisting of Kubernetes cluster management (Spektra Enterprise) and Kubernetes data storage (Ultima Enterprise), with an optional hardware acceleration card for storage, Ultima Accelerator. The company targets enterprise-grade stateful application use cases.

Ultima Enterprise is Diamanti’s software-only, hyper-converged data plane that converges networking and storage. It can run on-premises or in the cloud on Amazon Web Services (AWS) and Google Cloud Platform (GCP) and offers various deployment options; however, native support for deploying to Azure is missing.

The Ultima data plane consists of a distributed storage platform that also provides L2 and L3 networking capabilities, data protection features, container and virtual machine (VM) support, and CNI/CSI plug-ins. The solution comes with enterprise-grade features. Data can be mirrored across availability zones. Basic crash-consistent snapshots, backup and restore, and disaster recovery (with recovery and fire drill workflows) are supported across clusters and clouds; volumes can be migrated across clouds using asynchronous replication. Notably, these migration features do not require Ultima storage on both the source and target environments, increasing migration flexibility.

Diamanti supports role-based access control (RBAC) and multitenancy (with Spektra), allowing policy-based isolation between tenants and teams. Those features, as well as its QoS support, are also a plus for MSPs considering delivering Kubernetes as a service to their clients. Data-at-rest encryption is supported at the volume and disk levels, and an advanced, built-in key management system is also provided.

Diamanti has a feature-rich management platform that allows organizations to manage multiple clusters across various clouds. It embeds cluster and application lifecycle management capabilities to enable faster application deployments. The management platform also integrates granular observability capabilities, providing an overall view of the environment’s health state and digging all the way down to the container level. The recent addition of GroundWork Monitor to its product portfolio will increase Diamanti’s monitoring and observability capabilities.

Spektra is the container management plane that enables management of Kubernetes clusters across clouds and locations (including core and edge), adding application and data mobility features, plus advanced data services, infrastructure observability, and control. Additionally, OpenShift is a factory-supported deployment option, with Ultima storage underneath, for customers more comfortable with that platform.

Strengths: Diamanti’s NVMe-based hyper-converged architecture delivers high resilience and good performance. The combination of its (now) software-only deployment models (supporting data center, cloud, and edge) and flexible data migration features make Diamanti a great data mobility solution.

Challenges: While this solution is software-only, its architecture exclusively supports NVMe drives, making it less suitable for brownfield deployments. Data protection and data reduction features are lagging behind the competition.


IBM

IBM delivers cloud-native Kubernetes storage capabilities through IBM Spectrum Fusion, a software-defined solution designed for OpenShift. Spectrum Fusion is built on a cloud-native architecture, delivering policy-based storage to OpenShift customers. Its strength is separating storage consumption (including more advanced data services) by developers from storage management by Kubernetes admins via policies, which are highly integrated into OpenShift. It offers block, file, and object services, as well as data protection features. It includes application-aware disaster recovery capabilities plus support for data migration use cases, and it offers data efficiency capabilities in the form of erasure coding support.

Spectrum Fusion can leverage existing enterprise storage systems, including non-IBM block storage. Spectrum Fusion is optionally available as an integrated hardware appliance, based on IBM Spectrum Scale, and its software-only deployment supports both on-premises and cloud environments.

Notably, Spectrum Fusion has GPU direct support for AI workloads. Security includes encryption capabilities, immutable snapshots, and RBAC.

The solution is managed using the IBM Spectrum Fusion HCI dashboard, which provides standard monitoring and alerting capabilities. Integrations are possible with IBM Cloud Satellite and OpenShift Advanced Cluster Management. IBM Spectrum Fusion also includes call-home support and troubleshooting capabilities.

An interesting feature of IBM Spectrum Fusion is the availability of application paks, which consist of ready-to-deploy packages for popular applications, such as Cassandra, Kafka, MongoDB, and SAP HANA.

Strengths: IBM’s offering is a Kubernetes storage solution purpose-built for the Red Hat OpenShift container platform, with easy deployment in hyperconverged mode.

Challenges: Spectrum Fusion is designed specifically for OpenShift, inhibiting broader adoption by non-OpenShift users. Its advanced data services are not as mature as those of IBM’s enterprise storage lineup.


Ionir

Ionir is a container-native, software-only storage solution for Kubernetes with advanced data capabilities. The solution consists of an elastic and scalable distributed microservices architecture, which implements a CSI plug-in that supports volume provisioning and snapshot management. Ionir uses NVMe over TCP, as well as the Intel SPDK framework, to provide an efficient I/O path and avoid performance bottlenecks.

Ionir’s metadata is based on a proprietary, patented database that records metadata on each write operation along with a name associated with the content of data and the time of the write. The timestamped record allows for retrieval of the state of a volume from any point in time in the past at the granularity of one second; this, in effect, translates to continuous data protection.
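The general technique can be illustrated with a toy model: a per-block, timestamped metadata log in which reading a volume "as of time t" means taking, for each block, the newest record not later than t. This is only an illustrative sketch of the concept, not Ionir's actual patented implementation; all names are invented:

```python
import bisect
from collections import defaultdict

# Toy model of time-addressed metadata: every write appends a
# (timestamp, content-name) record per block, so any past state of the
# volume can be reconstructed by a lookup, without restoring snapshots.
class TimeAddressedVolume:
    def __init__(self):
        # block id -> chronologically ordered list of (timestamp, content_hash)
        self.log = defaultdict(list)

    def write(self, block, ts, content_hash):
        # assume timestamps are monotonic per block, as in a write log
        self.log[block].append((ts, content_hash))

    def read_at(self, block, ts):
        # newest record with timestamp <= ts, i.e. the block's state at time ts
        records = self.log[block]
        i = bisect.bisect_right(records, (ts, "\uffff"))
        return records[i - 1][1] if i else None

vol = TimeAddressedVolume()
vol.write("blk0", ts=10, content_hash="aaa")
vol.write("blk0", ts=25, content_hash="bbb")

print(vol.read_at("blk0", 24))  # aaa -- the block as it was before the second write
print(vol.read_at("blk0", 25))  # bbb
```

Because every historical state is addressable by time, "restore to any second in the past" becomes a metadata operation rather than a data copy, which is what makes the continuous-data-protection behavior possible.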

Ionir leverages the unique metadata architecture to deliver advanced data management services, such as replication, migration, and disaster recovery. The solution allows persistent volumes to be copied or moved across clusters or even globally across clouds in 40 seconds or less, making it ideal for data migration or replication in time-sensitive environments. It does this by making the volumes and hot data accessible on the target cluster and rehydrating cold data in the background while transferring only unique blocks, effectively providing deduplication between clusters. This feature requires Ionir storage on both ends. This approach also works well in Dev-Test scenarios with large environments where complete copies of data and environments are needed quickly and on demand.

Ionir also provides excellent data efficiency: users can expect inline data deduplication augmented with compression, and thin provisioning. Erasure coding is not yet supported but is planned in a later release.

From a security perspective, data-in-flight encryption is supported. Data-at-rest encryption, including per-volume encryption that can coexist with deduplication, is on the roadmap and will leverage the company’s IP to avoid potential conflicts between encryption and deduplication. RBAC and self-service developer access are available but not yet mature.

Ionir has an easy-to-use management interface that handles all of the supported activities, including snapshot clone operations and granular data restores. The interface natively captures and exposes Kubernetes objects, including applications and secrets. Monitoring is handled through Prometheus and Grafana, while ELK handles logging, tracing, and visualization of log events. Ionir allows customers to enable these tools through a simple one-click deployment process.

Strengths: Ionir is a Kubernetes cloud-native storage solution with a strong focus on continuous data recovery and mobility, delivering per-second granularity. The solution has a well-rounded feature set that includes space efficiency.

Challenges: The solution has some features that have been lingering on the roadmap for over a year, including at-rest encryption and erasure coding. Multitenancy support is limited.


NetApp

NetApp Astra Data Store is NetApp’s cloud-native persistent storage solution for Kubernetes, built on top of its open-source Astra Trident project. ADS is built using NetApp’s WAFL technology but runs as a distributed storage system on standard servers using local storage in a shared-nothing architecture with node, rack, and data center failure awareness for resilience. Astra can run on-premises, in the cloud (using hyperscale cloud services), and as a fully managed cloud service using NetApp Cloud Volumes (offering enterprise-grade, high-performance storage on public clouds). It’s deployed on Kubernetes using an operator and can also run on bare metal or in VMs. The key differentiator for ADS is that it’s a re-engineered ONTAP, which means it is fully compatible with the ONTAP ecosystem. ADS supports ONTAP’s broad set of features, including data protection, global deduplication, compression, replication, and QoS.

Currently, ADS exposes only file-based storage; however, support for block and object is on the roadmap. The advantage, and a key selling point for NFS support, is that ADS supports both VM-based and container-based applications. In conjunction with ADS’s support for NetApp’s SnapMirror replication technology, ADS is uniquely positioned as an application modernization tool for customers already running ONTAP because ADS can receive SnapMirror replication from existing ONTAP environments. Similarly, ADS’s support for VMs makes it an ideal platform for edge deployments that require running both VMs and containers.

Astra Control is the multicluster, multi-environment management plane that sits on top of ADS, providing global storage management and health monitoring with NetApp Cloud Insights. Astra Control can also manage FAS and AFF arrays and the NetApp Cloud Volumes offerings, making it the control plane for cloud-native applications. It provides a clean and usable management interface that shows users all the information they need and the actions they can perform. Multitenancy is supported as well, with RBAC support and access granularity at the application level. At-rest encryption is supported when ONTAP is used as the storage provider, as it is with Cloud Volumes, where keys are managed by the Cloud Volumes service.

Strengths: Astra Data Store is ONTAP but re-engineered for cloud-native environments. This strong bedrock gives ADS ONTAP’s full set of features. Additionally, NetApp has a convincing roadmap showing lots of potential. Notable are the strong migration capabilities for customers already in the ONTAP ecosystem, as ADS supports both VMs and containers.

Challenges: Certain features are weak compared to the competition, most notably (synchronous) disaster recovery. The lack of block and object storage protocols may be problematic for some, but it is being actively addressed.


Ondat

Ondat, previously StorageOS, is a company focused on delivering cloud-native persistent block storage capabilities to Kubernetes environments. The solution aims to address the storage needs of high-performance, mission-critical containerized applications.

The architecture consists of containers that are local to each Kubernetes cluster node. Each of these containers manages the locally attached storage present on the node it runs on. The capacity across all of these nodes is aggregated into a pool that is presented to the cluster. The solution can deploy both on-premises and in the cloud with feature parity, has wide integration with cloud marketplaces, and is certified for EKS, AKS, GKE, Anthos, Rancher, OpenShift, and more.

The Kubernetes orchestrator can then communicate with Ondat to provision or deprovision persistent volumes as needed for any of the containers executing on the nodes in the cluster. The solution is resilient through cross-node volume replicas and is built to deliver both scalability and performance, with particular attention to latency-sensitive workloads such as databases.
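The provisioning flow described above is the standard Kubernetes dynamic-provisioning pattern: a workload submits a PersistentVolumeClaim referencing a StorageClass, and the storage driver creates the backing volume. A minimal sketch follows; note that the storage class name `ondat-replicated` is a hypothetical placeholder for illustration, not Ondat's actual class name.

```python
# Minimal sketch of the PersistentVolumeClaim a workload submits so that
# Kubernetes asks the storage driver for a volume. Field names follow the
# upstream Kubernetes API; the class name "ondat-replicated" is an assumption.
def make_pvc(name: str, namespace: str, size_gi: int, storage_class: str) -> dict:
    """Build a standard Kubernetes PersistentVolumeClaim manifest as a dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("pg-data", "databases", 50, "ondat-replicated")
print(pvc["spec"]["resources"]["requests"]["storage"])  # -> 50Gi
```

Once such a claim is applied, the orchestrator binds it to a volume carved out of the pooled capacity, and deprovisioning follows the reverse path when the claim is deleted.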

The solution currently supports synchronous replication, but only within a cluster. However, a cluster can stretch across availability zones using topology-aware, in-cluster replica placement.

A delta sync feature replicates only the missing data in case of a cluster rebuild. For optimization and efficiency, Ondat uses data compression and an intelligent thin provisioning feature. Erasure coding and deduplication are not supported.

Multiple storage classes are currently supported, but additional multitenancy features (such as QoS and affinity groups) are not yet available, though they are on the roadmap. Security is a notable strength, with RBAC and namespaces, data-in-transit and data-at-rest encryption, and the ability to use unique per-volume encryption keys.

Besides its own graphical user interface, the solution also integrates with Prometheus and Grafana, with a focus on IOPS, bandwidth, and free space. Although Ondat primarily targets business-critical applications, its architectural foundation is well suited to address edge use cases (thanks to its low overhead), and it should be a great fit once asynchronous replication capabilities are implemented.

Strengths: Ondat is a lightweight solution with a lot of potential and currently is a strong fit for performance-oriented cloud-native applications. The solution provides a robust and scalable architecture designed to meet demanding latency and throughput requirements and is engineered to run on any platform, with potential for edge use cases.

Challenges: Lack of inter-cluster and asynchronous replication limits migration and (some) disaster recovery scenarios, restricting its overall applicability as a multicloud storage fabric and in some mission-critical use cases. A lack of advanced data efficiency mechanisms, such as deduplication and erasure coding, may become a challenge in the future.

Portworx by Pure Storage

Portworx is one of the most advanced solutions for cloud-native Kubernetes storage. PX-Store is a hyper-converged, Kubernetes-native storage solution that aggregates and pools storage capacity for cluster consumption. A series of data management components that are part of the Portworx Data Services platform delivers more advanced capabilities, including database lifecycle management.

The solution offers broad deployment choices and supports bare metal and virtualized environments, including Pure Storage physical arrays, existing cloud block services, and cloud-based Kubernetes services, as well as those from other ecosystem partners, providing a consistent experience across infrastructures, platforms, and locations.

Portworx includes a comprehensive set of advanced data services.

Portworx Data Services’ database-as-a-service platform is a unique capability, automating the lifecycle of database provisioning and deployment, Day 2 operations, and data protection with support for Apache Cassandra, Apache Kafka, Apache ZooKeeper, PostgreSQL, RabbitMQ, and Redis.

PX-Backup handles data protection and supports application-consistent backups that are Kubernetes-complete; that is, not only is the data backed up but also the entire application state, including all objects, application configuration data, and dependencies. Granularity is provided, allowing organizations to back up either individual applications or thousands of applications and namespaces, and to define schedule policies as required. Restores can be performed locally or on any cloud.

PX-Store is a modern, distributed, container-optimized, cloud-native storage solution with elastic scaling, storage-aware class-of-service, multiwriter shared volumes, local snapshot capabilities, and multiple failover options (node aware, rack aware, availability-zone aware). Local synchronous replication for data center resilience is also supported.

PX-DR (an add-on module) expands those capabilities to provide disaster recovery and data replication capabilities. It supports multisite synchronous replication and zero recovery point objective (RPO) disaster recovery within a metro area, and multisite asynchronous replication for cross-WAN connections. PX-Migrate handles multicloud and multicluster app migrations, as well as snapshots and application-consistent snapshots to the cloud.

PX-Secure constitutes the security layer of the Portworx solution, offering cluster-wide (per-volume) encryption, granular container-based or storage class encryption (available when organizations bring their own key management system), RBAC, authorization and ownership mechanisms, as well as integration with Active Directory and LDAP through OIDC.
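The Kubernetes-side RBAC that such security layers build on can be illustrated with a standard `rbac.authorization.k8s.io` Role and RoleBinding scoping volume operations to a namespace. This is a generic Kubernetes sketch, not Portworx's own API; the namespace and group names are illustrative.

```python
# Generic Kubernetes RBAC sketch: a namespaced Role allowing a team to manage
# only PersistentVolumeClaims, plus a RoleBinding tying it to a directory group
# (e.g., one mapped in via OIDC). Namespace and group names are illustrative;
# the API fields are upstream Kubernetes.
pvc_admin_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pvc-admin", "namespace": "team-a"},
    "rules": [{
        "apiGroups": [""],  # "" is the core API group, where PVCs live
        "resources": ["persistentvolumeclaims"],
        "verbs": ["get", "list", "create", "delete"],
    }],
}

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "pvc-admin-binding", "namespace": "team-a"},
    "subjects": [{"kind": "Group", "name": "storage-admins",
                  "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pvc-admin",
                "apiGroup": "rbac.authorization.k8s.io"},
}
```

Because the Role is namespaced, members of the bound group can manage volumes only in `team-a`, which is the granularity the ownership and authorization mechanisms described above operate at.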

Finally, PX-Autopilot orchestrates automated space reclamation activities at the container volume level as well as on entire storage clusters through resizing activities, with the goal of keeping storage costs under control.

The solution is managed via PX-Central, a comprehensive management plane that handles multicluster management, command-line interface (CLI) capabilities, proactive centralized monitoring, and cluster installation and setup functions. Integration with Pure Storage Pure1 allows this platform to consume telemetry data from Portworx and deliver app-centric analytics and, eventually, recommendations.

From an efficiency perspective, the solution handles compression for all snapshots, but true data reduction is achievable only when Portworx uses an underlying enterprise-grade platform with built-in data efficiency capabilities, such as Pure Storage FlashArray.

Strengths: Portworx is a complete enterprise-grade solution with outstanding data management capabilities, unmatched deployment possibilities, and superior management features. Portworx remains the gold standard in cloud-native Kubernetes storage for the enterprise.

Challenges: Data efficiency capabilities are limited when the solution is not coupled with enterprise shared storage.

Red Hat

Part of a portfolio of solutions named Red Hat Data Services, Red Hat OpenShift Data Foundation (ODF) is a cloud-native storage solution based on Red Hat Ceph, Rook, and NooBaa. The solution is scalable and resilient, and currently supports only Red Hat OpenShift, which is itself based on Kubernetes. For organizations considering consolidation on one technology stack, ODF provides frictionless operations at the storage layer.

ODF is versatile and supports block, file, and object storage. It can be deployed on-premises or in the cloud and supports snapshots and clones. For data protection, Red Hat’s approach is to enable the ecosystem of third-party data protection vendors through its APIs. Advanced data protection features, including replication and disaster recovery, are available only in the Advanced Edition.

ODF delivers strong performance without compromising on data optimization capabilities: erasure coding, compression, and deduplication are currently supported. Multitenancy capabilities go beyond Kubernetes storage classes and include support for ResourceQuotas and LimitRanges, giving organizations control over resource usage and enabling them to overcome the hurdles of workload consolidation and the adverse impact from noisy neighbors.
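The multitenancy controls mentioned above are standard upstream Kubernetes objects. As a sketch, a ResourceQuota can cap a namespace's storage consumption both in aggregate and per storage class; the namespace and the `gold` class name here are illustrative, while the quota keys are standard Kubernetes.

```python
# Sketch of a Kubernetes ResourceQuota capping storage consumption in one
# namespace. The "gold" storage class name and the limits are illustrative;
# the key names are the standard Kubernetes quota keys for storage.
storage_quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "storage-quota", "namespace": "team-a"},
    "spec": {
        "hard": {
            "persistentvolumeclaims": "10",  # max number of PVCs in the namespace
            "requests.storage": "500Gi",     # total capacity requestable
            # Cap requests made against one specific storage class:
            "gold.storageclass.storage.kubernetes.io/requests.storage": "200Gi",
        }
    },
}
```

Combined with LimitRanges (which bound the size of individual claims), quotas like this keep a noisy tenant from consuming the shared pool.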

The solution is excellent from a security perspective, with support for in-flight and at-rest data encryption (at the physical and volume levels). Key management is also supported by ODF. Monitoring and reporting capabilities are good, with integrations into the OpenShift console giving organizations all the basic performance and health metrics.

Finally, edge deployments are also supported when ODF is deployed in Compact Mode, starting at three nodes.

Strengths: A cloud-native storage solution with enterprise-grade features and an innovative approach to cloud deployments, ODF delivers solid value on multiple capabilities. Managed OpenShift services and storage options are available on multiple clouds now, enabling users to execute on their hybrid and multicloud strategy.

Challenges: Current support is limited to Red Hat OpenShift. Advanced data services remain a weak area. Even though Red Hat OpenShift has good data protection APIs, its approach to data protection relies on the solution ecosystem, which might deter small organizations looking for an integrated solution.

Robin.io

Robin.io is an innovative, application-aware, cloud-native Kubernetes solution with enterprise-grade capabilities. The solution can run anywhere, either on-premises (bare metal, virtual machines) or on all major public cloud providers. The company was acquired by Rakuten Symphony in early 2022.

The product, called Cloud Native Storage for Kubernetes (CNS), discovers and pools local disks of any type on Kubernetes cluster nodes but can also pool storage capacity from cloud disks and SAN systems. Robin CNS delivers a resilient architecture with strictly consistent replicas across cluster nodes, auto-resync for nodes falling behind, and fast-failover capabilities. The solution enables bare-metal performance, live data rebalancing to avoid I/O bottlenecks, and use of QoS to throttle IOPS usage. QoS isn’t limited to storage but also extends to CPU, memory, and network resources.

CNS shines with its advanced data services. Multiple replication modes are supported, with awareness at the node, rack, data center, and zone levels, providing organizations with sufficient granularity. To satisfy application-level deployment and performance requirements, advanced placement capabilities allow organizations to define fine-grained placement policies using affinity/anti-affinity rules. Robin’s management interface includes an “application bundles” section that provides rapid deployment capabilities akin to an app store experience while respecting best-practice deployment topologies for those applications.

The solution also supports snapshots and application-consistent, incremental forever backups. Replication capabilities can be used for data copy and application cloning, disaster recovery, and application mobility across clouds. Data compression is possible, and object storage is supported through integrations with MinIO.

Robin CNS supports per-volume encryption, although customers have to operate their own key management system. Monitoring and observability capabilities have been improved, with additional visualizations in the UI, while also opening up the data source for scraping using a third-party monitoring tool.

Although it’s available as a stand-alone product, CNS can be coupled with CNP, Robin’s Kubernetes management solution, for a fully integrated infrastructure stack. This solution is well suited to address edge computing use cases. Robin has a proven track record with various telcos, for whom edge deployments related to 5G infrastructure are one of the major use cases for containers.

Strengths: Robin delivers a comprehensive, feature-rich, enterprise-grade experience with an uncompromising adherence to cloud-native development and deployment principles. Advanced data services and application-awareness capabilities are among the highlights of this solution, and its backup solution has recently opened up to support non-Robin storage.

Challenges: Improvements in migration capabilities (including onboarding applications not running on Robin storage nodes and attaching Robin storage to non-Robin clusters), as well as in security and data footprint optimization, would further strengthen Robin’s position as a leader, although Rakuten’s recent acquisition of the company may affect Robin’s roadmap and future.

SUSE

Longhorn is an open-source, cloud-native storage solution originally developed by Rancher Labs (now part of SUSE). It was accepted into the Cloud Native Computing Foundation (CNCF) in 2019 and is currently an incubating project.

Longhorn provides resilient persistent storage for Kubernetes through a two-layer architecture consisting of a data plane and a control plane, with Kubernetes itself handling orchestration. The data plane consists of distributed block storage that aggregates and pools the local disk capacity available on each of the nodes. The control plane, via the Longhorn manager, creates volumes by spinning up Longhorn engine instances on the node the volume is attached to and then creates replicas on the nodes where they should be placed. The outcome is a distributed and resilient storage platform with high-performance characteristics. Although Longhorn prioritizes resiliency, performance is adequate and may see further improvements as a result of roadmap development activities.

This solution handles backups and snapshots using a copy-on-write block storage layer that allows point-in-time recovery. Those backups can be exported either to S3 or NFS for offsite storage. The same technology can be used for disaster recovery and replication use cases with an active-passive cluster topology, making multisite disaster recovery possible. A feature called “disaster recovery volumes” also enables cross-region asynchronous replication in the cloud, with defined RPOs and reduced recovery time objectives (RTOs).
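A sketch of how a disaster recovery volume is typically expressed: a StorageClass whose parameters seed the volume from an existing backup on the offsite target. The parameter names below follow Longhorn's documented StorageClass parameters at the time of writing, but treat this as an assumption-laden illustration; the bucket URL and backup identifiers are placeholders, not real values.

```python
# Illustrative Longhorn StorageClass for a volume restored from an offsite
# backup target, supporting the "disaster recovery volumes" pattern described
# above. Parameter names are believed to match Longhorn's documented
# StorageClass parameters; the backup URL and identifiers are placeholders.
dr_storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "longhorn-dr"},
    "provisioner": "driver.longhorn.io",
    "parameters": {
        "numberOfReplicas": "3",          # in-cluster replica count
        "staleReplicaTimeout": "2880",    # minutes before a stale replica is cleaned up
        # Seed the volume from an existing backup on the S3/NFS backup target
        # (placeholder URL, not a real backup):
        "fromBackup": "s3://backup-bucket@us-east-1/?backup=backup-abc&volume=pg-data",
    },
}
```

In an active-passive topology, the passive cluster continuously pulls incremental backups into such a DR volume, keeping the RPO bounded by the backup interval and shrinking the RTO to the time needed to activate the volume.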

The solution offers no particular data footprint optimizations, although backups are compressed and based on changed block tracking. Some techniques are used on secondary storage to either reclaim unused space or apply some degree of deduplication on backup blocks within a single volume. There are no plans to implement data efficiency capabilities for in-cluster storage because of a focus on high performance and resilience. Organizations are thus expected to leverage application-level data efficiency mechanisms.

On the security side, RBAC is supported through Kubernetes, and integration with Rancher technology enables the use of Active Directory and other enterprise-grade authentication providers. In-flight and at-rest encryption for data volumes are supported. Monitoring and alerting are handled through the standard Prometheus and Grafana integrations.

Organizations can deploy Longhorn as a standalone solution or benefit from its strong integration with Rancher. Notably, Harvester is SUSE’s all-in-one hyper-converged solution integrating Longhorn’s storage capabilities with Rancher’s multicluster management capabilities.

Strengths: Longhorn is an interesting choice for those seeking an open-source, CNCF-backed storage solution. In conjunction with Harvester, the solution’s migration capabilities are a great fit for organizations looking to bridge the gap between virtualization and cloud-native architectures.

Challenges: Longhorn’s feature list is limited, and some core capabilities are missing, such as support for large volumes (over 1 TB) and data footprint optimization.

VMware

VMware Tanzu is built on top of vSAN and thus can be used either in standard on-premises VMware vSphere environments with vSAN or as a part of VMware Cloud Foundation (VCF). VCF offers a full hybrid cloud experience and vSAN constitutes VCF’s storage foundation.

When Tanzu is deployed on vSAN, it allows the consolidation of traditional virtualized workloads and cloud-native applications on the same layer and is therefore best for organizations already using vSAN in production environments. This mode allows storage to be provided to cloud-native workloads from the same storage clusters without any architectural changes.

VMware also offers an additional deployment option via the vSAN Data Persistence platform (DPp), a framework for modern stateful service providers to build Kubernetes plug-ins or operators on and for the underlying vSphere infrastructure. Stateful services running on the DPp can be deployed on a vSAN datastore with the vSAN host-local shared-nothing architecture (SNA) policy or in a second mode called vSAN Direct. The first option, SNA policy, allows the application to control placement and take over the duty of maintaining data availability. The technology makes it easy for the persistent service to co-locate its compute instance and a storage object on the same physical ESXi host. With the host-local placement, it is possible to perform such operations as replication at the service layer and not at the storage layer.

The second option, vSAN Direct, consists of dedicated hardware with optimal storage efficiency and near bare-metal performance. vSAN Direct allows modern stateful services to leverage the availability, efficiency, and security features built into the modern stateful service layer, and to have direct access to the underlying direct-attached hardware.

Part of Tanzu’s strength derives from vSAN’s Storage Policy-Based Management (SPBM) capabilities. Various storage policies can be created, each with different resilience requirements, capabilities (such as encryption), QoS (IOPS throttling), and so on. SPBM can be expanded by organizations using existing API integrations to automate container-provisioning workflows. Individual software vendors can integrate their application’s native data management, replication, and service capabilities (such as app-level replication, erasure coding, and encryption) directly into vSAN DPp to shift some of the storage policies at the application level and avoid resource waste.

Management of the Tanzu environment is handled through Tanzu Mission Control, which allows multicluster Kubernetes management on-premises and across clouds. Data migration is available through Velero.

The solution offers great security capabilities with software-based in-flight and at-rest data encryption, FIPS 140-2 cryptographic modules, support for third-party KMIP-compliant key managers, and the ability to enable datastore-level encryption with a single click. RBAC is natively supported through vSphere and VCF.

Strengths: Tanzu is ideally suited to organizations with a strong VMware focus because they already have the building blocks in place to adopt Tanzu quickly and effortlessly, enabling a great developer experience with little friction.

Challenges: Although very well architected, Tanzu’s dependency on other VMware products creates platform overhead that is unnecessarily complex for organizations looking for a pure cloud-native deployment model.

6. Analyst’s Take

The market for persistent Kubernetes storage is moving quickly with lots of innovation, but so are its customers, who are demanding more mature enterprise-grade solutions with each passing year.

That means requirements are shifting and becoming stricter year over year. This market dynamic benefits customers looking for a Kubernetes-native persistent storage solution, but choosing the right solution in this ever-changing market is paramount, as each vendor focuses on a different set of priorities.

In this space, we see two groups of competitors, roughly divided between those that see persistent storage as their unique differentiation in the market and so are building a product portfolio around it (including various Kubernetes cluster-management solutions) and those for whom storage is but one feature in a larger platform play, usually Kubernetes-based developer platforms.

It’s in the former group that we see the most complete feature sets, with each vendor positioning itself uniquely against the competition. Discovering which vendor’s positioning best matches your requirements will be beneficial for long-term success, whether those requirements center on performance, scalability, advanced data services (such as replication or deduplication), specific deployment models (for edge and other use cases), or developer experience and self-service capabilities.

Similarly, the market has evolved and matured beyond proof of concept and early production environments, and has firm security and other enterprise-grade requirements. However, not all vendors have caught up with these demands, and some lack basic security capabilities or even basic data services like snapshots.

It’s worth the effort to investigate a vendor’s capabilities beyond just storage, as lots of innovation is happening at the interface between storage and Kubernetes cluster management, including emerging deployment models for highly integrated turnkey solutions for edge and bare metal.

7. About Joep Piscaer

Joep Piscaer

Joep is a technologist with team-building and tech marketing skills. His background includes roles as a CTO, cloud architect, infrastructure engineer, and DevOps culture coach. He has built many engineering and architecture teams and shaped their culture.

He is the founder of TLA Tech, a tech marketing firm focusing on cloud-native technologies, and occasionally co-hosts theCUBE. He blogs at

8. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

9. Copyright

© Knowingly, Inc. 2022 "GigaOm Radar for Cloud-Native Kubernetes Data Storage" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact