Primary storage systems for large enterprises have adapted quickly to new needs and business requirements, with data now accessed from both on-premises and cloud applications. We’re in a transition phase from storage systems designed to be deployed in data centers to hybrid and multi-cloud solutions, with similar functionalities provided on physical or virtual appliances as well as through managed services.
The concept of primary storage, data, and workloads has radically changed over the past few years. Mission- and business-critical functions in enterprise organizations were concentrated on a few monolithic applications based on traditional relational databases. In this scenario, block storage was often synonymous with primary storage, and performance, availability, and resiliency were prioritized, usually at the expense of flexibility, ease of use, and cost.
Now, after the virtualization wave and the exponential growth of microservices and container-based applications, organizations are shifting their focus to AI-based analytics, self-driven storage, and improved automation, as well as deeper Kubernetes integration. Moreover, demand for performance remains high; support for new storage media types and NVMe transport protocols is now becoming the gold standard.
Finally, organizations have not abandoned their appetite for cost optimization. When it comes to total cost of ownership (TCO) and flexibility, the emergence of storage-as-a-service (STaaS) provides cloud-like consumption models that are increasingly sought after.
When it comes to modern storage, and block storage in particular, flash memory and high-speed Ethernet networks have commoditized performance and reduced costs, allowing for more freedom in system design. Fibre Channel remains a core component of many storage infrastructures, though largely for legacy reasons. At the same time, enterprise organizations are working to align storage with broader infrastructure strategies that address issues such as:
- Better infrastructure agility to speed up response to business needs
- Improved data mobility and integration with the cloud
- Support for a larger number of concurrent applications and workloads on a single system
- Simplified infrastructure
- Automation and orchestration to speed up and scale operations
- Drastic reduction of TCO, along with a significant increase in the capacity managed per sysadmin
These efforts have contributed to the growth in the number of solutions, as startups and established vendors alike move to address these needs. Traditional high-end and mid-range storage arrays have been joined by software-defined and specialized solutions all aimed at serving similar market segments but differentiated by the focus they place on the various points described above. A one-size-fits-all solution doesn’t exist. In this report, we will analyze several aspects and important features of modern storage systems to better understand how they impact the metrics for evaluating block storage systems, especially in relation to the needs of each IT organization.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
2. Market Categories and Deployment Types
This report is designed specifically around solutions for large enterprises. For the reader’s benefit, we also provide insight into how the vendor was evaluated (in terms of ability to address market segment needs) in the other companion radars for small and midsize businesses.
For a better understanding of the market and vendor positioning (Table 1), we assess how well solutions for primary storage are positioned to serve specific market segments:
- Small businesses: In this category, we assess solutions on their ability to meet the needs of small businesses, for whom ease of use and $/GB are important focus areas.
- Midsize businesses: In this category, we judge solutions on their ability to meet the needs of medium-sized companies. Also assessed are departmental use cases in large enterprises, where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
- Large enterprises: Here offerings are evaluated on their ability to support large and business-critical projects. Optimal solutions in this category will have a strong focus on flexibility, performance, data services, and features to improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.
- Specialized: Optimal solutions are designed for specific workloads and use cases, such as managed service providers, big data analytics, and high-performance computing (HPC).
In addition, we recognize two deployment models for solutions in this report: hardware appliance and software-defined storage.
- Hardware appliance: These solutions are provided as a self-contained physical device with all the components necessary to deliver primary storage capabilities. The device is fully supported by the vendor, and other than managing the platform, the customer only needs to apply hotfixes or patches. This deployment model delivers simplicity at the expense of flexibility.
- Software-defined storage: These solutions are meant to be deployed on commodity servers on-premises or in the cloud, allowing organizations to build hybrid or multi-cloud storage infrastructures. This option provides more flexibility in terms of deployment, cost, and hardware choice, but it can be more complex to deploy and manage.
Table 1. Vendor Positioning
Columns: Small Businesses | Midsize Businesses | Large Businesses | Specialized | Hardware Appliance | Software-Defined Storage

Rating scale:
- Exceptional: Outstanding focus and execution
- Capable: Good but with room for improvement
- Limited: Lacking in execution and use cases
- Not applicable or absent
Readers should note that the vendor positioning data in Table 1 above is a consolidated view across all three primary storage Radars, i.e., for small businesses, midsize companies, and large enterprises.
The ratings provided above are performed across each vendor’s entire primary storage portfolio. Some of the vendors listed below will not appear in this radar, but our intent is to provide a holistic view of primary storage solutions across market segments and deployment models.
3. Key Criteria Comparison
Building on the findings from the GigaOm report, Key Criteria for Evaluating Primary Storage, Table 2 summarizes how each vendor included in this research performs in the areas that we consider differentiating and critical in this sector. Table 3 follows this summary with insight into each product’s evaluation metrics—the top-line characteristics that define the impact each will have on the organization. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the market landscape, and gauge the potential impact on the business.
Table 2. Key Criteria Comparison
Columns: AI-Based Analytics | New Media Types | NVMe-oF | NVMe/TCP | Cloud Integration | API and Automation Tools | Kubernetes Integration | Storage-as-a-Service

Rating scale:
- Exceptional: Outstanding focus and execution
- Capable: Good but with room for improvement
- Limited: Lacking in execution and use cases
- Not applicable or absent
Table 3. Evaluation Metrics Comparison
Columns: System Lifespan | Efficiency | Flexibility | Ease of Use | $/IOPS | $/GB

Rating scale:
- Exceptional: Outstanding focus and execution
- Capable: Good but with room for improvement
- Limited: Lacking in execution and use cases
- Not applicable or absent
By combining the information provided in the tables above, the reader can develop a clear understanding of the technical solutions available in the market.
4. GigaOm Radar
This report synthesizes the analysis of key criteria and their impact on evaluation metrics to inform the GigaOm Radar graphic in Figure 1. The resulting chart is a forward-looking perspective on all the vendors in this report based on their products’ technical capabilities and feature sets.
The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—Maturity versus Innovation, and Feature Play versus Platform Play—while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.
Figure 1. GigaOm Radar for Primary Storage for Large Enterprises
As you can see in the Radar chart in Figure 1, the majority of vendors are platform players (on the right side). Among these, three groups can be identified.
The first group consists of innovators: NetApp and Pure Storage are side by side, with Infinidat closing in. NetApp has a strong vision around large enterprise needs, with unmatched multi-cloud support capabilities and a compelling Kubernetes solution. Traditionally a provider of high-end systems, it is now completing its offering with QLC-based, capacity-oriented systems to strike a balance between performance and capacity. Pure Storage has built its strategy on a vision in which storage becomes gradually abstracted, leaving room for a cloud-based consumption model built on self-driven storage that empowers storage consumers such as developers. This vision pervades its solutions with Pure1, an extremely innovative AIOps solution; an excellent STaaS model; and best-in-class on-premises Kubernetes support. Pure Storage also gives a nod to the large enterprise market with the very recent introduction of its FlashArray//XL systems, built for scale and performance. Infinidat traditionally targets the large enterprise market with a modern hybrid architecture well known for its cost effectiveness. The solution has several areas of strength, such as AI-based analytics, great automation capabilities, improving Kubernetes support, and a compelling STaaS model. Although Infinidat has lagged behind in all-flash support, it has been able to deliver enterprise-class performance with its AI-based DRAM caching layer. Additionally, Infinidat started shipping an all-flash version of the InfiniBox in December 2020 that comes with an excellent NVMe/TCP implementation.
The second group, comprising Hitachi Vantara and Dell Technologies, consists of mature solutions that are shifting toward the innovation space. Hitachi Vantara provides a robust, multi-controller architecture praised for its reliability, but the company is innovating in several areas, particularly with a solid suite of operations management tools that includes AI-based capabilities, and with an interesting STaaS offering. Dell Technologies is also moving toward innovation with its PowerMax platform. PowerMax systems inherit all of the capabilities of the VMAX but also embed an ML-based engine that continuously assesses system state and makes placement and optimization decisions. Dell is innovating at the software and services level as well, through its CloudIQ AIOps and management platform and its promising but still partly roadmapped APEX STaaS service.
The third group consists of HPE and IBM, two vendors well known for their robust and proven solutions. HPE recently revamped its large enterprise offering with the Alletra 9000, a new NVMe-based solution that supersedes its Primera offering but does not bring significant architectural improvements. However, HPE is demonstrating strength in several adjacent areas, such as AI-based analytics and self-service management with InfoSight and Data Services Cloud Console. The company is also betting heavily on its service provider transformation, offering its entire portfolio through HPE GreenLake, a cloud-like consumption model that also includes STaaS. From that perspective, hardware capabilities become abstracted, and what matters to the consumer is the set of data services. IBM, although on a more conservative path, is evolving in a similar fashion. High-end IBM FlashSystem 9200 arrays provide excellent reliability, massive scalability, and NVMe flash performance. IBM has also built a solid AI-based analytics and management platform and is building up its STaaS offering; however, Kubernetes support still needs improvement.
On the left side of the Radar, three feature-play fast movers can be identified. Pavilion Data provides a no-compromise NVMe implementation that delivers massive scalability and performance across block, file, and object protocols. The solution is particularly well engineered and moving toward the center, but it currently lacks several capabilities, such as AI-based analytics, self-service management, and cloud integration. Excelero is an interesting case with its NVMesh software-defined architecture, built around a compelling implementation of the NVMe-oF and NVMe/TCP protocols. Although NVMesh is particularly capable of supporting high-performance workloads, the solution also supports QLC flash. NVMesh still lacks complete coverage of large enterprise needs in several areas, but Excelero is gradually developing capabilities for cloud integration and Kubernetes support. Finally, Zadara takes a different approach with Zadara Edge Cloud Services. It provides compute, storage, and networking resources deployed either on-premises or in the cloud, offered through a cloud-like, as-a-service consumption model. The entire stack is fully managed by Zadara, eliminating complexity and management overhead; however, organizations have to operate within the capabilities of the solution.
Inside the GigaOm Radar
The GigaOm Radar weighs each vendor’s execution, roadmap, and ability to innovate to plot solutions along two axes, each set as opposing pairs. On the Y axis, Maturity recognizes solution stability, strength of ecosystem, and a conservative stance, while Innovation highlights technical innovation and a more aggressive approach. On the X axis, Feature Play connotes a narrow focus on niche or cutting-edge functionality, while Platform Play displays a broader platform focus and commitment to a comprehensive feature set.
The closer to center a solution sits, the better its execution and value, with top performers occupying the inner Leaders circle. The centermost circle is almost always empty, reserved for highly mature and consolidated markets that lack space for further innovation.
The GigaOm Radar offers a forward-looking assessment, plotting the current and projected position of each solution over a 12- to 18-month window. Arrows indicate travel based on strategy and pace of innovation, with vendors designated as Forward Movers, Fast Movers, or Outperformers based on their rate of progression.
Note that the Radar excludes vendor market share as a metric. The focus is on forward-looking analysis that emphasizes the value of innovation and differentiation over incumbent market position.
5. Vendor Insights
Dell Technologies offers two solutions for the large enterprise market: PowerMax and PowerFlex. PowerMax is a modular architecture based on bricks, in which each brick provides storage, compute, and cache capacity to the PowerMax system, allowing the solution to both scale up and scale out. PowerMax systems are NVMe-based, embed a persistent storage-class memory tier, and are designed to deliver high throughput and ultra-low latency to performance-oriented workloads. Organizations can choose between two PowerMax editions: the 2000 series and the 8000 series. Both models support NVMe-oF, Fibre Channel, and iSCSI protocols. PowerMax systems are powered by PowerMax OS, which also includes an embedded hypervisor. The various modules execute as services on top of PowerMax OS, delivering management, data services, and other capabilities.
Data services include advanced data reduction through global in-line deduplication and compression. Replication is one of the strongest capabilities of the PowerMax platform, thanks to the robust and long-established SRDF (Symmetrix Remote Data Facility) feature, which enables synchronous and asynchronous replication modes at scale. Notably, SRDF/A (asynchronous) supports VMware vVols integration when used with VMware Site Recovery Manager, enabling a full storage policy-based management experience. Other data services include end-to-end encryption with data reduction efficiencies, immutable and space-efficient SnapVX snapshots, embedded NAS, and PowerPath. PowerMax also offers the ability to create secure snapshots that can't be deleted manually until a user-specified expiration time. Finally, PowerMax embeds a real-time machine-learning engine that analyzes host I/O traffic and ensures optimal data placement on NVMe flash or storage-class memory according to each workload's I/O profile.
Cloud support is available through Cloud Mobility for Dell EMC PowerMax, a feature that runs as a virtual machine on PowerMax OS. Cloud Mobility allows seamless and transparent data movement between PowerMax systems and object storage, whether cloud-based (AWS, Microsoft Azure) or on-premises object stores such as EMC ECS and PowerScale. This easy data flow enables archiving and long-term data retention use cases, freeing up space on PowerMax systems and leveraging low-cost, cloud-based, object storage economics. Besides archiving, data sets present in the cloud or in on-premises object storage can be made available to other workloads, either through an AWS marketplace appliance or through a vSphere-based vApp (for ECS and PowerScale).
The system is managed through Unisphere for PowerMax, a dedicated management interface that supports multiple PowerMax systems and delivers a comprehensive overview of the managed systems, with various metrics around health, performance, capacity, and compliance. PowerMax also integrates with CloudIQ to benefit from more advanced monitoring and alerting capabilities around health checks and cybersecurity recommendations; performance impact, anomaly, and workload contention analysis; capacity forecasting; and more. Regarding API and automation support, organizations can either rely on CloudIQ (which provides unified webhook and REST API support across products) or use the PowerMax APIs directly.
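To illustrate what this kind of API-driven automation looks like in practice, here is a minimal Python sketch of a capacity-monitoring query. The endpoint path, hostname, and payload field names are hypothetical placeholders, not the actual Unisphere or CloudIQ schema; consult the vendor's REST API reference for the real one.

```python
import urllib.request

# Hypothetical base URL -- the real Unisphere for PowerMax REST
# schema differs; this only illustrates the automation pattern.
BASE_URL = "https://unisphere.example.com:8443/univmax/restapi"

def build_capacity_query(symmetrix_id: str) -> urllib.request.Request:
    """Build a GET request for a system's capacity metrics (not sent here)."""
    url = f"{BASE_URL}/system/symmetrix/{symmetrix_id}"
    req = urllib.request.Request(url, method="GET")
    req.add_header("Accept", "application/json")
    return req

def summarize_capacity(payload: dict) -> float:
    """Return the percentage of usable capacity consumed.

    The payload keys below are illustrative assumptions about
    the shape of a metrics response.
    """
    usable = payload["usable_capacity_tb"]
    used = payload["used_capacity_tb"]
    return round(100.0 * used / usable, 1)

# Example payload shape (illustrative only):
sample = {"usable_capacity_tb": 500.0, "used_capacity_tb": 342.5}
print(summarize_capacity(sample))  # 68.5
```

A script like this could feed capacity forecasts into an alerting pipeline, which is the kind of workflow the unified REST and webhook support is meant to enable.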
Kubernetes integration is available through vVol integration with VMware Tanzu or through Dell's Container Storage Modules (CSM). Organizations running PowerMax alongside VMware vSphere will appreciate the seamless integration capabilities between the two platforms. The other deployment model leverages CSM, a regularly updated, open-source suite of modules developed for Dell EMC products. CSM covers storage support (through CSI drivers) and other capabilities such as authorization, resiliency, observability, snapshots, and replication. Finally, the OpenShift and Docker platforms are also supported by CSM.
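Whichever CSI driver is deployed, Kubernetes workloads request array-backed storage the same way: through a PersistentVolumeClaim bound to a driver-provided StorageClass. The sketch below builds such a claim as a plain dict; the storage class name is a hypothetical example, since the actual class names are defined by the CSI driver installed on the cluster.

```python
def make_pvc(name: str, storage_class: str, size_gi: int) -> dict:
    """Build a Kubernetes PersistentVolumeClaim manifest as a plain dict.

    The storage_class value is supplied by whichever CSI driver is
    installed; the name used in the example below is hypothetical.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# A 100 Gi claim against a hypothetical block storage class:
pvc = make_pvc("db-data", "powermax-block", 100)
print(pvc["spec"]["resources"]["requests"]["storage"])  # 100Gi
```

Applied to a cluster (e.g., via kubectl), a claim like this triggers the CSI driver to provision a volume on the backing array and bind it to the requesting pod.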
In addition to PowerMax systems, Dell Technologies offers its Dell EMC PowerFlex software-defined infrastructure platform. Although the solution is software-defined, it comes pre-configured on Dell-provided PowerFlex appliances or in a fully integrated, rack-scale fabric that includes connectivity. PowerFlex is built for linear scalability, resiliency, and performance; Dell Technologies cites several use cases, such as enterprise databases and workloads; analytics, AI, and ML; and modern containerized applications. Compared to the monolithic approach of PowerMax, and leaving the rack-scale option aside, PowerFlex provides a different approach: it allows organizations to start small without compromising on performance, with the ability to scale massively.
Dell Technologies offers STaaS capabilities through its APEX offering, in which clients can order block or file services and define the performance tier, base capacity, subscription length, and deployment location (on-premises or at a Dell-provided colocation site). These criteria also determine which storage platform is deployed in the background, although this matters less to the client because storage is consumed as a service.
Strengths: With PowerMax, Dell Technologies continues to demonstrate its relevance thanks to a robust yet innovative architecture designed to offer the best reliability to mission-critical workloads. PowerFlex is a relevant alternative option for organizations that require performance but want to scale at their own pace.
Challenges: Compared to other players in the market, cloud-integration capabilities remain average.
Excelero offers a low-latency, distributed, primary storage system for web-scale applications. It provides an NVMe solution that can be deployed across multiple networks and supports both local and distributed file systems. Excelero started as an on-premises, software-defined storage solution and over the last year has morphed into a cloud storage solution as well. With an offering that provides storage both on-premises and in the cloud, the solution addresses customers with specific high-performance needs.
The Excelero NVMesh architecture supports both scale-up and scale-out. NVMesh provides balanced price/performance density, which will be improved further by the ELECT technology, combining ultra-low-latency drives with cost-effective QLC flash. A strength of Excelero is that customers can start with partially populated storage targets and gradually increase storage capacity over time. Customers also have the option to add more storage targets within the same namespace.
The NVMesh architecture allows for additional efficiency gains by running applications and the data path on the same nodes, a deployment pattern embraced by most customers. NVMesh is a scale-out solution with an efficient, single-hop data path, thanks to an architecture designed for efficiency from the outset.
Excelero’s NVMesh offers great performance and allows GPU-optimized servers to access scalable, high-performance NVMe flash storage pools as if they were local. This technique ensures efficient use of both the GPUs themselves and the associated NVMe flash. For the customer, it means good ROI, easier workflow management, and faster time to results.
NVMesh is deployable on multiple public clouds with extreme ease of use on Microsoft Azure, thanks to the Azure marketplace option and a convenient portal for launching and managing marketplace-based deployments.
Excelero NVMesh comes with a built-in RESTful API, a modern CLI, and a Python library, making it easy to use and integrate into modern large-scale dynamic environments.
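As a sketch of how such a REST API might be scripted, the snippet below serializes a volume-creation request body. The field names and the default protection level are illustrative assumptions for this example, not NVMesh's actual REST schema or Python bindings.

```python
import json

def build_volume_request(name: str, capacity_gb: int,
                         raid_level: str = "mirrored") -> str:
    """Serialize a hypothetical volume-creation request body.

    All field names here are illustrative; a real client would follow
    the vendor's documented schema instead.
    """
    if capacity_gb <= 0:
        raise ValueError("capacity must be positive")
    body = {
        "name": name,
        "capacityGB": capacity_gb,
        "raidLevel": raid_level,
    }
    return json.dumps(body, sort_keys=True)

# Request a 512 GB mirrored volume (body would be POSTed to the API):
print(build_volume_request("scratch01", 512))
```

Generating request bodies like this from inventory or CI/CD pipelines is the kind of large-scale dynamic integration the API and CLI are intended for.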
NVMesh is deeply integrated with Kubernetes, including CSI functionality, and is unusual in being a high-performance, container-native storage solution. The ability to run NVMesh as an operator while retaining RDMA capabilities makes it a great option for modern environments and their modes of operation.
NVMesh supports persistent, low-latency container storage for hyperscale architectures using Kubernetes. It makes use of pooled, redundant NVMe storage for container applications requiring persistent volumes, so enterprises can obtain both local-flash performance and container mobility at data center scale.
By leveraging Kubernetes with NVMesh, Excelero is enabling its customers to use containers that have high-performance storage with both persistence and mobility. NVMesh has been qualified as a Red Hat OpenShift operator and works seamlessly with Microsoft Azure Kubernetes Service.
Excelero NVMesh-on-Azure can be consumed as STaaS but the on-premises solution is not offered as such. Excelero is a solution that’s very well suited for specific customers that are in need of a high-performance and low-latency storage solution that can be consumed on-premises as well as in the cloud. At the moment, the cloud solution leans heavily on Microsoft Azure. While Azure offers worldwide coverage for customers, this prevents organizations relying on other public clouds from leveraging Excelero for their cloud-based workloads.
Strengths: Excelero offers a high-performance, distributed storage system architected around the NVMe protocol and targeting latency-sensitive workloads. The solution is deployable both on-premises and in the cloud and incorporates GPU optimizations. It is classified as a Red Hat Openshift Operator with a CSI driver.
Challenges: While compelling from a performance and architectural perspective, the solution shows gaps or limited support across several key criteria: AI-based analytics are absent, cloud integration is limited to Microsoft Azure, and Kubernetes and STaaS support remain limited. Despite an outstanding $/IOPS ratio, Excelero offers a much less appealing $/GB ratio, narrowing the solution's applicability to a limited number of use cases.
Hitachi Vantara has traditionally focused its products on medium and large organizations. Its systems use the same OS and expose the same feature set, enabling users to design their infrastructures with a consistent set of characteristics both at the core and at the edge. Two storage models are relevant for large enterprise businesses: the VSP 5200 and the VSP 5600.
The VSP 5000 series is a hardware appliance that can scale up and scale out easily, offering great value for large enterprise businesses. The VSP 5200 provides up to 23 PB of capacity and the VSP 5600 up to 69 PB, each available in all-flash or hybrid form. These systems scale well in both performance and capacity, allow non-disruptive upgrades to the next generation of storage, and can be purchased in all-NVMe, hybrid, or all-flash configurations. Data services include adaptive data reduction, storage virtualization, and in-system replication, as well as copy data management and non-disruptive migration capabilities. From a connectivity perspective, the VSP 5000 series supports NVMe-oF, Fibre Channel, and iSCSI interfaces.
From a management perspective, Hitachi Vantara offers the Hitachi Ops Center suite, an ML-powered management platform aimed at simplifying and improving operations across the entire storage stack. The solution consists of several highly integrated components: Hitachi Ops Center Clear Sight provides cloud-based monitoring capabilities; Hitachi Ops Center Analyzer provides real-time observability and anomaly detection; and Hitachi Remote Ops handles the resolution of infrastructure issues, with up to 90% of problems resolved automatically. Additional components such as Administrator and Automator provide configuration and automation capabilities. Through the Ops Center suite, organizations can take advantage of various automation and API integration capabilities between their Hitachi Vantara storage and their automation platforms.
Hitachi is still a bit behind on cloud integration and Kubernetes integration with its products, but recent announcements show the company is working to close the gap. Support and integrations have been announced with Anthos and Red Hat OpenShift. Hitachi's CSI driver is updated regularly but still lags behind, supporting Kubernetes only up to version 1.20 at the time of writing; OpenShift 4.7 is also supported.
Hitachi Vantara offers an interesting STaaS solution that provides pay-as-you-go, flexible consumption with guaranteed SLAs/SLOs, a fixed rate card, and integrated analytics. The solution supports scaling up and scaling down, with transparent pricing on a $/GB/month basis, and includes five storage service classes with data availability of either 99.995% or 100%. Finally, Hitachi claims it can deploy the STaaS infrastructure as early as 60 days after contract signing.
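The economics of such a rate card are straightforward to model. The sketch below computes a monthly bill under an assumed $/GB/month rate with a committed base capacity; the rate, capacities, and the max(consumed, base) billing rule are illustrative assumptions, not Hitachi's actual contract terms.

```python
def monthly_charge(consumed_gb: float, rate_per_gb: float,
                   base_gb: float = 0.0) -> float:
    """Compute a pay-as-you-go monthly bill under a $/GB/month rate card.

    Billing is assumed to cover max(consumed, committed base) capacity;
    actual STaaS contract terms vary by vendor and service class.
    """
    billable = max(consumed_gb, base_gb)
    return round(billable * rate_per_gb, 2)

# 80 TB consumed against a 50 TB base commitment at an assumed $0.03/GB/month:
print(monthly_charge(80_000, 0.03, base_gb=50_000))  # 2400.0
```

Modeling bills this way makes it easy to compare a STaaS rate card against the amortized CapEx of an equivalent purchased array over the contract term.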
Also worth mentioning, Hitachi offers a software-defined storage solution branded Virtual Storage Software Block (VSS Block). The company offers VSS Block-ready nodes that allow organizations to quickly scale their software-defined storage system, which also enables the data plane to be extended from the VSP solutions described above. VSS Block runs virtualized and integrates with an organization's core storage platform and existing hypervisor.
Strengths: Hitachi Vantara continues to provide proven storage solutions built upon a robust architecture, and recently the company has introduced a promising STaaS offering that allows its customers to consume infrastructure in a more flexible manner.
Challenges: Although the systems support NVMe flash and NVMe-oF, NVMe/TCP is not supported, a potential limitation for organizations heavily invested in TCP-based Ethernet infrastructure.
HPE announced a new portfolio of primary storage solutions in 2021. In the large enterprise segment, HPE introduced the Alletra 9000 platform, an all-NVMe array built upon the foundations of the HPE Primera architecture. The solution aims to satisfy the requirements of mission-critical workloads with ultra-low latency and high IOPS, while also offering a no-questions-asked 100% availability guarantee. Regarding system availability, HPE leverages InfoSight, its AI- and ML-backed infrastructure management and AIOps platform, to predict and prevent service disruptions, enhancing its management and analytics capabilities.
The Alletra 9000 is currently available in two models, the 9060 and the 9080, with the latter supporting more cache per node and an increased system cache ratio (3x more on the 9080 than on the 9060). As an all-NVMe architecture, the Alletra 9000 supports NVMe-oF (over Fibre Channel). The solution can be extended with Alletra 2240 storage enclosures, which communicate with the Alletra 9000 through NVMe-oF (RoCE v2) connectivity. The Alletra 9000 also introduces non-disruptive controller upgrades, a new capability in HPE's storage portfolio.
HPE is moving away from traditional storage management approaches. Besides InfoSight, organizations can take advantage of HPE Data Services Cloud Console. This SaaS-based, intent-based provisioning solution enables a cloud-like experience that combines policy-based storage management and a self-service approach to workload provisioning with AI-driven workload placement. Data Services Cloud Console provides a rich and unified set of REST APIs across HPE products, allows workload movement to and from the cloud, and supports advanced security capabilities. HPE also supports cloud storage through HPE Cloud Volumes, a cloud-based platform that allows organizations to provision block volumes on either AWS or Azure. Cloud Volumes also offers a backup capability, but it doesn’t support immutable snapshots yet. Finally, HPE also has a good roadmap regarding Kubernetes integration with dedicated CSI drivers for Alletra systems.
The Alletra 9000 platform can be deployed and consumed through GreenLake, HPE's STaaS solution, which HPE touts as its primary go-to-market infrastructure delivery model. HPE GreenLake appeals to customers with subscription options that range from traditional purchasing models to models that simplify the transition from capital expenditure (CapEx) to operating expenditure (OpEx) spending.
Strengths: HPE occupies an interesting position with a solid all-NVMe platform, but undoubtedly most of the value HPE can deliver in the large enterprise segment comes from its heavy investments in data services platforms such as InfoSight and Data Services Cloud Console. HPE GreenLake is also a strong differentiator, providing organizations with the option to consume every HPE offering as a service. These three elements give a clear picture of where HPE is heading: delivery of infrastructure and services through a cloud-like model.
Challenges: HPE's strategy of acting as a trusted, cloud-like provider through GreenLake services unnecessarily obfuscates HPE's individual offerings, creating confusion for potential customers and making it difficult to evaluate each solution's capabilities on its own merits. HPE Cloud Volumes capabilities remain limited.
IBM offers a comprehensive storage portfolio including all-flash and hybrid solutions. Primary storage solutions for large enterprises are covered by the IBM FlashSystem 9200 Series with two deployment models: the FlashSystem 9200 and the 9200R.
The FlashSystem 9200 is an NVMe all-flash array that supports either 2.5-inch NVMe FlashCore Modules from IBM (with higher densities and self-compression) or industry standard 2.5-inch NVMe flash drives. The system also supports storage-class memory and can be expanded with 24-drive or 92-drive enclosures, scaling to up to 760 SAS drives in expansion enclosures for each control enclosure. The FlashSystem 9200R is a validated, full-rack design based on the FlashSystem 9200. It will suit the most demanding organizations that require capacity-dense deployments without sacrificing performance and throughput.
From a connectivity perspective, the FlashSystem 9200 architecture supports iSCSI (iSER – iWARP and RoCE) as well as Fibre Channel and NVMe-oF.
The FlashSystem 9200 is based on IBM Spectrum Virtualize, a storage operating system now common to entry-level, mid-range, and high-end IBM storage systems. Supported features include automated tiering as well as other resource optimization techniques that improve capacity consumption and $/GB, such as compression, deduplication, unmap, and automated thin provisioning. Several replication capabilities are available, including FlashCopy, Metro Mirror (synchronous replication), Global Mirror (asynchronous replication), 3-site replication, and Global Mirror with change volumes. Additionally, a high-availability solution called HyperSwap can be implemented.
The IBM Storage Insights predictive analytics suite can monitor both IBM and several third-party systems, helping to establish a complete view of the storage infrastructure from a single interface, and automation can be achieved by taking advantage of the IBM Spectrum Virtualize REST APIs. These APIs are common to all IBM systems based on Spectrum Virtualize, allowing organizations that are leveraging multiple IBM storage products based on Spectrum Virtualize to baseline their automation functions.
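As a rough illustration of what baselining automation against a common REST surface looks like, the sketch below assembles a volume-provisioning request in Python. The endpoint path and field names are hypothetical placeholders for illustration only, not the actual Spectrum Virtualize REST API; no request is actually sent.

```python
import json
from urllib.parse import urljoin

def build_volume_request(base_url, pool, name, size_gib, thin=True):
    """Assemble the URL and JSON body for a hypothetical volume-create call."""
    url = urljoin(base_url, "rest/v1/volumes")  # placeholder path, not the real API
    body = {
        "name": name,
        "pool": pool,
        "size_bytes": size_gib * 1024**3,  # GiB -> bytes
        "thin_provisioned": thin,
    }
    return url, json.dumps(body)

# Because the API is common across systems, the same helper could target
# any array in the fleet -- which is the point of a unified REST surface.
url, body = build_volume_request("https://array.example.com/", "pool0", "db-vol-01", 512)
```

The same pattern extends naturally to snapshot, replication, and host-mapping calls, keeping one automation codebase across every Spectrum Virtualize-based system.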
Integration with the cloud is achieved through cloud tiering features embedded in the systems and through virtual instances of Spectrum Virtualize deployed in the public cloud, providing a consistent user experience and set of features across different environments.
Kubernetes clusters can provision block storage dynamically through IBM’s block storage CSI driver, but functions remain limited.
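Mechanically, dynamic provisioning means a workload submits a PersistentVolumeClaim that references a StorageClass backed by the CSI driver, which then carves the volume out of the array. The sketch below builds such a claim as a Python dict; the storage class name is a hypothetical example, not an IBM-documented class.

```python
def block_pvc(name, size_gi, storage_class="ibm-block-gold"):
    """Build a PersistentVolumeClaim manifest requesting a raw block volume.

    The storage_class default is a made-up example; a real cluster would
    use whatever class the CSI driver's administrator has defined.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "volumeMode": "Block",  # raw block device rather than a filesystem
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = block_pvc("pg-data", 100)
```

Submitted to a cluster (e.g., via `kubectl apply`), a claim like this would trigger the CSI driver to provision and attach the backing volume automatically.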
IBM offers an STaaS solution branded IBM Storage as a Service, which allows organizations to consume capacity on demand. For primary storage, the solution is branded IBM Block Storage as a Service and is based on the FlashSystem storage. Organizations select a base capacity and performance tier, then IBM delivers the required hardware with an additional 50% capacity to cover growth or burst consumption. When a 75% usage threshold is reached, additional capacity is delivered and installed automatically to shorten procurement cycles. Data resiliency options can be added through IBM FlashSystem Safeguarded Copy to create immutable data copies in the cloud.
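The capacity mechanics described above reduce to simple arithmetic: installed capacity is the subscribed base plus a 50% buffer, and the next delivery is triggered once usage reaches 75% of installed capacity. A back-of-the-envelope sketch:

```python
def staas_state(base_tib, used_tib):
    """Model installed capacity and the expansion trigger for the service."""
    installed = base_tib * 1.5              # base plus the 50% growth/burst buffer
    expand = used_tib >= 0.75 * installed   # threshold for the next hardware delivery
    return installed, expand

# 100 TiB subscribed -> 150 TiB installed; expansion triggers at 112.5 TiB used.
print(staas_state(100, 80))    # (150.0, False)
print(staas_state(100, 120))   # (150.0, True)
```

In effect, a customer subscribing to 100 TiB never sees less than roughly 37 TiB of free headroom before more hardware is already on its way.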
Strengths: Besides being based on a robust and proven architecture, one of the highlights of the FlashSystem series is the AI-based IBM Storage Insights platform that provides predictive analytics and proactive support capabilities. Besides compelling data services, the solution is also well architected to be deployed at scale, thanks to the IBM FlashSystem 9200R validated rack designs.
Challenges: Although the IBM FlashSystem mid-range platform offers a respectable set of capabilities, innovation and core differentiators remain limited. Among storage contenders of IBM's size, IBM is the only vendor whose solution cannot deliver file services on top of primary block storage.
Infinidat boasts a modern, AI-based, hybrid storage architecture that delivers a no-compromise feature set with compelling $/GB and $/IOPS figures. To achieve this goal, InfiniBox takes advantage of a data path designed around a combination of DRAM, flash memory, and hard disk drives, associated with sophisticated AI-based caching technology to optimize data placement. These characteristics enable Infinidat customers to consolidate more workloads and more data per storage system, reducing the overall TCO of the infrastructure. With the announcement of InfiniBox SSA, Infinidat now offers an all-flash solution to its customers as well.
Infinidat also has developed an easy-to-use, AI-driven management system to support and simplify day-to-day operations and provide proactive support. InfiniBox now supports NVMe/TCP, with plans to support additional NVMe-oF transports in future product versions. Infinidat systems currently do not support newer media types such as QLC 3D NAND or storage-class memory; Infinidat claims those media types must be thoroughly tested and currently do not bring significant advantages to its platform. However, Infinidat's software-defined storage technology is media independent and can support other media types in the future.
Infinidat offers InfiniVerse, a cloud-based storage AIOps solution that helps the customer to manage and secure data on the Infinidat storage systems. It provides protection against ransomware and can be leveraged to back up the customer’s environment. Infinidat solutions also integrate with data center AIOps solutions such as Splunk, Dynatrace, ServiceNow, and many more.
To meet enterprise needs for modern data protection, backup, disaster recovery (DR), and business continuity, Infinidat offers its InfiniGuard solution. InfiniGuard dramatically accelerates data protection software and works with the major backup software vendors, such as Commvault, IBM, Veritas, Veeam, and others. Additionally, InfiniGuard's CyberRecovery solution (announced April 2021) provides immutable snapshot copies of source data sets that incorporate logical air-gapping, both local and remote. In the event of a cyberattack, customers can move the copies into a secure, fenced network to check for malware or ransomware. Once a known good copy of the data set is identified, they can recover it near-instantaneously.
Support for cloud-native applications is good because of a clever implementation of the CSI plug-in for Kubernetes, which allows users to copy and migrate data to remote systems or the public cloud for backup, DR, or development activities. Infinidat has a complete offering of integrations for major OS and virtualization platforms.
Infinidat offers various consumption options; STaaS is currently offered through InfiniVerse, as announced in June 2021.
Strengths: High-end enterprise characteristics and a balanced AI-based architecture that enables users to consolidate a wide range of workloads in a single system and deliver a consistent performance experience.
Challenges: Infinidat now supports NVMe/TCP but does not yet support other NVMe-oF transports, although it has committed to supporting both NVMe/FC and NVMe/RoCE in the future. The entry-level configuration is not well suited to small enterprise needs.
NetApp provides a long-standing line of storage solutions built on top of ONTAP, the operating system that jump-started the company. With ONTAP as the foundation on which all NetApp storage services for large enterprise customers are built, NetApp provides primary storage in many different forms while maintaining a consistent OS. Extending these offerings into the cloud with services on Azure, AWS, and Google Cloud makes it easy for customers to extend their storage off-premises as well.
NetApp’s flagship ONTAP operating system seamlessly embraces traditional, SDS, and cloud paradigms simultaneously. It provides customers the flexibility to deploy ONTAP for different workloads on the type of architecture that best fits that particular requirement: on-premises on optimized AFF and FAS appliances, as software-defined on commodity hardware with ONTAP Select software, and in any major cloud as either a self-managed or a fully-managed offering. Still, a customer will have a unified view across all data and assets, consistent policy application, and simplicity of management.
All entry, mid-range, and high-end systems, including NVMe-based and hybrid models, can count on a series of high-level integrations common to all storage systems (such as SnapMirror for data replication), as well as a unified platform for monitoring and analytics (Active IQ). NetApp's AFF A-series products support end-to-end NVMe, meaning both the back-end NVMe SSDs and front-end NVMe-oF connectivity to the host. NetApp provides both NVMe/FC and NVMe/TCP support, and the solutions help customers modernize their infrastructure with higher performance, lower latency, and simplicity of deployment.
NetApp uses AIOps to drive down administration costs for its customers through Active IQ, a digital advisor that simplifies the proactive care and optimization of NetApp storage. It uncovers opportunities to improve the overall health of the storage environment and provides prescriptive guidance and automated actions to make it happen.
With the FAS500f, NetApp introduced a balance of capacity and performance leveraging QLC NAND technology, supporting a wide variety of workloads: media and entertainment, medical research and imaging, and large-scale analytics.
NetApp’s focus is on cloud-led, data-centric software development initiatives that are designed to help its customers unlock the best of cloud, whether it’s private, public, hybrid, or multi-cloud. NetApp AFF and the Cloud Volumes family (including Cloud Volumes ONTAP, Cloud Volumes Service, Azure NetApp Files, and AWS FSx for NetApp ONTAP) make up the unified ONTAP platform that uses common management, tools, and utilities across environments, and can run primary as well as secondary workloads to meet the customer’s needs. The primary storage and SDS offerings share common driving factors for these development initiatives.
NetApp has been developing and publishing certified Ansible modules for ONTAP storage management since Ansible launched its certification program. These modules can be integrated with any other modules Ansible distributes on Galaxy. For the Cloud Volumes implementations, NetApp provides application template functionality that allows self-service provisioning of policy-based configurations (volumes, services, protocols, permissions, and so on) for line-of-business and DevOps groups needing storage resources.
NetApp has been a pioneer in enabling the use of enterprise-class persistent storage for stateful containerized apps, as evidenced by the success of its open source, CSI-compliant Trident dynamic storage orchestrator, which allows ONTAP to provide persistent storage for Kubernetes workloads, as well as by its ROSA compliance certification. On top of that, NetApp's Astra Control Service enables advanced data protection, DR, portability, and migration for Kubernetes workloads, using the Cloud Volumes platform as a storage provider within and across public clouds and for ONTAP on-premises.
NetApp Keystone Flex Subscription delivers Storage-as-a-Service at one of multiple prescribed service levels for unified file and block, block only, and object storage. Each service level has defined IOPS and latency performance SLOs/SLAs. In addition to subscribing to performance service levels, the customer subscribes to a capacity commitment, with the ability to burst up to 20% on-demand. NetApp guarantees 99.999% uptime. Storage efficiencies are integrated into the service offering and are reflected in a lower service price per capacity than would be charged if no efficiencies are used. The monthly or yearly service fee covers all aspects of the offering including service deployment; ongoing service health and operational management; and any required software and hardware upgrades. The minimum term is one year and the minimum capacity is 15 TiB per site.
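The committed-plus-burst model above can be sketched in a few lines; the only figures taken from the description are the 20% burst ceiling and the split between committed and on-demand capacity, while the per-TiB rates are invented numbers for illustration, not NetApp pricing.

```python
def monthly_charge(committed_tib, used_tib, committed_rate, burst_rate):
    """Bill committed capacity at the committed rate and any overage,
    up to a 20% burst ceiling, at a separate on-demand rate.
    Rates are hypothetical illustration values, not vendor pricing."""
    ceiling = committed_tib * 1.2                 # 20% on-demand burst cap
    if used_tib > ceiling:
        raise ValueError("usage exceeds the burst ceiling")
    burst = max(0.0, used_tib - committed_tib)    # on-demand portion
    return committed_tib * committed_rate + burst * burst_rate

# 50 TiB committed, 55 TiB used: the 5 TiB overage is billed at the burst rate.
cost = monthly_charge(50, 55, committed_rate=20.0, burst_rate=30.0)
```

The design incentive is clear: size the commitment close to steady-state usage and absorb seasonal spikes through the burst allowance rather than over-committing upfront.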
Strengths: NetApp’s vision around data goes well beyond traditional storage systems and provides customers the needed solutions, such as hybrid cloud, data, and applications management.
Challenges: Even though NetApp has a strong vision for the large enterprise, especially around Kubernetes and multi-cloud, there is still room for improvement in areas such as data management.
With its HyperParallel Data Platform, Pavilion Data proposes a primary storage solution oriented toward business-critical workloads, addressing performance, latency, throughput, and scalability requirements. The hardware appliance is populated with NVMe flash drives and multiple controllers and can scale up to 72 drives and 20 controllers within a single chassis. It can also scale out across multiple systems while maintaining linear performance. Support for new media types such as storage-class memory, while technically possible, is not present in Pavilion Data's platform because the company claims it would bring only marginal improvements to latency.
The solution provides block, file, and object capabilities that can scale out, with a parallel file system for block data, a global namespace for file data, and a global namespace for object data. The HyperParallel Data Platform was architected purposely for NVMe and supports end-to-end NVMe connectivity with NVMe-oF and NVMe/TCP transport protocols over both Ethernet and InfiniBand.
The HyperParallel Data Platform supports block-storage data services such as enhanced storage QoS capabilities tunable at the volume level; snapshots and clones; thin provisioning; encryption; and online volume expansion. File data services include support for single or multiple namespaces with flexible client connectivity, including traditional NFS v3/v4 and NFS over RDMA. Client plug-ins deliver file access to Gluster, Hadoop, and Spark; tiering and replication, as well as snapshots, clones, and encryption, are supported for file services. Object data services broadly mirror the capabilities highlighted previously, with support for snapshots and clones, encryption, and replication. Objects can also be accessed as files, and object storage supports tiering, cloud integration (native tiering to cloud object storage), and WORM features.
Block, file, and object services can be delivered granularly either by dedicating an entire chassis or a portion of a chassis to a specific service, or by clustering multiple chassis, thus providing outstanding flexibility.
Pavilion Data provides a comprehensive management platform; however, AI-based analytics are currently not available. Organizations can automate operations on the HyperParallel Data Platform through the use of REST APIs. The solution also supports SNIA Redfish and Swordfish specifications. Finally, the platform can be used to provision storage for Kubernetes clusters through a CSI plugin.
There is no STaaS consumption model envisioned currently, and the solution can be consumed either through a traditional CapEx model or on a consumption basis.
Strengths: Pavilion Data provides an ideal high-performance primary storage solution for organizations that seek block, file, and object storage capabilities to accelerate their mission-critical workloads. The solution combines outstanding parallel I/O with excellent scalability and connectivity options.
Challenges: Several gaps need to be addressed by Pavilion Data to fully meet large enterprise requirements. Providing a comprehensive management and analytics platform based on AI and offering self-service capabilities would help elevate the solution. Cloud integration remains very basic, with only object-storage replication currently available.
Pure Storage has architected its solutions around all-flash technology. The FlashArray product line serves primary storage use cases with four products built around the same operating system: the FlashArray //X, ideal for business-critical applications and performance-oriented workloads; the FlashArray //C, which targets capacity-oriented workloads with an optimized $/GB price; the brand-new FlashArray //XL, purpose-built to meet the needs of large enterprises; and last but not least, Cloud Block Store, which brings primary storage and enterprise data services to the cloud.
In addition to its flagship, top-of-the-line FlashArray //X90 system, and beyond existing //C60 capacity-oriented systems, Pure Storage introduces its //XL series with two models: the //XL130 and the //XL170. The //XL systems are denser, packing up to 40 drives in a 5U package versus 20 drives in a 3U package for //X systems. FlashArray //XL can be expanded with up to two DirectFlash shelves for a total of 96 additional drives. Besides significantly increasing capacity and workload density, these new systems also dramatically reduce the amount of rack space required. Finally, the //XL backplane was redesigned to support greater throughput, provide more I/O capability and stronger resiliency, and accommodate future expansion.
The FlashArray product line puts non-disruptive activities at the heart of its architecture: all of the major activities such as capacity expansions, controller upgrades, hardware replacement, and software upgrades can be performed without incurring downtime or service interruption. This seamlessness is made possible by a highly available architecture in which all modules are hot-swappable, controllers are stateless, and all components are configured either in mirrored mode or in active-active HA configuration.
A particularity of Pure Storage systems is that they embed flash memory in a proprietary form factor (DirectFlash NVMe modules) that contributes raw flash capacity to the system. The solution coordinates data placement optimization and erasure operations across all DirectFlash modules, thereby eliminating the overhead typically associated with a per-drive Flash Translation Layer (FTL). The result is better data placement decisions, increased utilization compared to flash media in traditional form factors, and significantly improved media endurance, even with QLC flash. The //XL model introduces distributed NVRAM, which allows bandwidth and capacity to scale with the number of modules, lifting the limit on write throughput.
All FlashArray models use the same Purity operating system. FlashArray//C, //X, and //XL systems are unified file and block storage systems that benefit from a common set of data services. Among these, data efficiency mechanisms such as always-on inline deduplication, compression, and pattern removal can significantly improve the raw capacities given above. In addition to these, deep reduction algorithms can be applied to data at rest to further improve the data consolidation ratios provided by in-line deduplication.
Other data services include snapshots/clones (including SafeMode read-only snapshots) and advanced data replication capabilities: backups can be made to NFS targets or in the cloud to AWS S3 and Microsoft Azure Blob targets with Purity CloudSnap. Purity also supports DR and active-active metro clusters with solutions such as ActiveDR (near-zero RPO DR with test/real failover, resync, and failback) or ActiveCluster (synchronous replication, symmetric active-active clustering), to cover these briefly; VMware SRM integration is supported as well. Data can be replicated to other Pure Storage systems such as FlashBlade arrays or Pure Storage Cloud Block Store for AWS, an additional cloud-integration capability of the solution besides CloudSnap.
From a connectivity and protocol support perspective, the full FlashArray product line, which includes the //X, //XL, and //C systems, leverages Fibre Channel, NVMe-oF (RoCE and FC), SMB, and NFS.
Advanced management is provided by Pure1, a management platform common to all Pure Storage solutions, which combines AI-based analytics with AIOps and self-driving storage capabilities. Besides proactive monitoring and reporting of issues, Pure1 includes AI-driven recommendation capabilities to simulate the impact of net-new workloads to an existing environment, and the ability to estimate storage costs for Pure-as-a-Service. Pure1 also can be used to assess whether SafeMode snapshots are enabled across all Pure storage arrays.
Pure1 also provides a unified set of REST APIs as well as a digital marketplace where organizations can consume Pure Storage products and services directly, including STaaS with the Pure-as-a-Service solution.
Pure as-a-Service is a subscription service for hybrid-cloud storage through which organizations can consume foundational block, file, and object storage services on a pay-as-you-go basis. These services can be consumed on-premises in an organization's private data center, in edge/hosted colocation facilities, and/or in the public cloud with Cloud Block Store. There is no hardware to purchase and no large upfront storage capacity commitment. Instead, customers can reserve as little as 50 TiB of storage, committed at a discounted rate, with access to unlimited on-demand consumption thereafter. Pure as-a-Service keeps storage infrastructure fresh with an evergreen architecture that scales and stays modern non-disruptively.
With several services, performance tiers, and use cases offered, Pure as-a-Service maintains 25% headroom above actual customer usage, making sure that there’s always elastic and available capacity when needed. There is no cost to the customer if this headroom is not used. Pure-as-a-Service subscriptions are managed by Pure1, so the solution reaps all of Pure1’s capabilities presented earlier.
Kubernetes support is yet another highlight area for Pure Storage, thanks to the deep integration of Portworx into the FlashArray product line. It is available through a FlashArray-specific version of Portworx Essentials for which the node count limit of the Essentials version has been lifted. Organizations can start their cloud-native journey with Portworx Essentials directly on top of FlashArray without having to plan for additional investments, and they can upgrade seamlessly afterward to Portworx Enterprise as they advance through their journey and need to scale Kubernetes services.
Strengths: The newest FlashArray //XL systems allow Pure Storage to compete head-to-head with rack-scale solutions, providing equivalent usable capacity and performance through an efficient and compact form factor. Pure Storage also offers a comprehensive set of advanced data services including AI-based analytics, AIOps, and Kubernetes support, as well as a compelling STaaS offering.
Challenges: Putting cloud snapshot features aside, cloud integration capabilities are limited, with only Cloud Block Store available in the portfolio.
Zadara's primary storage offering, zStorage, is part of Zadara Edge Cloud Services, a solution focused on partners such as regional cloud providers and MSPs (currently 300+ providers on six continents); enterprise customers can choose to deploy Zadara's solution on-premises as well.
The Zadara Edge Cloud Services architecture consists of a full infrastructure stack providing compute, networking, and storage. zStorage is the storage layer of the solution and consists of one or more Virtual Private Storage Arrays (VPSAs) that can be deployed on NVMe SSD, SSD, hybrid, and HDD media types. A VPSA is able to serve block (iSCSI, FC, iSER), file (SMB, NFS), and object (S3, Swift) storage services. These services can in turn be consumed on-premises, across clouds, or under a hybrid model. Various VPSAs can be created, each with its own engine type (which dictates performance) and its own dedicated set of drives, including spares, providing a strong multi-tenant solution. Currently, there is no support for NVMe-oF or NVMe/TCP, although the way the solution is deployed and provided to customers (primarily through MSPs) greatly reduces the need to care about connectivity protocols.
The solution offers thinly provisioned snapshots as well as cloning capabilities, which can be local or remote. The snapshot-based, asynchronous remote mirroring feature makes replication possible to a different pool within the same VPSA, to a different local or remote VPSA, or even to a different cloud provider. The replicated data is encrypted and compressed before being transferred to the destination. The solution also allows for many-to-many relationships, which enables cross-VPSA replication in active-active scenarios. Cloning capabilities are also available remotely and can be used for rapid migration of volumes between VPSAs because the data can be made available instantly (although dependency on the source data remains until all of the data has been copied in the background).
As the solution is cloud-based already, cloud integrations consist of native backup and restore capabilities that leverage object storage integration with AWS S3, Google Cloud Storage, Zadara VPSA Object Storage, and other S3-compatible object stores. Object storage also can be used by Zadara for audit and data retention purposes. Zadara supports AWS Direct Connect as well as Azure ExpressRoute, both of which allow a single volume to be made available to workloads residing in multiple public clouds, enabling the use of a single dataset across multiple locations or clouds. Auto-tiering is supported on flash deployments; hot data is identified by the system and promoted to the flash/high-performance tier, while less frequently accessed data is moved to lower-cost hard disks or S3-compatible object storage.
Although Zadara implements detailed analytics and visualization capabilities combined with proactive support and integration with ITSM/ticketing systems, those capabilities are not yet augmented by AI/ML. Another aspect of the solution relates to API integration: all of the VPSA management functions are available via RESTful APIs, enabling automation of provisioning activities. In addition, a Python library covering the same VPSA management functions is available to automation developers.
Kubernetes integration is possible through Zadara’s CSI driver and a Kubernetes operator, both of which can deliver block and file storage services to containerized workloads.
The solution is provided as a fully integrated infrastructure stack delivered under a SaaS consumption model, which constitutes the essence of Zadara’s business model. An organization consuming storage services through Zadara, therefore, gets all of the flexibility benefits of STaaS.
Strengths: Zadara combines a simple and straightforward consumption model with a rich ecosystem of storage services and capabilities, complemented by multiple cloud integrations. The solution’s deployment model as a full-stack SaaS offering eliminates the complexity and overhead associated with deployment, management, and CapEx costs.
Challenges: Even if large enterprises consume Zadara as a SaaS solution (therefore not managing the back-end infrastructure), the absence of AI-based management/analytics is a gap area compared to general market trends.
6. Analyst’s Take
The primary storage market remains a very mature space. Large enterprises still consider architecture and reliability to be primary decision factors, and many solutions present in this radar owe their success to their robust architecture and ability to support mission-critical applications reliably.
For this reason, organizations are now seeking added value from adjacent capabilities such as cloud integration: with more data moving to the cloud, and with new application deployment models, solutions that integrate hybrid-cloud options are gaining more attention.
Among other key differentiators, large enterprises are seeking comprehensive management capabilities that take advantage of AI and machine learning. Those solutions should not only provide predictive analytics capabilities and proactive remediation, but storage should be self-driven to increase the storage capacity manageable by a single administrator. Organizations also are seeking to replicate the cloud experience with self-service capabilities and policy-based data placement; automation and API integrations play a key role in helping them deliver a seamless experience to their user base.
Another emerging trend is STaaS. Even if overall STaaS is still nascent, some of the vendors have built very compelling offerings that have the potential to transform the way storage will be consumed. Large organizations consider STaaS with great interest. It delivers cloud-based, flexible consumption options, and offloads the burden of management to the vendors. STaaS was built with large organizations in mind, so a broad majority of the vendors present in this radar offer this consumption model. Some are more advanced along their journey, while others are going through a transition away from their traditional CapEx sales model and are still adapting to this tectonic shift.
7. About Enrico Signoretti
Enrico Signoretti has more than 25 years in technical product strategy and management roles. He has advised mid-market and large enterprises across numerous industries, and worked with a range of software companies from small ISVs to global providers.
Enrico is an internationally renowned expert on data storage, as well as a visionary, author, blogger, and speaker on the topic. He has tracked the evolution of the storage industry for years as a GigaOm Research analyst, an independent analyst, and a contributor to The Register.
8. About Max Mortillaro
Max Mortillaro is an independent industry analyst with a focus on storage, multi-cloud & hybrid cloud, data management, and data protection.
Max carries over 20 years of experience in the IT industry, having worked for organizations across various verticals such as the French Ministry of Foreign Affairs, HSBC, Dimension Data, and Novartis to cite the most prominent ones. Max remains a technology practitioner at heart and currently provides technological advice and management support, driving the qualification and release to production of new IT infrastructure initiatives in the heavily regulated pharmaceutical sector.
Besides publishing content and research on the TECHunplugged.io blog, Gestalt IT, Amazic World, and other outlets, Max regularly participates in podcasts and discussion panels. He is a long-time Tech Field Day alumnus, a former VMUG leader, and an active member of the IT infrastructure community. He has also been running his own technology blog, kamshin.com, since 2008, where his passion for content creation started.
Max is an advocate for online security, privacy, encryption, and digital rights. When not working on projects or creating content, Max loves to spend time with his wife and two sons, either busy cooking delicious meals or trekking/mountain biking.
9. About Arjan Timmerman
Arjan Timmerman is an independent industry analyst and consultant focused on helping enterprises on their road to the cloud (multicloud, hybrid, and on-premises), as well as on data management, storage, data protection, networking, and security. He has over 23 years of experience in the IT industry and has worked for organizations across various verticals, such as the Shared Service Center for the Dutch Government, ASML, NXP, Euroclear, and the European Patent Office, to name a few.
Drawing on his background as an engineer, Arjan provides both technical and business architectural insight and management advice, creating high-level and low-level architecture designs and documentation. As a blogger and analyst at the TECHunplugged.io blog, Gestalt IT, Amazic World, and other outlets, he also participates from time to time in podcasts, discussion panels, webinars, and videos. A Tech Field Day alumnus since Storage Field Day 1, Arjan is a former NLVMUG leader and an active member of multiple communities, including Tech Field Day and vExpert.
Arjan is a tech geek and, more importantly, loves to spend time with his wife Willy, his daughters Rhodé and Loïs, and his son Thomas, sharing precious memories on this amazing planet.
10. About GigaOm
GigaOm provides technical, operational, and business advice for IT's strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm's advice empowers enterprises to compete successfully in an increasingly complicated business environment that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.