
Key Criteria for Evaluating Managed Kubernetes Solutions v2.0

An Evaluation Guide for Technology Decision Makers

1. Summary

Development in the enterprise has been shifting toward microservices-based applications for some time. Teams have spent that time developing and testing these applications at smaller scales for proof-of-concept deployments, and business-critical applications are now ready for production. From an infrastructure perspective, we have advanced from the initial curiosity and learning phases into small-scale training laboratories or non-critical production deployments. Interest is growing in solutions that can bridge the gap between user expectations and the operational reality of Kubernetes in action.

Kubernetes remains a complex platform that receives frequent updates and new features. For IT organizations accustomed to the ease of use and stability of technologies such as virtualization, this level of flux is a concern. To operate within their existing high standards of availability and security, organizations must keep up to date with the latest Kubernetes version and surrounding ecosystem projects, ensuring that security patches, API changes, and performance improvements are adopted as soon as possible. The reality is that most organizations are not geared up to work at this pace; managing existing workload demands while simultaneously learning new skills is a challenge. Managing an ever-evolving platform like Kubernetes is a demanding task, and the operational complexities of such a platform can create reliability issues if not addressed correctly.

IT organizations favor containers because they enable true application portability, and Kubernetes is the right platform to manage container-based applications correctly, at scale. Kubernetes realizes the possibility of true hybrid cloud deployments, enabling organizations to build on existing data center solutions and expand into cloud environments seamlessly.

The easiest way to get all the advantages of Kubernetes with none of its complexity is to choose the right managed Kubernetes service. There are plenty of options on the market at the moment and, although rooted in the same code base, they differ considerably in technical features, consumption models, and support.

Fortunately, Kubernetes didn’t see the proliferation of different distributions or many competing projects that occurred early on with the Linux OS. The core of Kubernetes is the same on all platforms, and they share the same commands, structure, and way of operating, making seamless application portability a reality.

In this report, we analyze the important features of managed Kubernetes systems to see how well they respond to enterprise needs and to enable organizations to evaluate specific solutions based on their own requirements.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

2. Managed Kubernetes Primer

Applications have changed significantly in recent years, and the advent of containers and, more broadly, microservices has shifted how we both develop and deploy. Containers allow for a decoupling of the application from dependencies such as system libraries or underlying configurations. Container images can be deployed easily and updated frequently, and because of their small size, they are highly portable across multiple locations.

Developers and businesses have embraced this new model of application development because of its increased efficiency and the reduced time to market that it offers. Projects have been launched to refactor legacy applications, and new requirements are evaluated in the container landscape as a primary deployment option. These trends have contributed to the acceleration of container adoption in production across enterprise environments, creating increased demand for enterprise-grade solutions and services that can work well across existing infrastructure and cloud deployments.

Kubernetes is fundamentally an orchestration platform. Applications are defined in code by describing the resources required and the desired scale and state of the application. Deployment of applications is organized into sets of containers (called pods), run either as individual deployments or as scaled replica sets that allow the application to grow and shrink based on demand. The orchestration layer continuously evaluates the deployment specification as defined and ensures that enough resources are allocated to provide the level of service required by the application and its users. These resources can include the application containers, sidecar services, load balancers, security posture proxies, and much more. Containers are frequently spun up and down or moved to different nodes within the cluster. The number of operations in a large cluster can be huge, and the orchestration infrastructure needs to meet these demands.
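
To make this concrete, the following is a minimal sketch of such a declarative specification: a Kubernetes Deployment that asks the orchestrator to maintain three replicas of a web container. The names and image are illustrative only.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend                        # illustrative name
    spec:
      replicas: 3                               # desired state: three identical pods
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: registry.example.com/web:1.0 # hypothetical container image
            resources:
              requests:
                cpu: "250m"                     # figures the scheduler uses for placement
                memory: "128Mi"

The orchestration layer continuously reconciles the cluster against this specification, rescheduling pods onto healthy nodes whenever the observed state drifts from the desired state.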

Designing and deploying an infrastructure platform capable of meeting these demands requires specialist knowledge and skills. Maintaining the platform to keep up with the fast-paced development of Kubernetes and the surrounding ecosystem is challenging for even the most dynamic of organizations. Managed Kubernetes solutions alleviate this burden on resources and allow the deployment of modern applications within existing teams. Providers of the available solutions ensure compatibility, take care of scaling the resources required to run the Kubernetes infrastructure, and provide the interfaces and metrics in a manner that’s simple to consume and makes operational success easier. Using a managed Kubernetes solution allows teams to focus on what matters: deploying business critical applications.

3. Report Methodology

A GigaOm Key Criteria report analyzes the most important features of a technology category to help IT professionals understand how solutions may impact an enterprise and its IT organization. These features are grouped into three categories:

  • Table Stakes: Assumed value
  • Key Criteria: Differentiating value
  • Emerging Technologies: Future value

Table stakes represent features and capabilities that are widely adopted and well implemented in a technology sector. As these implementations are mature, they are not expected to significantly impact the value of solutions relative to each other, and will generally have minimal impact on total cost of ownership (TCO) and return on investment (ROI).

Key criteria are the core differentiating features in a technology sector and play an important role in determining potential value to the organization. Implementation details of key criteria are essential to understanding the impact that a product or service may have on an organization’s infrastructure, processes, and business. Over time, the differentiation provided by a feature becomes less relevant and it falls into the table stakes group.

Emerging technologies describe the most compelling and potentially impactful technologies emerging in a product or service sector over the next 12 to 18 months. These emergent features may already be present in niche products or designed to address very specific use cases; however, at the time of the report they are not mature enough to be regarded as key criteria. Emerging technologies should be considered mostly for their potential downfield impact.

Over time, advances in technology and tooling enable emerging technologies to evolve into key criteria, and key criteria to become table stakes, as shown in Figure 1. This Key Criteria report reflects the dynamic embedded in this evolution, helping IT decision makers track and assess emerging technologies that may impact the organization significantly.

Figure 1. Evolution of Features

Understanding Evaluation Metrics

Table stakes, key criteria, and emerging technologies represent specific features and capabilities of solutions in a sector. Evaluation metrics, by contrast, describe broad, top-line characteristics—things like scalability, interoperability, or cost effectiveness. They are, in essence, strategic considerations, whereas key criteria are tactical ones.

By evaluating how key criteria and other features impact these strategic metrics, we gain insight into the value a solution can have to an organization. For example, a robust API and extensibility features can directly impact technical parameters like flexibility and scalability, while also improving a business parameter like total cost of ownership.

The goal of the GigaOm Key Criteria report is to structure and simplify the decision-making process around key criteria and evaluation metrics, allowing the first to inform the second, and enabling IT professionals to make better decisions.

4. Decision Criteria Analysis

In this section, we describe the specific table stakes, key criteria, and emerging technologies that organizations should evaluate when considering solutions in this market sector.

Table Stakes

Some important characteristics of managed Kubernetes are now common to most solutions available in the market, and users therefore take them for granted. Table stakes in this report are:

  • Upstream Kubernetes
  • Role-based access control
  • Control plane and APIs
  • Container Storage Interface (CSI)
  • User interface and basic automation

Upstream Kubernetes
Kubernetes is still a young platform, but the proliferation of managed services has cemented a set of features that are now standard across all offerings.

Services should use mainline upstream versions of Kubernetes rather than any heavily customized distribution. The platform should keep in sync with major updates and offer deployment options for multiple Kubernetes versions. This updating ensures that changes to the API are managed for users and breaking changes are avoided.

Role-Based Access Control
User access controls across the infrastructure, governed by policy and the assignment of roles to the various departments that require access to the clusters, are foundational to ensuring that enterprises can meet compliance and security demands.
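
As a minimal sketch of how this typically looks in Kubernetes RBAC (the namespace and group names are illustrative), a namespaced Role granting read-only access to pods can be bound to a department's group from the identity provider:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader                    # illustrative role name
      namespace: team-a                   # hypothetical team namespace
    rules:
    - apiGroups: [""]                     # "" denotes the core API group
      resources: ["pods"]
      verbs: ["get", "list", "watch"]     # read-only access
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: team-a-pod-readers
      namespace: team-a
    subjects:
    - kind: Group
      name: team-a-developers             # hypothetical group from the identity provider
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io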

Control Plane and APIs
Standard Kubernetes control plane features should be available, including scalable management infrastructure, Kubernetes dashboard access, API access, and Custom Resource Definitions, to name a few. Adhering to a clean standard is important for maintaining the flexibility of deployment of applications. Decoupling the infrastructure from the application dependencies is key to success.
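
Custom Resource Definitions are what allow platform teams to extend the API cleanly without forking it. A minimal sketch, with a hypothetical group and kind:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: databases.example.com         # must be <plural>.<group>
    spec:
      group: example.com                  # hypothetical API group
      scope: Namespaced
      names:
        plural: databases
        singular: database
        kind: Database
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  engine:
                    type: string          # e.g., "postgres"
                  storageGB:
                    type: integer

Once registered, Database objects can be created, listed, and watched with the same tooling as built-in resources.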

Container Storage Interface
As applications are deployed into production and we move from the initial wave of cloud native ephemeral applications to more traditional enterprise applications, the ability to provide persistent storage will be essential. The container storage interface (CSI) enables a common specification and architecture that can be extended with plug-ins for specific storage technologies.
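
In practice, a managed service exposes its storage back ends as CSI-backed storage classes, and applications claim capacity declaratively. A sketch follows; the class name is illustrative and the provisioner is a hypothetical CSI driver rather than any specific vendor's:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: fast-ssd                          # illustrative class name
    provisioner: disk.csi.example.com         # hypothetical CSI driver
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer   # bind once a consuming pod is scheduled
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 20Gi                       # capacity requested by the application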

User Interface and Basic Automation
Finally, mainstream adoption requires simple user interfaces and common automation options. Moving Kubernetes systems to production status requires visibility via dashboards and interfaces that can be monitored in operations centers. Maintenance and management are moving more and more toward Infrastructure-as-Code, and as such, automation is at the forefront of modern infrastructure deployment.

Key Criteria

As we move from basic system features (table stakes) to the features that tend to differentiate solutions (key criteria), it is important to consider how these vital features are designed and implemented. Moreover, the compelling value these features present in a solution today might, in a year or even less, show up in competing solutions. In short, they become table stakes. Likewise, new capabilities can introduce benefits or address new needs to positively impact manageability, scale, flexibility, and so on.

This section provides a brief description of the specific functionality or technology we are defining as a key criterion, its benefits in general terms, and what to expect from a strong implementation.

  • Hybrid cloud
  • Pricing model
  • Multi-zone deployments
  • Application lifecycle solutions
  • Security
  • Interoperability

Hybrid Cloud
One of the key tenets of Kubernetes and containers is enabling the reality of hybrid and multi-cloud solutions. Containers reduce the dependencies on infrastructure and operating systems that we have experienced with existing virtual machine solutions. Containers are lightweight (can be megabytes in size) and bring a new era of portability across multiple environments, because they can be hosted in the cloud, in on-premises data centers, or in a combined mixture of both.

Managed Kubernetes solutions in the marketplace today need to meet these needs for independence, and many factors may make an application unsuitable for placement within a public cloud infrastructure. These limitations could be regulatory or compliance-based, involve the scale of the deployment required, or relate to limited connectivity back to the cloud (as with many IoT deployments). A multitude of deployment options is therefore required.

The most common deployment topologies offered within the market today are:

  • Appliance/hardware-based: The coupling of hardware and software to provide a single managed solution is nothing new; we have seen it in existing virtualization environments with technologies such as hyperconverged infrastructure. Managed Kubernetes solutions are no different, and the offerings available, like AWS Outposts and Azure Stack, provide a logical extension of the public cloud into your existing on-premises data centers. Integrating into existing tooling, interfaces, and billing methods can provide a seamless experience across multiple locations.
  • Software-based: Software-only deployment brings a number of benefits, is the simplest method of distribution, and allows use of existing hardware already deployed in data centers. Support for a wide range of deployment options is available, using existing virtualization platforms or bare-metal hosts. These options remove the initial pain points associated with deploying Kubernetes infrastructure on your own. Many of the choices are already made, such as which container networking interfaces to use, which operating system to build upon, the management tools for the deployment, and its overall architecture.
  • Multi-platform management/federation: As Kubernetes deployments grow and spread out across multiple locations, the ability to provide a single point of management and common user interface across multiple clusters is becoming a necessity. Native Kubernetes federation is immature and still being developed, so this is not an option for many deployments. Managed Kubernetes providers have seen the need for solutions in this space and continue to deliver improvements at a regular pace.

Pricing Model
Selection of a managed Kubernetes service can involve many factors, and with IT budgets being squeezed, the consumption models of the service offerings can be very appealing. There are a number of areas to consider when it comes to evaluating a pricing model.

Managed Kubernetes services operate in multiple models. Some services will not charge for the underlying infrastructure that hosts the control plane and management components, instead charging only for the consumed resources of worker nodes and other associated resources within the cloud. Other popular options are charges based on the number of nodes managed within the service and subscriptions based on the number of clusters deployed and managed.
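
As an illustration only (actual list prices vary by provider and region), a service charging a flat management fee of $0.10 per cluster per hour adds roughly $73 per month per cluster (0.10 × 730 hours) before any worker node costs, whereas a per-node subscription scales linearly with the size of the node estate.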

As with many cloud services, there are differences in price based on the deployment region and the features available within those facilities. It is important to consider the requirements of the applications being deployed; if using geo-dispersed clusters, there may be a higher cost of running in the region closest to the end consumer. Cloud providers are very transparent, and online cost calculators and other tools are available to help model the overall costs of a deployment.

A recent trend in cloud computing has emerged in the form of “spot instances.” Spot instances are effectively a brokering of under-utilized, pre-reserved resources within the cloud. Steep discounts can be achieved using these types of instances; however, they are volatile and can be reclaimed with only a few minutes' notice. Managed services are available that make effective use of these instances, moving workloads between instances and pricing models to achieve the most efficient cost model.
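
Scheduling tolerant workloads onto spot capacity is typically just a matter of the labels the provider applies to spot node pools. A hedged sketch: the label key shown here is the one EKS applies to managed node groups; other providers use their own keys (GKE, for example, uses cloud.google.com/gke-spot).

    apiVersion: v1
    kind: Pod
    metadata:
      name: batch-worker                        # illustrative name
    spec:
      nodeSelector:
        eks.amazonaws.com/capacityType: SPOT    # provider-specific spot label
      containers:
      - name: worker
        image: registry.example.com/batch:1.0   # hypothetical image of an interruption-tolerant job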

Multi-Zone Deployments
Kubernetes is designed for modern architecture patterns, providing high availability and scale from the outset. The declarative specification allows for multiple pods (replicas) to be deployed and utilized across multiple worker nodes within the cluster. Scaling can be performed automatically based on metrics, manually as demand increases, or as part of upgrades and recovery operations. Should a node fail within a cluster, the applications are restarted on available nodes based on the specification deployed.

The cloud offers a number of new operating options when it comes to high availability. Due to the greater deployment scale compared to traditional on-premises data centers, cloud facilities can be split up across multiple availability zones. Each zone offers independent power, cooling, and connectivity. When assessing managed Kubernetes services, it is crucial that high-availability patterns are taken into consideration and fit the uptime requirements of the business and its applications.
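
Kubernetes exposes this zone awareness directly in the pod specification through the standard topology.kubernetes.io/zone node label. A minimal sketch that spreads replicas evenly across zones (the application name and image are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: api-server                               # illustrative name
    spec:
      replicas: 6
      selector:
        matchLabels:
          app: api-server
      template:
        metadata:
          labels:
            app: api-server
        spec:
          topologySpreadConstraints:
          - maxSkew: 1                               # zones may differ by at most one pod
            topologyKey: topology.kubernetes.io/zone # standard well-known zone label
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: api-server
          containers:
          - name: api
            image: registry.example.com/api:1.0      # hypothetical image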

Many Kubernetes applications are considered to be stateless and ephemeral in nature. These applications can simply start, run, and die without any associated storage of data. However, as we move more enterprise applications into containers and Kubernetes, the data that these applications process needs to come with them. Storage services offered within the cloud and Kubernetes services need to provide replication among zones and/or regions, or the ability to access the same data in storage from multiple locations.

Application Lifecycle Solutions
Kubernetes is an infrastructure platform, providing management and orchestration of applications deployed within containers. This management can be composed of networking, load balancing, security, monitoring, logging, and many other concepts. The learning curve is steep and there are many considerations to take into account when architecting production-ready deployments.

Traditional infrastructure teams are reacting to these requirements and learning these new patterns. Likewise, application developers are finding that they need to learn more about the underlying infrastructure and design applications with these new patterns in mind. This learning curve can slow adoption of Kubernetes in many cases.

The cloud providers have recognized this learning-curve barrier and are developing solutions that sit on top of existing Kubernetes services to abstract away many of the infrastructure requirements. Services of this kind bring a more serverless-style deployment option to traditional Kubernetes offerings.

With solutions aimed more at application developers and DevOps teams, the requirement to understand the underlying platform is removed. The deployment of an application is reduced from complicated definitions and declarative specifications to publishing a container and stating the required memory and compute resources. The service handles deployment, scaling, security, and networking.
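
Knative Serving, one open source project in this space, illustrates the shape of these offerings: the entire deployment reduces to a container image plus resource requests, with routing and scaling (including scale-to-zero) handled by the platform. A sketch, with an illustrative name and image:

    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: hello                                   # illustrative service name
    spec:
      template:
        spec:
          containers:
          - image: registry.example.com/hello:1.0   # hypothetical image
            resources:
              requests:
                cpu: "250m"
                memory: "128Mi"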

Security
When deploying new solutions, security should be considered in every aspect of the infrastructure. Managed Kubernetes services allow for data and applications to be deployed across multiple locations, both within the cloud and on-premises. So a solid foundation of security features should be available to ensure both that access is controlled and that any attack surface is minimized.

Currently, the essential security features of a managed solution for Kubernetes include:

  • Role-based access control (RBAC): Controlling both who and what can be accessed within the managed infrastructure using fine-grained, policy-driven methods is an important aspect of security management.
  • Federated authentication: Federated authentication builds on the standard RBAC offerings and allows for integration into existing authentication services using open standards like OAuth or SAML. Many organizations are already familiar with centralized management of accounts and permissions from experience within virtualized infrastructure platforms. Providing a consistent and familiar approach to security management improves adoption and ease of use.
  • API/control plane security: Due to the “as a service” nature of managed Kubernetes solutions, many times the end consumer has little or no control over the management infrastructure because it is deployed as a part of the service itself. The ability to restrict access to the control plane and overall Kubernetes API is a key part of good security practices. Options to restrict source IP access to the APIs, deploy behind a managed firewall, or deploy the control plane services within a controlled boundary (such as within the VPC) enable a “security first” approach to deployments.
  • Patching: A defining factor to consider when choosing to deploy managed Kubernetes is the removal of the management overhead associated with running a home-grown system. Patching of both Kubernetes application components and control plane infrastructure, as well as the underlying OS, should be handled discreetly by the service. The scalable and often stateless nature of Kubernetes applications allows for seamless addition of new resources to the clusters. Security updates should be handled by rolling new nodes into the cluster and discarding the old, with as little user intervention as possible.

Interoperability
Kubernetes facilitates deployment across the entire spectrum that containerized applications enable. The Kubernetes project is fast-paced, and updates are rolled out frequently. With this in mind, it is imperative that managed solutions offer a wide range of interoperability features.

Supported versions of the Kubernetes control plane within the managed service must be considered to ensure that adequate time is given to adjust to changes within the control plane and API structure. A typical approach is an N-2 support policy, in which upgrades are not forced until multiple newer versions have made it into the production system.

Deployment of containers has been focused predominantly within the Linux OS ecosystem. Many flavors of Linux exist currently, and a number of lightweight specialized container operating systems have emerged. However, as adoption increases and more enterprise applications make their way to containers, support for Windows operating systems has increased. Choosing a managed Kubernetes service that provides flexibility in the pools of worker nodes that can be deployed, offering the ability to deploy both Linux and Windows applications, is advantageous. Pools should be independently scalable, and the Kubernetes declarative specification provides for scheduling of resources based on the operating system in use.

Another consideration with interoperability is the CPU architecture support available within the services. Data centers traditionally have been the home of x86 processors, and this is the most common deployment in the wild. However, as demands on infrastructure increase globally, solutions are being sought that will provide similar or increased performance with a lower power footprint. A prime example of this demand is the growing number of deployed processors based on the ARM architecture that deliver these efficiencies without sacrificing performance. Kubernetes and the surrounding ecosystem are developing solutions quickly to take advantage of these architectures, providing a single solution for orchestration across multiple architectures and operating systems within the same control plane and management frameworks.
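
Kubernetes handles this heterogeneity with the standard kubernetes.io/os and kubernetes.io/arch node labels, so mixed pools can coexist under a single control plane. A sketch pinning a pod to a Windows pool; an ARM pool would be targeted the same way with kubernetes.io/arch: arm64:

    apiVersion: v1
    kind: Pod
    metadata:
      name: legacy-service                       # illustrative name
    spec:
      nodeSelector:
        kubernetes.io/os: windows                # standard label; Linux pools use "linux"
      containers:
      - name: app
        image: registry.example.com/legacy:1.0   # hypothetical Windows container image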

Emerging Technologies

The managed Kubernetes systems we are evaluating today will last for years to come. With that in mind, it is important to plan in advance for future changes and expansions in the market.

In this section of the report, we analyze some of the most interesting technologies that are going to be implemented in the managed Kubernetes systems in the next 12 to 18 months. At this stage, implementations that exist are not mature enough to be included in the key criteria. However, when implemented correctly and efficiently, these emerging features will materially impact the same evaluation metrics we identified earlier in this report.

  • Data protection and disaster recovery
  • Service brokers

Data Protection and Disaster Recovery
As adoption of Kubernetes and containers increases in the enterprise landscape, the applications being deployed are changing. The early adopters of both cloud and Kubernetes relied heavily on stateless, ephemeral applications that were detached from any data, which kept data protection and disaster recovery simple.

Enterprise applications generally require a quantity of data close to the application, and the broad expansion of container storage offerings supports this trend. However, when the data associated with a container is no longer disposable, new technologies are required within the ecosystem to provide protection and recovery services.

Traditional data protection vendors and new players alike have seen this demand and are bringing enterprise grade solutions and integrations to the Kubernetes landscape. As dominant solutions emerge, integration into the managed Kubernetes services is expected to follow as partnerships form or acquisitions are made.
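
Velero is one prominent open source example of this pattern: backup becomes another declarative resource submitted to the cluster. A hedged sketch based on Velero's Backup resource (namespace names are illustrative):

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: nightly-production      # illustrative backup name
      namespace: velero             # Velero's default installation namespace
    spec:
      includedNamespaces:
      - production                  # hypothetical application namespace to protect
      ttl: 720h0m0s                 # retain the backup for 30 days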

Service Brokers
Container-based applications generally are not deployed in isolation. Whether within on-premises data centers or in the cloud, external resources are generally required as a part of the overall deployment. Using multiple deployment methods, especially when moving toward environments predominantly managed by code, can be difficult.

An emerging trend is to create common APIs and interfaces that can integrate with the existing deployment specifications used by Kubernetes to create external resources. This approach could be used to provision a database-as-a-service offering from a cloud provider, or to make more traditional requests, such as spinning up a virtual machine deployment to support a container application.

As deployments within Kubernetes increase and enterprises start to migrate existing applications to modern infrastructure platforms, the connectivity between these different platforms becomes more and more important. There are a number of service broker projects in the works and an open standard has been proposed via the Open Service Broker API project. Current contributions are being developed by the likes of Microsoft Azure, Google Cloud, SAP, and Pivotal, to name a few.
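
The Kubernetes Service Catalog project implements the Open Service Broker API in exactly this declarative style: requesting an external service becomes just another resource. A sketch, in which the class and plan names are hypothetical broker offerings:

    apiVersion: servicecatalog.k8s.io/v1beta1
    kind: ServiceInstance
    metadata:
      name: orders-db                                     # illustrative instance name
      namespace: orders                                   # hypothetical application namespace
    spec:
      clusterServiceClassExternalName: example-database   # hypothetical broker service class
      clusterServicePlanExternalName: standard            # hypothetical plan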

Figure 2 offers insight into the importance of key criteria and emerging technologies, and the timing at which they are poised to have the greatest impact in terms of TCO and ROI over the next 36 months.

Figure 2. Timing of Key Criteria and Emerging Technologies Impacts on TCO and ROI

5. Evaluation Metrics

The most important metrics for the evaluation of a managed Kubernetes system that is aimed at running production application workloads should include:

  • Architecture
  • Flexibility
  • Scalability
  • Manageability and ease of use
  • Ecosystem

Architecture
Kubernetes can be very demanding on solution architects, and implementation requires a number of decisions to be made early in the process. Many components of Kubernetes, such as networking and storage, have multiple offerings, and the right choice may not be evident immediately. With more enterprises moving applications and workloads to Kubernetes, it is becoming clear that the supply of the necessary skills across the industry is a pinch point. Solutions should deploy to a standard architecture with as many of these decision points as possible removed from the end-user customer.

Flexibility
Hybrid cloud is quickly becoming the default position of every IT strategy. Applications should be able to be deployed where they make the most sense, whether that is for cost reasons, burst capacity, regulatory compliance, or just to ensure the best user experience (bringing applications closer to end users). Kubernetes solutions enable this strategy and support multiple locations as well as multiple operating systems and compute architectures.

Scalability
The adoption of Kubernetes is still in the early stages for most enterprises. Many organizations have proof-of-concept projects or are working with test and development systems. Production systems in the pilot phase are also becoming commonplace. Infrastructure requirements in these early stages are low, and ensuring that a service allows you to start small and grow based on demands and business adoption, without having to redeploy or redesign, is essential.

Manageability and Ease of Use
Enterprises that are accustomed to a high standard of tools when it comes to managing infrastructure, virtualization, and the surrounding ecosystem set a high bar. Adoption of new services needs to be easy in order for it to be successful. Once systems are in production, it is imperative that they can be operationalized. That means providing logging, monitoring, and visibility. Common interfaces and APIs are needed to provide a consistent user experience across the entire stack.

Ecosystem
Kubernetes is only one part of an application deployment. Often, external services such as storage, databases, firewalls, load balancers, and logging are required. Integration into existing services within the cloud and on-premises data centers is fundamental to a successful deployment. Joining together native cloud services, third-party solutions, and existing infrastructure products to build an entire ecosystem aimed at simplifying adoption and improving reliability improves the overall infrastructure TCO and shortens time to market for applications.

6. Key Criteria: Impact Analysis

As described earlier, this report analyzes the impact of critical features of managed Kubernetes solutions available in the market. It puts them in context with the evaluation metrics that are usually at the core of strategic decisions. Table 1 shows the impact of critical features on evaluation metrics.

Table 1. Impact of Features on Metrics

                                   Architecture   Flexibility   Scalability   Manageability &   Ecosystem
                                                                              Ease of Use
  Hybrid Cloud                          4              5             5               4               5
  Pricing Model                         2              3             4               2               3
  Multi-Zone Deployments                5              4             3               3               4
  Application Lifecycle Solutions       3              3             2               4               4
  Security                              4              2             3               5               4
  Interoperability                      5              4             4               3               5

Impact on Architecture
Depending on the level of maturity and requirements of the organization, choosing the right architecture is important in delivering a successful Kubernetes solution. Traditional enterprise features such as high availability and disaster recovery are expected; similarly, interoperability with existing systems and deployments reduces the steep learning curve of adopting a new solution. These are two of the most important factors to consider.

Impact on Flexibility
Many organizations now consider hybrid deployment to be the default stance when evaluating new solutions. Making the most use of existing investments but having the ability to scale outside of the current infrastructure ensures that time to market is not impacted and allows for more iterative deployment options. Interoperability across a wide range of existing infrastructure and operating systems is also a large consideration for enterprises looking to adopt Kubernetes solutions.

Impact on Scalability
The ability for organizations to start with just the infrastructure required for their initial projects, yet still scale with demand and adoption to meet enterprise expectations, is a key factor in choosing the right Kubernetes solution. Multiple locations and operating models will be required to scale to meet the demands of enterprise applications, and being able to deploy to the right location at the right price is essential.

Impact on Manageability and Ease of Use
Managed Kubernetes services will be adopted by a variety of departments and job roles within an enterprise organization. The ability to provide the right information and access to the right people is fundamental, so a well-organized user interface with granular, role-based access controls and well-documented APIs is critical to meeting the needs of the enterprise. Security features, such as patching and monitoring, provide reassurance to operations teams who are required to onboard new solutions within an enterprise organization, and these features should be evaluated carefully.

Impact on Ecosystem
Solutions with broad ecosystem support allow organizations to deploy and migrate a wider range of existing and new applications into a Kubernetes/containerized environment. The integration points with existing systems, such as storage and networking, either within the data center or as part of native cloud offerings, are essential to success, allowing for flexibility of approach to deployment. For example, the ability to integrate into existing storage systems and have both the traditional application and Kubernetes containers accessing the same data reduces the amount of change required from the outset and shortens the time to value.

7. Analyst’s Take

Kubernetes is becoming a mainstream topic of conversation for organizations of all sizes. The interest is continuing to grow, and the requirement to deploy containerized applications is increasing. Whether you are refactoring existing applications or deploying off-the-shelf offerings, containers are becoming the preferred packaging and delivery method for modern software.

Kubernetes provides a tried and tested method of operationalizing containerized software in a way that offers many of the benefits and extensibility that organizations have come to rely on from virtualization solutions. Skills in these areas are in great demand and not all businesses will be able to create teams capable of building and managing complex Kubernetes deployments.

Choosing a managed Kubernetes solution helps to speed its adoption and lessen the burden on already stretched infrastructure teams, allowing the business to focus on the deployment of applications and use of new development and operations methodologies to speed time to market. As systems move from proof of concept and test/development environments into business-critical production infrastructure, demands for enterprise grade features increase.

The key criteria analyzed in this report are crucial for identifying a managed Kubernetes solution that provides the features, scalability, and extensibility that modern application deployment requires. Still, readers should consider the fact that the impact of features on the selected metrics is not presented in absolute terms and always should be verified relative to specific use cases.

8. About Enrico Signoretti

Enrico Signoretti

Enrico Signoretti has more than 25 years of experience in technical product strategy and management roles. He has advised mid-market and large enterprises across numerous industries and worked with a range of software companies, from small ISVs to global providers.

Enrico is an internationally renowned expert on data storage, as well as a visionary, author, blogger, and speaker on the topic. He has tracked the evolution of the storage industry for years as a GigaOm research analyst, an independent analyst, and a contributor to The Register.

9. About Jason Benedicic

Jason Benedicic

Jason is an independent consultant, based in Cambridge, UK. Jason works with customers to design and implement IT solutions that meet a variety of needs, including backup, virtualization, cloud adoption, and application modernisation. He is an expert in building and managing public cloud services and private/hybrid cloud infrastructure.

He has additional experience in Agile processes, the software development lifecycle, and CI/CD pipelines. Jason is comfortable working in all areas of the business, from the sales cycle through to support, and can communicate at all levels, tailoring messaging accordingly. He has additional interests in Digital Ethics, Influencing, Marketing, Strategy, and Business Processes.

Outside of the technology industry, Jason enjoys all forms of gaming, ranging from classic tabletop to online RPGs. He has been a raider in World of Warcraft for the last 16 years and continues to push current end-game content with his guild. Alongside that, he enjoys the varied activities in Destiny 2, where you can find him in the latest raid or Crucible matches with his clan. He is also a keen cyclist.

10. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

11. Copyright

© Knowingly, Inc. 2021 "Key Criteria for Evaluating Managed Kubernetes Solutions" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.