Development in the enterprise has been shifting toward microservices-based applications for a while now. Time has been spent developing and testing these applications at smaller scales, first for proof-of-concept deployments and, for business-critical applications, for production deployments. From an infrastructure perspective, we have graduated from the initial curiosity and learning phases into small-scale training laboratories or non-critical production deployments. Interest is increasing in solutions that can bridge the gap between user expectations and the operational reality of Kubernetes in action.
Kubernetes remains a complex platform that receives frequent updates and new features. For IT organizations accustomed to the ease of use and stability of technologies such as virtualization, this level of flux is a concern. To operate within the existing high standards of availability and security, organizations must keep up to date with the latest Kubernetes version and surrounding ecosystem projects, ensuring that security patches, API specifications, and performance improvements are realized as soon as possible. The reality is that most organizations are not geared up to work at this pace; managing existing workload demands and learning new skills simultaneously is a challenge. Managing an ever-evolving platform like Kubernetes is a demanding task, and on top of that, operational complexities in this type of platform can create reliability issues if not addressed correctly.
IT organizations favor containers because they enable true application portability, and Kubernetes is the right platform to manage container-based applications correctly, at scale. Kubernetes realizes the possibility of true hybrid-cloud deployments, enabling organizations to build on existing data center solutions and expand into cloud environments seamlessly.
The easiest way to capture all the advantages of Kubernetes, and none of the complexity that comes with it, is to choose the right managed Kubernetes service. There are plenty of options on the market at the moment, and although rooted in the same code base, they differ in technical features, consumption models, and support.
Fortunately, Kubernetes didn’t suffer the proliferation of different distributions or multiple competing projects that occurred early on with the Linux OS. The core of Kubernetes is the same on all platforms, and they share the same commands, structure, and way of operating, making seamless application portability a reality.
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
2. Market Categories and Deployment Types
For a better understanding of the market and vendor positioning (Table 1), we assess how well managed Kubernetes solutions are positioned to serve specific market segments. We recognize three segments in this report: small-to-medium enterprise, large enterprise, and specialized:
- Small-to-medium enterprise: In this category we find solutions that appeal to customers that value ease of use and deployment. These organizations can range from very small startups to businesses with medium-sized infrastructures. The solutions can also be adopted by large enterprises for departmental use cases, though they often lack a rich feature set and offer limited data mobility and management capabilities.
- Large enterprise: Usually adopted for larger and business critical projects. Solutions in this category have a strong focus on flexibility, performance, data services, and features to improve security and data protection. Scalability is another big differentiator, as is the ability of hybrid solutions to host the same services both on-premises and in the public cloud.
- Specialized: Designed for specific workloads and use cases, such as big data analytics, edge, and high-performance computing (HPC), for example.
In addition, we recognize two deployment models for solutions in this report: cloud only, and hybrid and multicloud.
- Cloud-only solutions: Available only in the cloud. Often designed, deployed, and managed by the service provider, they are available only from that specific provider. The big advantage of this type of solution is the integration with other services offered by the cloud service provider (functions, for example) and its simplicity.
- Hybrid and multicloud solutions: These solutions are meant to be used both on-premises and in the cloud, allowing customers to build hybrid or multicloud Kubernetes infrastructures. The integration with the single cloud provider could be limited compared to the cloud-only option, while being more complex to deploy and manage. On the other hand, this approach is more flexible, and the user typically has greater control over the entire infrastructure and services.
Table 1. Vendor Positioning
| | Small-to-Medium Enterprise | Large Enterprise | Specialized | Cloud Only | Hybrid & Multicloud |
|---|---|---|---|---|---|
| IBM Cloud Kubernetes | | | | | |
| Oracle Cloud Infrastructure | | | | | |

Ratings: Exceptional (outstanding focus and execution); Capable (good but with room for improvement); Limited (lacking in execution and use cases); blank (not applicable or absent).
3. Key Criteria Comparison
Building on the findings from the GigaOm report, “Key Criteria for Evaluating Managed Kubernetes Solutions,” Table 2 summarizes how each vendor included in this research performs in the areas that we consider differentiating and critical for managed Kubernetes solutions. Table 3 follows this summary with insight into each product’s evaluation metrics—the top-line characteristics that define the impact each will have on the organization. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the market landscape, and gauge the potential impact on the business.
Table 2. Key Criteria Comparison
| | Hybrid Cloud | Pricing Model | Multizone Deployments | Application Lifecycle Solutions | Security | Interoperability |
|---|---|---|---|---|---|---|
| IBM Cloud Kubernetes | | | | | | |
| Oracle Cloud Infrastructure | | | | | | |

Ratings: Exceptional (outstanding focus and execution); Capable (good but with room for improvement); Limited (lacking in execution and use cases); blank (not applicable or absent).
Table 3. Evaluation Metrics Comparison
| | Architecture | Flexibility | Scalability | Manageability & Ease of Use | Ecosystem |
|---|---|---|---|---|---|
| IBM Cloud Kubernetes | | | | | |
| Oracle Cloud Infrastructure | | | | | |

Ratings: Exceptional (outstanding focus and execution); Capable (good but with room for improvement); Limited (lacking in execution and use cases); blank (not applicable or absent).
By combining the information provided in the tables above, the reader can develop a clear understanding of the technical solutions available in the market.
4. GigaOm Radar
This report synthesizes the analysis of key criteria and their impact on evaluation metrics to inform the GigaOm Radar graphic in Figure 1. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and feature sets.
The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation, and Feature Play versus Platform Play—while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.
Figure 1. GigaOm Radar for Managed Kubernetes Solutions
As the Radar chart in Figure 1 shows, the market for managed Kubernetes services continues to grow quickly in terms of deployments, but the number of providers is still relatively small. The need for significant surrounding infrastructure and the difficulty of differentiating services mean that most vendors in this space are large service providers that can bolster Kubernetes services with an extensive ecosystem of complementary services to support the complex needs of modern applications. In addition to this complexity, providers must keep up with the significant speed of development in the Kubernetes project and associated community projects.
The overall landscape for managed Kubernetes services is split into two different types of offering: those that cater to a specific market and are laser focused on providing a service for that audience, and those that cater to the broader market with a wider set of features that are more appropriate for larger enterprises.
The services of the major cloud providers are somewhat comparable, with Microsoft in the lead and AWS close behind, although the gap is closing quickly. Google Cloud is still strong in this space and, with the right execution of its hybrid-cloud strategy, will catch up quickly. All three of these players have a very strong cloud offering, and the ongoing battle over the mid- to long term will determine who executes most successfully in enabling a true hybrid-cloud solution. Alibaba follows the example set by the other cloud providers; however, it is lagging in execution and global coverage.
IBM Cloud, Red Hat, Mirantis, and Oracle Cloud Infrastructure have different strategies for cloud and Kubernetes solutions. IBM recently acquired Red Hat, which allows a broader range of offerings across the combined portfolio to help customers modernize their application stacks, while maintaining a consistent experience both on-premises and in the cloud. Mirantis is continuing to build on a solid foundation acquired from Docker Enterprise and expanding the portfolio to build a solid platform offering both Kubernetes and virtualization solutions to customers who benefit from the security and operational expertise provided. Oracle is building on its existing software customer base and giving them the tools to successfully migrate existing applications to the cloud.
Inside the GigaOm Radar
The GigaOm Radar weighs each vendor’s execution, roadmap, and ability to innovate to plot solutions along two axes, each set as opposing pairs. On the Y axis, Maturity recognizes solution stability, strength of ecosystem, and a conservative stance, while Innovation highlights technical innovation and a more aggressive approach. On the X axis, Feature Play connotes a narrow focus on niche or cutting-edge functionality, while Platform Play displays a broader platform focus and commitment to a comprehensive feature set.
The closer to center a solution sits, the better its execution and value, with top performers occupying the inner Leaders circle. The centermost circle is almost always empty, reserved for highly mature and consolidated markets that lack space for further innovation.
The GigaOm Radar offers a forward-looking assessment, plotting the current and projected position of each solution over a 12- to 18-month window. Arrows indicate travel based on strategy and pace of innovation, with vendors designated as Forward Movers, Fast Movers, or Outperformers based on their rate of progression.
Note that the Radar excludes vendor market share as a metric. The focus is on forward-looking analysis that emphasizes the value of innovation and differentiation over incumbent market position.
5. Vendor Insights
Alibaba Cloud Container Service for Kubernetes (ACK)
Alibaba Cloud Container Service for Kubernetes (ACK) is similar to offerings from the other major cloud providers. Alibaba has a strong focus on the Asia Pacific region, with the largest number of data centers available there. However, they are expanding into other regions globally at a significant pace.
ACK offers both managed Kubernetes clusters and serverless clusters. The latter provides the ability to launch applications without creating or managing any nodes at all. Managed Kubernetes clusters come in two varieties: Standard, for which you are charged by the number of worker nodes and other infrastructure resources, and Professional, for which you can be charged either by subscription or by number of clusters. Serverless Kubernetes clusters are billed based on resource usage and duration of execution. Alibaba offers a wide range of node types and supports GPU instances as well as both Windows and Linux node pools.
You can centrally manage cloud and on-premises resources in the Container Service console. Other deployment models are available, such as edge cluster management, bringing together a single solution across the cloud, on-premises data centers, and remote office deployments. Alibaba also provides ACK One, a distributed cloud container platform that allows users to manage cloud-native applications in hybrid, multi-cluster, distributed, or disaster recovery scenarios.
Kubernetes clusters on Alibaba Cloud offer comprehensive role-based access control (RBAC), integrate with Alibaba's own Resource Access Management (RAM) system, and can be extended with OpenLDAP for SSO. Protection for the internal endpoint of the Kubernetes API can be achieved with network access control lists, and external access can be provided by assigning an Elastic IP address to the cluster. However, restrictions on the locations from which the cluster can be accessed are not available.
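Because ACK exposes standard Kubernetes RBAC, permissions map to ordinary Role and RoleBinding objects regardless of which identity system sits in front. A minimal, provider-agnostic sketch (the role name, namespace, and user are hypothetical; on ACK the subject would typically correspond to a RAM user or an OpenLDAP-federated identity):

```yaml
# Illustrative Kubernetes RBAC manifest; all names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-deployer        # hypothetical role name
  namespace: team-a         # hypothetical namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-binding
  namespace: team-a
subjects:
  - kind: User
    name: dev-user@example.com   # would map to a RAM or SSO identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-deployer
  apiGroup: rbac.authorization.k8s.io
```

Because this is upstream Kubernetes RBAC, the same manifest would work unchanged on any of the other services covered in this report.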
Strengths: Alibaba Cloud Container Service for Kubernetes is a strong solution offering both hybrid and multicloud deployments, along with a number of integrations into Alibaba's other services. For businesses operating within the locations Alibaba Cloud covers, this is a competitive solution.
Challenges: Alibaba Cloud is still working to gain presence in the wider global markets and may not offer data centers or services in all the regions or locations that larger global enterprises require.
Amazon Elastic Kubernetes Service (EKS)
Amazon Elastic Kubernetes Service (EKS) provides a wide range of options for deploying Kubernetes within AWS. Along with this deployment inside AWS, EKS now has multiple options for hybrid-cloud and on-premises deployments, thanks to EKS Anywhere and Outposts. Fargate helps users to simplify infrastructure management further, taking advantage of Kubernetes without the complexity.
EKS supports multiple operating systems, allowing users to create worker pools for both Windows and Linux operating systems. AWS also offers support for its own Graviton processors based on Arm CPU architecture. EKS supports GPU instances and provides optimized Deep Learning container environments for AI/ML use cases as well.
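Mixed Windows and Linux worker pools of this kind are targeted with the well-known Kubernetes node labels rather than anything EKS-specific. A brief sketch (the deployment name and image are illustrative; the `mcr.microsoft.com` IIS image is one commonly used Windows container base):

```yaml
# Sketch: pinning a workload to a Windows node pool via the standard
# kubernetes.io/os node label. Names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: win-service
  template:
    metadata:
      labels:
        app: win-service
    spec:
      nodeSelector:
        kubernetes.io/os: windows   # use "linux" for Linux pools
      containers:
        - name: web
          image: mcr.microsoft.com/windows/servercore/iis
```

A similar selector on `kubernetes.io/arch` (e.g., `arm64`) is how workloads would be steered onto Graviton-based node groups.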
Amazon EKS Connector provides visibility of any conformant Kubernetes cluster. Connecting a cluster allows users to see status, configuration, and workloads for that cluster within the Amazon EKS console. However, management of these clusters isn’t included at this time.
With coverage in multiple regions worldwide and support for both Outposts and EKS Anywhere, the overall EKS deployment options are broad and far-reaching, allowing customers to embrace both hybrid and multicloud architectures. Deployment of EKS Anywhere may not provide the same level of integration as the native services within each cloud provider, so the ability to manage centrally from the EKS console will benefit customers with experience and familiarity with AWS as their primary cloud provider.
EKS provides a good level of security across the infrastructure by allowing API server communications to be limited to your VPC and restricted by IP address. Additional network policies can be added with tools such as Project Calico. Patching and updates of the Kubernetes system, as well as the underlying node operating systems, are handled in a rolling manner, ensuring no degradation of service during the process.
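The network policies that Calico enforces on EKS are expressed as standard Kubernetes NetworkPolicy objects. A minimal sketch restricting ingress to an API tier (namespace, labels, and port are hypothetical):

```yaml
# Sketch: allow only frontend pods to reach the api pods on TCP 8080;
# all other ingress to the selected pods is denied. Names are hypothetical.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```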
Strengths: AWS has demonstrated considerable commitment to expanding its Kubernetes offerings, especially in the hybrid space, with the introduction of EKS Anywhere and EKS Connector. EKS provides support for Graviton 2 instances, offering an alternative architecture type and a competitive price/performance profile.
Challenges: AWS Outposts is too expensive for small organizations looking at hybrid cloud capabilities. However, EKS Anywhere helps to address this gap, bringing VMware support for hybrid deployment. EKS Connector provides additional visibility to any Kubernetes cluster but currently lacks management features.
DigitalOcean Kubernetes Service (DOKS)
DigitalOcean Kubernetes (DOKS) is a cloud-only Kubernetes service that allows you to deploy clusters without the complexities of handling the control plane and infrastructure. DOKS provides a powerful yet easy-to-manage Kubernetes solution with native integration for DigitalOcean Load Balancers and block storage solutions.
Simple pricing starting at the very low end appeals to users and businesses that are just starting to look at containers and orchestration. This makes DOKS ideal for open-source projects, individual developers, small businesses, and start-ups looking to get to market quickly.
Security within DOKS is suitable for the use cases that this solution currently fits; however, it lacks features that established businesses and enterprises would expect. For example, access to the Kubernetes API cannot be restricted at this time. OS patching for worker nodes is applied during cluster upgrades, so enabling auto upgrades or regularly running these upgrades is important. Kubernetes RBAC is included and secured via certificates, although additional identity federation is not available at this time.
High availability (HA) for DOKS clusters’ control plane is in the early availability stages and limited to a smaller number of regions. HA clusters have replicated control plane components and can fail over to a redundant replica node, resulting in reduced downtime for management operations.
Continuous integration and continuous deployment (CI/CD) workflows are supported within the DOKS environment by using integrations with GitHub Actions, allowing developers to use a push-to-deploy architecture for applications.
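A push-to-deploy workflow of this kind can be sketched with DigitalOcean's `doctl` GitHub Action; the secret name, cluster name, and manifest path below are hypothetical placeholders, not prescribed values:

```yaml
# Hedged sketch of a GitHub Actions push-to-deploy workflow for DOKS.
name: deploy-to-doks
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      # Fetch kubeconfig for the target cluster (name is hypothetical)
      - run: doctl kubernetes cluster kubeconfig save my-cluster
      # Apply manifests from the repository (path is hypothetical)
      - run: kubectl apply -f k8s/deployment.yaml
```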
DOKS includes several basic and advanced metric visualizations to provide insight into the health of Kubernetes clusters and deployed applications.
Strengths: DigitalOcean is very well positioned for individual developers, start-ups, and small businesses that just want to deploy containerized applications without the complexities of managing Kubernetes infrastructure.
Challenges: This solution lacks a number of features that enterprises have come to expect from managed Kubernetes services, and there are no options for hybrid-cloud or on-premises deployments.
Google Kubernetes Engine (GKE)
Google Kubernetes Engine (GKE) was the first hosted Kubernetes service to reach the market and still holds its own among the other leading cloud providers. GKE provides a stable, scalable, easily automated solution for deployment of modern containerized applications. Google continues to add to the solution with hybrid and multicloud options provided by Anthos. For serverless and application-focused deployments, Google offers Cloud Run, abstracting the application from the infrastructure to simplify deployment and management.
Management of clusters is controlled by standard RBAC, with Google offering additional integration with Active Directory and Keycloak for federation and SSO. Security updates and OS patching are handled manually or, in Autopilot-enabled clusters, automatically. Upgrades are performed on a rolling basis and, depending on the chosen architecture, involve either no disruption for regional clusters or minimal management plane interruption for zonal clusters.
Google offers a wide range of deployment options. Within the cloud you can deploy both Linux and Windows node pools, and they provide support for a number of Nvidia GPUs. On-premises deployments are supported on bare-metal and VMware. Anthos deployments to other major clouds are supported, along with the attachment of non-Anthos-managed clusters such as EKS/AKS.
GKE benefits from additional services provided by Google. Users can create CI/CD pipelines on Google Cloud using several hosted products following the popular GitOps methodology. Additionally, popular offerings within the market such as Jenkins and VSTS are also supported.
Backup of your application data and configuration can be included by enabling the Backup for GKE service, integrated into the GKE UI, APIs, and Cloud CLI.
Strengths: Google benefits from its in-depth knowledge of the Kubernetes project and its ecosystem. GKE is a very solid managed Kubernetes offering and integrates well with a wide range of complementary services.
Challenges: Anthos is still in its early phases and needs improvements to address the applicable use cases. Google has had challenges supporting the more traditional enterprise customers; however, it has made great strides in building the wider ecosystem and support for specialized areas such as data science and AI/ML.
IBM Cloud Kubernetes Service
Thanks to its acquisition of Red Hat, IBM can now offer IBM Cloud Kubernetes Service (CKS) alongside a Red Hat OpenShift managed service, enabling existing IBM and Red Hat customers to take advantage of a larger service catalog and enjoy freedom of choice for their hybrid-cloud infrastructures. In this context, IBM Cloud also offers a growing ecosystem of services and partners in an expanding number of regions worldwide.
IBM offers a wide range of pricing options to suit customers of all sizes, including a free tier designed to allow an exploration of its capabilities before committing to any spend. Cluster offerings come in multiple options including shared, dedicated, bare-metal, and virtual private cloud. Each option is available in multiple size tiers with competitive pricing that’s either hourly or monthly in the case of bare-metal.
Access to the IBM Cloud Kubernetes Service can be gained via public and/or private endpoints, each with their own options for securing access to the API endpoints. Using the private endpoint option provides greater security, enforcing access only from within subnets defined by the user and from networks connected to the private cloud network, including through IBM Cloud VPC VPN connection and WireGuard VPN. IBM also offers the ability to connect applications deployed within the Kubernetes service back to on-premises resources using the Strongswan IPSec VPN service deployed directly within the cluster.
Kubernetes cluster access and application deployment roles can be configured using RBAC. This role configuration is further expanded by the federation provided within the IBM Cloud ecosystem, allowing for single sign-on (SSO) with corporate accounts.
IBM Cloud Kubernetes Service offers both Windows and Linux Containers as well as compute instances with Nvidia GPUs. This allows flexibility for enterprises when it comes to migrating existing on-premises workloads over to the service.
Strengths: IBM Cloud is very focused on helping its customers transition from on-premises IT to hybrid cloud. Providing multiple options in Kubernetes services increases flexibility and improves that transition.
Challenges: The IBM Cloud ecosystem is still limited compared to those of the major service providers. Many of the services still need to be improved in terms of functionality and flexibility. The Kubernetes service itself lacks options for hybrid and multicloud deployments.
Linode Kubernetes Engine (LKE)
Linode Kubernetes Engine (LKE) is a cloud-only Kubernetes service that allows you to deploy clusters across a number of global regions simply and without requiring the skills to manage the infrastructure and control plane. LKE provides an easy-to-manage Kubernetes solution that integrates with existing Linode storage solutions for persistent storage and load balancers for application availability.
Pricing is simple and appealing to users and businesses that are just starting to look at containers and orchestration. LKE is ideal for open-source projects, individual developers, small businesses, and start-ups looking to get to market quickly.
CI/CD workflows are supported within the LKE environment by using integrations with solutions from GitLab and others such as GitHub Actions, allowing developers to deploy applications easily by pushing code to a configured repository.
Security within LKE is suitable for the use cases that this solution currently fits; however, it lacks features that established businesses and enterprises would expect, such as cloud firewall capabilities. For example, the Kubernetes API cannot be restricted to certain IP addresses or ranges. Kubernetes RBAC is included, as is the ability to apply additional layers of security via certificates, while identity federation is provided via Google SSO.
Highly available control planes are available within LKE. HA clusters have a replicated control plane whose components can fail over to a redundant replica node, resulting in reduced downtime for management operations. The HA feature comes at an additional cost and can be enabled either at the point of creation or by editing an existing cluster.
Strengths: Linode is well positioned for individual developers, open-source projects, and small businesses that want to get started with deployment of containers without requiring expertise in managing infrastructure.
Challenges: This solution lacks a number of the features that enterprises have come to expect from managed Kubernetes services. The number of available regions is limited outside of the USA, and there are no options for hybrid or multicloud deployments.
Microsoft Azure Kubernetes Service (AKS)
Microsoft Azure Kubernetes Service (AKS) is aligned with services from other major cloud providers, but Azure also offers managed OpenShift (jointly operated with Red Hat) and Container Instances, a service to run container applications without needing to manage the infrastructure.
Additionally, with Azure Arc-enabled Kubernetes, users can add and manage Kubernetes clusters running across cloud and on-premises locations. Support for GCP, AWS, VMware vSphere, and Azure Stack HCI is included with Azure Arc. Providing such a robust range of deployment options and locations gives customers the choice of adopting hybrid and/or multicloud applications with relative ease.
Microsoft has worked heavily on integrations and making adoption of modern applications as easy as possible for users. With a rich history established by products like Visual Studio and acquisitions like GitHub, the strengths of AKS are most visible within the ecosystem and integrations. DevOps integration is provided through Microsoft's own CI/CD offerings, allowing users to consume AKS resources from Azure DevOps pipelines and GitHub Actions. Additional projects like Azure Service Operator (for Kubernetes) allow integration of other services, such as database services, within Azure.
Integration of Kubernetes into Azure Active Directory for RBAC can be managed directly from AKS. Microsoft provides a wide range of integrations for developers, making development of secure applications easier to achieve. One example is using pod-managed identities so that applications can access connection strings and authentication details directly from systems such as Azure Key Vault. AKS offers several other security components, from the integration into Azure Active Directory for identity management to securing management APIs within the managed environment by using authorized IP ranges or even deploying a fully private cluster, limiting API server access to your own virtual network.
Patching of the Kubernetes infrastructure components is provided and runs in a rolling fashion to ensure uptime and availability during the process. The ability to configure automatic updates based on a number of upgrade channels is also available.
Strengths: AKS in combination with Arc and the wider Azure ecosystem provides a comprehensive enterprise Kubernetes experience. Microsoft has an offering that makes cloud consumption easy for traditional enterprises and software developers using the Visual Studio suite.
Challenges: Azure Arc is still in the early stages, and not all features are available. Azure Stack has struggled historically to attract attention from enterprise users. While multiple options exist for deployments outside of Azure, the experience will vary based on location, as not all integrations are available in all locations.
Mirantis Container Cloud
Mirantis Container Cloud provides users a platform to consume Kubernetes across a range of cloud and on-premises deployment options, including AWS, Azure, Equinix Metal, bare-metal servers, and virtualization (VMware, OpenStack). The platform is available both as a SaaS offering and as an on-premises deployment for secure/dark-site requirements. Utilizing Mirantis Kubernetes Engine (previously Docker Enterprise) under the hood, Container Cloud brings an automation and orchestration layer that handles the deployment, management, upgrades, monitoring, and security of your Kubernetes clusters across all deployment locations.
Mirantis provides multiple operating models, and support can be provided via a “co-pilot” offering with the OpsCare package. For customers that want to go further, OpsCare+ provides Container Cloud as a fully managed service, including the full deployment and management of Kubernetes at scale. Pricing is based on consumption of assigned cores within virtualized environments or physical cores on bare-metal servers, allowing users to start as small as they wish and grow as the requirements expand.
Container Cloud provides a full set of identity integration features, allowing you to plug in to an existing identity management system. MKE itself provides a robust set of RBAC features, including certificate-based authentication. The Mirantis container runtime is FIPS 140-2 compliant and NIST validated, providing options for environments with the highest security requirements.
Upgrades within Container Cloud are comprehensive, covering the operating system, Mirantis Kubernetes Engine, Mirantis Container Runtime, and the logging, monitoring, and alerting components where installed. Updates are applied in a rolling manner and include provisioning of new nodes where required and draining of existing applications, ensuring uptime is maintained throughout the process.
Across the entire portfolio, Mirantis aims to deliver a secure and stable platform with a wide range of options for customers; combining Container Cloud, Secure Registry, and Lens yields a modern application delivery platform.
Strengths: Mirantis provides support for both Linux and Windows worker nodes. Lens provides an interactive development environment for Kubernetes, and Stacklight provides monitoring and visibility across the environment. Container Cloud provides a secure and well-supported Kubernetes distribution across multiple locations both on-premises and within the cloud.
Challenges: Mirantis Kubernetes versioning is somewhat lagging behind the mainstream Kubernetes releases; however, the added security and hardening provided may be more important for some users than the latest features. Currently, there is no support for Google Cloud; however, this is something that Mirantis is addressing.
Oracle Container Engine for Kubernetes
Oracle Container Engine for Kubernetes (OKE) is based on the latest upstream Kubernetes stack and open-source projects. Oracle is working to keep the entire infrastructure as lean as possible, with a simple UI, while adding integrations with the rest of its ecosystem and providing useful services such as a secure container registry. Oracle is also building a good partner ecosystem around Oracle Cloud Infrastructure.
Using the Oracle Cloud Infrastructure (OCI) Service Operator for Kubernetes allows users to integrate and manage additional OCI resources such as databases directly through the Kubernetes API. This integration allows the user to build a full application stack end-to-end using the same simple deployment methodology.
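The operator pattern behind this integration is that a custom resource submitted to the Kubernetes API describes the desired cloud resource, and the operator reconciles it against OCI. The following is a hypothetical sketch of that pattern only; the kind, API group, and field names are illustrative and do not reflect the OCI Service Operator's actual schema, which should be taken from Oracle's documentation:

```yaml
# Hypothetical custom resource illustrating the operator pattern:
# the group, kind, and fields below are placeholders, not OSOK's schema.
apiVersion: oci.example.com/v1
kind: Database
metadata:
  name: orders-db
spec:
  compartmentId: ocid1.compartment.oc1..example   # placeholder OCID
  displayName: orders-db
  cpuCoreCount: 1
```

Applying such a manifest with `kubectl apply` lets the database be created, versioned, and torn down with the same workflow as the application that consumes it.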
Oracle has introduced the Ampere A1 compute platform to its cloud and integrated its deployment within the Kubernetes service. Users can deploy applications written for the Arm CPU architecture, gaining greater flexibility and removing the need to deploy different application stacks in different locations based on the required architecture. GPU support is also available within OKE, providing a single Kubernetes cluster solution that can run traditional x86 workloads, GPU workloads such as AI/ML, and Arm workloads.
OKE supports a mix of both IAM and RBAC configurations to control access to the cluster and applications. Oracle Cloud Infrastructure supports SSO with common providers such as Active Directory, giving its users the ability to rely on their existing authentication and authorization structures within the Oracle Cloud and Kubernetes environments.
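In a mixed IAM/RBAC setup of this kind, the identity provider asserts group membership and standard Kubernetes RBAC maps that group to permissions. A minimal sketch, assuming a hypothetical "dev-team" group supplied by the SSO/IAM layer:

```yaml
# Standard Kubernetes RBAC; the namespace and group name are
# illustrative placeholders for a group asserted by the IAM/SSO provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-edit
  namespace: dev
subjects:
- kind: Group
  name: "dev-team"          # group claim from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                # built-in aggregated "edit" role
  apiGroup: rbac.authorization.k8s.io
```

Because the binding references a group rather than individual users, access changes are handled in the existing directory rather than in cluster configuration.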
Oracle Application Container Cloud Service provides a lightweight service to deploy applications directly to the Oracle Cloud without managing the infrastructure components.
Strengths: OKE can provide a good platform for Oracle customers that want to migrate applications to the cloud and still rely on Oracle platforms. Some of these products are also ready to work in a container environment, and the pricing model of OKE is very user-friendly. OKE provides options for both GPUs and Arm CPU architecture, allowing the deployment of specialist workloads in the cloud.
Challenges: OKE can be seen as a good option for Oracle customers who want to migrate to the cloud, but Oracle Cloud Infrastructure is currently less feature-complete than offerings from other major cloud providers. Hybrid and multicloud options are missing unless you use third-party solutions.
Platform9 Managed Kubernetes (PMK)
Platform9 continues to pioneer and innovate hosted control plane solutions, with continued growth in its Kubernetes platform. The company is building on a solid architecture and improving the feature set at a great pace. Using the latest open-source technology, with the added benefits of expert support, Platform9 provides recommendations for tested and supported integrations and components throughout the stack.
Platform9 provides multiple subscription models, including its Freedom Plan with support for up to 20 nodes. Moving to the Growth or Enterprise Plans increases the number of managed nodes and provides 24/7 support and competitive SLAs. This enables organizations of all sizes to access Platform9 services easily.
Flexibility is a key feature of Platform9, and this is reflected in the existing feature set as well as within the roadmap and future vision for the platform. Customers can import existing Kubernetes clusters, deploy management agents to existing bare-metal OS or VM servers, or deploy a pre-packaged OVA for common hypervisors. A wide range of hardware and acceleration options are supported, including GPUs for AI/ML or graphics intensive workloads. Support for additional CPU architectures is not yet available, although the number of x86 options in the market is sufficient for most use cases.
With support for both on-premises and cloud-based deployments, including all the major hyperscalers, Platform9 truly provides a full range of locations around the globe. This enables the full use of hybrid and/or multicloud options for customers without any risk of location lock-in. Clusters can be removed from the platform with no impact on the running of underlying applications.
Platform9 offers a robust set of security options across the board, including enterprise authentication integration and network policies using Project Calico, with WireGuard for encryption on the wire. An on-premises, self-hosted option is available for dark sites where connectivity to the SaaS platform is limited or non-existent.
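Network policies on such a platform use the standard Kubernetes NetworkPolicy API, which Calico enforces. A minimal sketch (names, labels, and port are illustrative; WireGuard encryption is configured at the Calico level, not in this object) restricting ingress to an API tier:

```yaml
# Standard Kubernetes NetworkPolicy (enforced by Calico); the namespace,
# labels, and port below are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

With this policy in place, only pods labeled `app: frontend` in the namespace can reach the API pods on port 8080; all other ingress is denied.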
Strengths: Extremely easy to deploy and use, intuitive user interface, growing feature set, and the flexibility to install worker nodes on bare-metal servers, virtual infrastructure, or public cloud. Growing partner ecosystem and pre-defined deployment options for many complementary services. Great solution for end users who need more control of their infrastructure, data, and applications while still maintaining many benefits of the cloud.
Challenges: Flexibility in the deployment of this type of architecture means that there can be more components for the user to manage, such as underlying VM or hardware, while some cloud providers can remove this part of overall solution management. However, Platform9 continues to address this challenge with expanded support for Kubernetes services and Continuous Deployment applications.
Red Hat OpenShift Kubernetes Engine
Red Hat OpenShift Kubernetes Engine is a self-managed Kubernetes offering that can be deployed across several clouds and on-premises hardware. Support for AWS, Azure, IBM Cloud, and Google Cloud is provided. Red Hat provides a fully automated installation experience, deploying all the components required to run the Kubernetes Engine. Over-the-air smart updates are included, allowing you to see which updates are available for your cluster. The updates can be deployed automatically by the OpenShift Kubernetes Engine along with any dependencies.
Expanding on this automation, the Red Hat OpenShift Container Platform adds a set of operations and developer services and tools that provide a more serverless approach to deploying applications. This provides a cloud-like service; however, without the option of a fully managed solution, it still leaves an element of responsibility with the IT operations teams to manage and maintain the environment.
For more traditional managed offerings, Red Hat has partnered with Azure, IBM Cloud, and AWS to provide fully managed Red Hat OpenShift. These services are purchased from the cloud marketplaces and are jointly supported by the cloud provider and Red Hat SRE teams, providing a consistent environment for customers already using OpenShift in existing self-managed environments.
Red Hat OpenShift Kubernetes Engine supports multiple deployment models, allowing users to choose whether to have the infrastructure provisioned by the automated installer or self-provisioned. Support for Red Hat Enterprise Linux for Virtual Datacenters, Red Hat CoreOS, and Windows operating systems is included, allowing flexibility in the method of container deployment and the porting of traditional on-premises Windows applications.
Integrations are provided via the Red Hat Container Content hub and through deployment of OpenShift Container Storage, Quay, and Red Hat Advanced Cluster Manager for Kubernetes. Red Hat has a large portfolio of services, so its overall integration of the service is very good, and options exist for many of the requirements of deploying cloud-native applications in an enterprise. However, this comes at the cost of having to understand the portfolio and knowing how to integrate its parts effectively.
Strengths: Red Hat provides multiple offerings in this space, and there is a great deal of flexibility in deployment because it is not tied to a particular location or cloud provider. Documentation and support are very good, and Red Hat has a huge amount of experience from the OpenShift platform.
Challenges: As a self-managed solution, it can present more components for the user to maintain and understand. The solution lacks many of the features of fully managed offerings and cloud Kubernetes services.
6. Analyst’s Take
The overall promise of managed Kubernetes solutions is to simplify deployment and management of the infrastructure and associated software stacks, while providing the flexibility that allows the business to respond quickly to an ever-changing landscape. From this point of view, all of the services evaluated will meet these needs, but as is usually the case, the best return on investment comes from closely aligning the technical and business requirements.
Making a choice should involve considering a number of factors: the size of the organization, the type of organization (primarily development, traditional enterprise, or greenfield startup), and the overall direction of the IT strategy (on-premises, hybrid, or cloud-only).
If we look at the market from the user perspective, most of the vendors are bringing hybrid-cloud options to their customers. Going even further, some are now reaching out to edge environments (which is why most are positioned in the Platform quadrant), and only a few vendors remain reluctant to embrace this approach. There are not many providers left maintaining a cloud-only stance.
7. About Jason Benedicic
Jason is an independent consultant, based in Cambridge, UK. Jason works with customers to design and implement IT solutions that meet a variety of needs, including backup, virtualization, cloud adoption, and application modernisation. He is an expert in building and managing public cloud services and private/hybrid cloud infrastructure.
He has additional experience in Agile processes, the Software Development Lifecycle, and CI/CD pipelines. Jason is comfortable working in all areas of business from sales cycle through to support. Jason can communicate at all levels throughout your business and tailor messaging accordingly. He has additional interests in Digital Ethics, Influencing, Marketing, Strategy, and Business Processes.
Outside of the technology industry, Jason enjoys all forms of gaming, ranging from classic table-top to online RPGs. Jason has been a raider in World of Warcraft for the last 16 years and continues to push current end-game content with his guild. Alongside that, he enjoys the varied activities in Destiny 2, where you can find him in the latest raid or Crucible matches with his clan. He is also a keen cyclist.
8. About Enrico Signoretti
Enrico Signoretti has more than 25 years in technical product strategy and management roles. He has advised mid-market and large enterprises across numerous industries, and worked with a range of software companies from small ISVs to global providers.
Enrico is an internationally renowned expert on data storage—and a visionary, author, blogger, and speaker on the topic. He has tracked the evolution of the storage industry for years, as a GigaOm Research Analyst, an independent analyst, and as a contributor to The Register.
9. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.