As we learned in the associated GigaOm report, “Key Criteria for Evaluating Cloud Resource Optimization,” cloud resources that are not optimized prove costly. The most valuable cloud resource management solutions provide effective, reliable resource configuration recommendations and integrate into deployment pipelines and change management processes.
Furthermore, as the growth of cloud usage continues to outpace the rate at which IT operational analysts can be hired, automated optimization of these resources directly impacts both the bottom line of the cloud bill and the effectiveness of existing IT staff, who are freed up to work on higher-value business objectives. Taking an hour to determine whether a machine would benefit from more or fewer vCPUs is hardly worth the time and effort, but ignoring the question at scale can generate significant excess spend or risk.
As business leaders evaluate cloud resource optimization solutions, it’s important to keep the following in mind:
- Cloud resource optimization is closely aligned with the financial operations (FinOps) and cloud management platform (CMP) tooling categories, and solutions may lean in one of those directions while aiming to provide a single, comprehensive solution.
- Private cloud and public cloud resources both require oversight and optimization, and solutions tend to be stronger in one area than the other. Determine where your resource challenges exist today and what improvements you want to see 12 to 18 months from now.
- Consider delegating resource and cost optimization to individual teams, with some limited central oversight. Individual teams are closely aligned to their application performance needs and, if motivated properly and given the right tools, will ensure a balance is reached between cost and performance.
In this report, we evaluate a number of vendors on their ability to analyze and optimize cloud resources. We’ve scored them against the functionality (key criteria) and requirements (evaluation metrics) necessary for successful cloud resource optimization. While the final report contains seven vendors, we ruled out many more in the process, including several vendors with offerings focused more heavily on FinOps or CMP (which are reviewed in separate GigaOm Key Criteria and Radar Reports).
How to Read this Report
This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:
Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.
GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.
Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.
2. Market Categories and Deployment Types
For a better understanding of the market and vendor positioning (Table 1), we assess how well solutions for cloud resource optimization are positioned to serve specific market segments.
- Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises, where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
- Large enterprise: Here, offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category will have a strong focus on flexibility, performance, data services, and features that improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.
In addition, we recognize two deployment models for solutions in this report: software as a service (SaaS) and self-hosted.
- SaaS solutions: These are available only in the cloud, designed, deployed, and managed by the service provider and available only from that specific provider. The big advantages of this type of solution are its simplicity and its integration with other services offered by the cloud service provider (functions, for example). These offerings may support the installation of remote agents into customer-owned environments.
- Self-hosted solutions: These solutions must be deployed and managed by the customer, often within an on-premises data center or within a dedicated VPC of a cloud provider. These solutions are fully managed by the customer, which drives up operational costs but also allows for greater flexibility over the data collected by the platform.
Table 1. Vendor Positioning
Vendors are rated on each dimension as Exceptional (outstanding focus and execution), Capable (good but with room for improvement), Limited (lacking in execution and use cases), or Not Applicable/Absent.
3. Key Criteria Comparison
Building on the findings from the GigaOm report, “Key Criteria for Evaluating Cloud Resource Optimization,” Table 2 summarizes how each vendor included in this research performs in the areas that we consider differentiating and critical in this sector. Table 3 follows this summary with insight into each product’s evaluation metrics—the top-line characteristics that define the impact each will have on the organization. The objective is to give the reader a snapshot of the technical capabilities of available solutions, define the perimeter of the market landscape, and gauge the potential impact on the business.
Table 2. Key Criteria Comparison
Vendors are scored on the following key criteria: integration with infrastructure provisioning tools, integration with DevOps planning or ITSM tools, automatic resource reconfiguration, AI/ML-driven resource predictions, intelligent resource grouping, and abandoned resource identification. Each is rated Exceptional (outstanding focus and execution), Capable (good but with room for improvement), Limited (lacking in execution and use cases), or Not Applicable/Absent.
Table 3. Evaluation Metrics Comparison
Vendors are scored on the following evaluation metrics: simple, transparent cost model; flexibility; scalability; usability; and ROI (based on flexibility, scalability, and usability). Each is rated Exceptional (outstanding focus and execution), Capable (good but with room for improvement), Limited (lacking in execution and use cases), or Not Applicable/Absent.
By combining the information provided in the tables above, the reader can develop a clear understanding of the technical solutions available in the market.
4. GigaOm Radar
This report synthesizes the analysis of key criteria and their impact on evaluation metrics to inform the GigaOm Radar graphic in Figure 1. The resulting chart is a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and feature sets.
The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—Maturity versus Innovation, and Feature Play versus Platform Play—while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.
Figure 1. GigaOm Radar for Cloud Resource Optimization
As you can see on the Radar chart in Figure 1, this space tends to be dominated by larger organizations that aim to be full platform players (such as VMware, BMC, IBM, and Cisco). These companies offer some capabilities in the cloud resource optimization space, but often they are heavily tied to other cloud management products that organizations may deploy as a cloud management platform.
There are a couple of notable outliers in this space, though. Companies such as Granulate offer a niche product that helps in this category by optimizing performance, but it will often be paired with another solution in the category to maximize value for customers. And then there are solutions, such as Spot by NetApp and Densify, that focus more heavily on resource optimization through effective recommendations or autonomous optimizations, delivered without the need to become a full-blown CMP (though Spot also places in the CMP category and offers all CMP features if desired).
Inside the GigaOm Radar
The GigaOm Radar weighs each vendor’s execution, roadmap, and ability to innovate to plot solutions along two axes, each set as opposing pairs. On the Y axis, Maturity recognizes solution stability, strength of ecosystem, and a conservative stance, while Innovation highlights technical innovation and a more aggressive approach. On the X axis, Feature Play connotes a narrow focus on niche or cutting-edge functionality, while Platform Play displays a broader platform focus and commitment to a comprehensive feature set.
The closer to center a solution sits, the better its execution and value, with top performers occupying the inner Leaders circle. The centermost circle is almost always empty, reserved for highly mature and consolidated markets that lack space for further innovation.
The GigaOm Radar offers a forward-looking assessment, plotting the current and projected position of each solution over a 12- to 18-month window. Arrows indicate travel based on strategy and pace of innovation, with vendors designated as Forward Movers, Fast Movers, or Outperformers based on their rate of progression.
Note that the Radar excludes vendor market share as a metric. The focus is on forward-looking analysis that emphasizes the value of innovation and differentiation over incumbent market position.
5. Vendor Insights
BMC Helix Continuous Optimization
BMC Helix is offered as both a SaaS and an on-premises solution. Each offering supports “Continuous Optimization,” a set of capabilities previously named “Capacity Optimization.” These capabilities use the familiar extract, transform, load (ETL) terminology to describe how the solution connects to private and public cloud APIs to collect and store data for later analysis. The solution provides out-of-the-box support for a wide range of public and private cloud ETLs, along with Moviri and Sentry ETLs that connect to other enterprise systems (such as Splunk, AppDynamics, Elasticsearch, K8s, and storage arrays).
As mentioned, BMC Helix is a suite of products delivered through SaaS; while the continuous optimization capabilities are available on their own, they’re best paired with other solutions to gain true efficiency and visibility within the organization. BMC Helix Discovery will discover and group workload dependencies, while BMC Helix Intelligent Automation is required to automate optimization recommendations or integrate into enterprise workflows (such as infrastructure provisioning pipelines or ServiceNow approvals). These two integrations result in BMC’s high ranking on the key criteria for intelligent resource grouping and automatic resource reconfiguration.
In addition to API-based ETLs, the BMC Helix continuous optimization solution offers an agent-based metric collection method for systems that may not share their metrics through an API. This solution extends the BMC strength of managing all data center and public-cloud server instances within a single solution. This capability becomes more useful to customers interested in migrating data center resources to the cloud with migration and scenario planning capabilities built in.
BMC has gone to great lengths to provide integrations across its line of IT management systems and has taken on some of the management burden with the Helix SaaS offering. Large enterprise customers that already have a deep investment in BMC technologies will likely benefit most from the solution; they may have the capabilities already licensed or may be able to negotiate for the capabilities with an enterprise license agreement. To get the best value from the continuous optimization capabilities, customers should also leverage other capabilities such as the discovery and intelligent automation products.
Strengths: BMC Helix has an extensive list of ETLs that integrate data from many enterprise and public cloud systems. The SaaS offering is BMC’s preferred approach and it reduces administrative overhead.
Challenges: Automation of optimization recommendations may require extensive professional services that may be difficult to maintain at scale. Value is likely to be derived only from integration with other products.
Cisco Intersight Workload Optimizer
Cisco Intersight Workload Optimizer is a SaaS solution provided by Cisco that’s based on an OEM agreement with the IBM Turbonomic platform. This solution provides capabilities to optimize public and private cloud infrastructure while integrating into the deeper Cisco Intersight platform for complete management of a Cisco-defined data center. Enterprise customers that already procure much of the Cisco infrastructure landscape will find this offering compelling to add onto their existing IT infrastructure.
Cisco differentiates itself from the IBM Turbonomic offering by delivering its workload optimization capability as a fully managed SaaS solution, which earns it a high rank on the scalability evaluation metric and speeds time to value. In addition, Cisco has invested in deeper integrations into Cisco infrastructure components, creating a well-rounded management solution with deep root cause analysis capabilities for customers that leverage other Intersight products. That said, Intersight Workload Optimizer can also be used as a standalone product.
While the Cisco solution is currently based on the OEM agreement with IBM, Cisco has acquired companies that may bring some of these capabilities in-house. Cisco already owns AppDynamics as its primary application performance management play, and recently acquired Opsani and Replex—both focused on optimizing cloud spend on computing resources. It’s unclear how these acquisitions will unfold, but it is clear that Cisco is continuing to invest heavily in workload optimization.
Strengths: It’s great for customers that are all-in on Cisco and well-paired with solutions such as Cisco AppDynamics. Deep integration into infrastructure helps provide root cause analysis for complex infrastructure deployments.
Challenges: We’d like to see deeper automation integration with other, more common, infrastructure provisioning (like public cloud templates or standalone Terraform) and DevOps planning tools (like Jira, GitHub, GitLab). Lack of support for Google Cloud Platform (GCP) is a downside, but we’re told that it’s coming soon.
Densify
Densify offers one of the few solutions in this category that isn’t deeply tied into other cloud management platform components; as a result, it’s more attractive to enterprises that simply want to gain efficiencies without investing in a large CMP or a service-heavy rollout. Densify offers a SaaS solution that can optimize private (VMware) and public cloud (AWS, Azure, GCP) virtual machine (VM) instances and container workloads with a low-touch initial deployment.
Densify is focused on generating highly accurate recommendations that can be actioned confidently and automatically. Through a combination of workload pattern analysis, benchmarks, deep policies, API-driven integrations, app owner reports, effort rankings, ITSM integration, and more, the Densify solution provides deeper and more effective optimization than solutions that focus more on billing or “advisors” that generate more basic suggestions that require extensive review.
Densify also provides an extensive policy-based framework that allows for granular tuning of its resource optimization analysis for different portions of the environment (such as production versus test/dev) as well as different applications. Densify’s policies allow customers to codify the desired operational characteristics for workloads. Options include CPU and memory utilization thresholds, risk tolerance (for example, optimize to a typical day versus the busiest day), high availability requirements, disaster recovery and business continuity requirements, approval policies, automation policies, and catalog restrictions. These policies enable a high degree of tuning to ensure that the recommendations are correct and the infrastructure is properly optimized for the types of applications being hosted.
Finally, it is worth noting that Densify ranks high on the key criteria for integration with infrastructure provisioning tools by providing an extensive API that enables integration into ITSM systems and infrastructure provisioning pipelines. While Densify can apply recommendations automatically in some cases, it can also provide recommendations via the API to integrate into any infrastructure deployment pipeline. Densify customers regularly use this functionality with Terraform or CloudFormation templates, but it could extend into any provisioning tool that can pull the recommended configuration values from the API.
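The pipeline pattern described above can be sketched briefly. The example below converts a single optimization recommendation into a Terraform .tfvars fragment that a pipeline stage could feed into `terraform plan`. The payload shape, field names, and workload are hypothetical illustrations; Densify’s actual API schema is not documented in this report.

```python
import json

# Hypothetical recommendation payload: field names are assumptions for
# illustration, not Densify's documented schema.
recommendation = {
    "name": "web-frontend-01",
    "currentType": "m5.2xlarge",
    "recommendedType": "m5.xlarge",
}

def to_tfvars(rec: dict) -> str:
    """Render a recommendation as a Terraform .tfvars fragment so the
    provisioning pipeline, not the optimizer, applies the change."""
    return (
        f'# {rec["name"]}: {rec["currentType"]} -> {rec["recommendedType"]}\n'
        f'instance_type = "{rec["recommendedType"]}"\n'
    )

print(to_tfvars(recommendation))
```

The point of this design is that the optimizer never holds the provisioning keys: it only publishes values, and the existing Terraform or CloudFormation pipeline remains the single path for change.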
Densify provides a simple licensing structure based on the number of targets undergoing optimization, but customers may also acquire the technology through Densify’s partnership with Intel and the Intel Cloud Optimizer program, where Intel funds a year of Densify for qualifying organizations.
Strengths: Highly effective recommendations deliver confidence and actionable results. Densify offers flexible automatic optimization and integration into deployment pipelines. As a single solution, it brings a fast time-to-value.
Challenges: Features tend to be stronger with AWS before becoming available in other cloud providers. We would like to see more of an “app-owner” self-service interface, as reports are currently delivered to app owners via PDF or through custom software integrations with the API.
Granulate
Categorized as an autonomous optimization solution, Granulate.io is a differentiated offering that doesn’t deliver exactly on the key criteria functionality, but it is worth mentioning because it delivers on the overall goals of performance improvement and cloud resource efficiency.
Unlike other solutions in this category, Granulate focuses on the optimization of computing resources within the VM. Granulate deploys an agent, identified as the gAgent, into any operating system. The gAgent monitors application patterns within the VM and optimizes the overall OS scheduling decisions to improve performance. It also sends metrics back to the gCenter, the centralized management dashboard provided by Granulate and delivered via SaaS.
Intel Workload Optimizer by Granulate is a partnership between Granulate and Intel whose name best describes the product’s focus. While other solutions in this category focus on financial reporting and better alignment of cloud configurations with usage, Granulate attempts to do more with the compute resources already provisioned. Think of it like adding a turbocharger to your car: the engine stays the same size, but you gain more speed. This offering has a lot of promise to drive down cloud costs while also increasing overall system performance, but organizations will want to use it in conjunction with another cloud resource optimization solution in this category. Paired up this way, Granulate can make workloads more efficient, and then a compatible solution can right-size the resources once they’re efficient.
Granulate ranks high on the evaluation criteria for a simple, transparent cost model by charging based on the CPU hours that are managed and optimized. This billing model directly aligns with cloud utilization, and the charge can be offset by the savings the optimizations generate. Granulate supports on-premises VMs as well; contact the vendor for pricing in this area. Users wishing to evaluate the software’s capabilities can start with the free, open-source gProfiler to determine whether they’d benefit from the solution.
Strengths: Granulate delivers on the ability to optimize processes with very little user intervention, ultimately driving down overall cloud resource needs. Time to value is very short, with a simple deployment model and simple, effective dashboards.
Challenges: This solution does not actually help determine the optimal cloud resource configuration and so it needs to be paired with another solution in this category. Larger enterprises might have difficulty managing this scenario at scale with a single SaaS dashboard that manages all resources within the organization.
IBM Turbonomic
IBM’s acquisition of Turbonomic in 2021, paired with its ongoing OEM relationship with Cisco, demonstrates how well the solution delivers on its promise to use AI to fine-tune resources and guarantee optimized application performance.
Turbonomic applies the financial model of supply and demand to its resource optimization capabilities, and while that’s often seen in more FinOps-focused tools, it applies this approach to deeper technical recommendations to remind IT operators that every resource has a cost and that resources are finite. This approach aligns resource optimization more closely with capacity planning; while capacity planning is thought of less in public cloud environments, it’s an important capability across both public (AWS, Azure, and GCP) and private cloud (VMware) infrastructures.
The Turbonomic platform grew out of the need to optimize private infrastructure and has easily extended its capabilities into the public cloud. This history of private cloud optimization has built up a robust corpus of infrastructure integrations that allows the solution to go deep and provide recommendations that may apply to individual workloads or shared platform components such as hyperconverged infrastructure or storage arrays.
Building on the financial cost-benefit model, Turbonomic models environments as a market and uses market analysis to manage the supply and demand of resources. It visually displays the supply chain for all resources in an intuitive UI that reshapes the way users think about their resources. Additionally, the data ingestion capabilities enable organizations to specify custom metrics that can be included in the market analysis, helping to tie infrastructure components together to build a more complete supply chain view. This approach is also how the Turbonomic platform extends to other key application visibility solutions such as IBM Instana, or other third-party application observability tools. In this way, a transaction flow can be visualized and analyzed from ingestion through to the storage volume, earning the solution a high rank on the overall evaluation metric of flexibility.
The IBM Turbonomic UI readily displays recommendations that are easily interpreted and classified as performance or savings. These recommendations can then connect to ServiceNow workflows or, in many cases, be automatically applied from within the UI. Organizations can use this solution to automatically purchase reserved instances, resize systems, move or reconfigure resources, or clean them up—striking a strong balance between FinOps and cloud resource optimization capabilities and ranking high on the key criteria for automatic resource reconfiguration.
IBM Turbonomic is delivered through the self-hosted deployment model and is licensed on a per-node basis, allowing customers the flexibility to expand their use of the solution. There is reference to a SaaS model that can deploy in either AWS or Azure; however, further details on this offering are unclear.
Strengths: Self-hosted deployment options provide flexibility. This solution strikes a good balance between FinOps and resource optimization capabilities, with strong intuitive UI and deep analysis of infrastructure components.
Challenges: Greater integration into DevOps planning and infrastructure provisioning tools would allow customers more flexibility.
Spot by NetApp
Spot by NetApp encompasses many of the capabilities organizations may desire for optimizing cloud resources, and it is best known for determining the most cost-effective cloud resource on which to deploy applications. Spot technology uses machine learning and analytics algorithms that enable organizations to utilize spot capacity (lower-cost instances) for production and mission-critical workloads.
Spot continuously scores the different capacity pools across operating systems, instance types, availability zones, regions, and cloud providers to make the most intelligent decisions in real time regarding which instances to choose for provisioning and which ones to rebalance and replace proactively.
Spot has recently acquired CloudCheckr, which provides cloud visibility, cost optimization, cost allocation, security and compliance, and resource management capabilities. This acquisition is notable as it expands Spot’s abilities to perform resource optimization analysis passively; in other words, without needing to fully manage the lifecycle of the cloud instances.
Customers looking to evaluate where optimization opportunities exist, but not looking to fully hand over the keys to a brokering service, can use CloudCheckr from day one. CloudCheckr can connect to any of the major cloud providers and begin to generate insights. Further, for AWS workloads, CloudCheckr has the ability to apply the optimization recommendations automatically, though this capability is lacking in other cloud providers.
When an organization seeks simplified automation, it likely leverages the Spot Elastigroup service, which ranks high on the evaluation metric of usability. It analyzes resource usage continuously and provides autoscaling groups that optimize compute resources to ensure availability and meet resource demands using the lowest-cost compute options without intervention.
Spot pricing has always been radically different from others in this category. Spot’s automated optimization products are priced based on percentage of your savings, while its CloudCheckr cost visibility and management product is priced based on your overall cloud spend.
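A little arithmetic makes the difference between these two pricing models concrete. The sketch below is illustrative only: the actual rates are not disclosed in this report, so the 20% savings share and 1% of spend figures are assumptions chosen for the example.

```python
def savings_share_fee(baseline_cost: float, optimized_cost: float,
                      share_rate: float = 0.20) -> float:
    """Fee under a 'percentage of savings' model: the vendor keeps a cut
    of the difference between baseline and optimized spend, so the fee is
    zero unless the customer actually saves money. Rate is assumed."""
    savings = max(baseline_cost - optimized_cost, 0.0)
    return savings * share_rate

def spend_based_fee(total_cloud_spend: float, rate: float = 0.01) -> float:
    """Fee under a 'percentage of overall spend' model, the style used
    for cost visibility products. Rate is assumed."""
    return total_cloud_spend * rate

# A workload costing $10,000 on demand but $6,000 after optimization:
fee = savings_share_fee(10_000, 6_000)  # 20% of the $4,000 saved
print(fee)  # 800.0
```

The savings-share model is what makes the pricing self-justifying: if the optimizer produces no savings, the automated optimization products cost nothing.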
Strengths: The pricing model guarantees savings, while SLAs ensure that applications still achieve the performance they require. This solution has a simple deployment model for continuous optimization and many integrations for ITSM workflows and infrastructure deployment pipelines.
Challenges: CloudCheckr is not yet integrated into the Spot console, and there appears to be some potential overlap between its capabilities and those already offered by Spot. Lack of on-premises capabilities will limit use for some organizations.
VMware
VMware is no stranger to the private cloud, having led the space with the introduction of the VM-driven data center orchestrated by vCenter, an API-accessible management solution. VMware expanded these capabilities into deeper operational insights and automation with the maturing of the vRealize software suite, notably vRealize Operations (vROps) and vRealize Automation (vRA).
Existing VMware customers that wanted to use the same solutions in the public cloud could now take advantage of VMware’s investments in running on AWS (VMware Cloud on AWS), Azure (Azure VMware Solution), and Google (Google Cloud VMware Engine). However, many customers have been deploying workloads to the public-cloud engines already (without the VMware tooling), and sought a way to optimize their public and private-cloud use under a single tool. VMware recognized this need and acquired CloudHealth to meet it.
CloudHealth is the primary product in this category that monitors public-cloud infrastructure utilization to drive optimization recommendations. It’s a well-balanced solution that provides FinOps capabilities paired with configuration efficiency. CloudHealth customers can improve resource utilization with tailored right-sizing recommendations, manage commitment-based discounts throughout their lifecycle, and drive continuous optimization with governance policies and automated actions that execute changes in their public-cloud environment. VMware can also provide security and compliance assurance via CloudHealth Secure State.
To optimize both private and public-cloud resources, VMware recommends using a combination of CloudHealth and other VMware products.
The first such integration is with vRealize Operations, which has extensive capabilities to manage performance, availability, capacity, cost, and compliance in a hybrid infrastructure; the management pack for CloudHealth acts as a bridge between these two worlds, bringing public-cloud costs and resource usage from CloudHealth into vRealize Operations. The integration is bi-directional: vSphere-based data from vRealize Operations can also be ingested into the CloudHealth platform. Finally, vRealize Operations can be paired with vRealize Automation to automate the workflows required to dynamically modify cloud resource configurations based on the recommendations collected by CloudHealth and vROps.
VMware provides solutions trusted by the largest organizations, and it has also been a fit for small to medium-sized businesses. Experienced in deploying massively scaled private-cloud infrastructure (more than 300,000 VMs in a single instance), with CloudHealth supporting deployments of more than 1,500 cloud accounts and a monthly spend of $30 million, VMware excels in its ability to meet any demand a customer may throw at it and ranks high on the evaluation metric of scalability. CloudHealth can be purchased on its own, but like most VMware products, it’s best packaged with other products to achieve the goals set out in this category. Most customers will be looking to acquire the vRealize Cloud Universal Advanced or Enterprise license, which includes CloudHealth, vRealize Operations Cloud, and vRealize Automation, among other additional products.
Finally, it’s worth noting that VMware is investing in its VMware AI Cloud service, which leverages reinforcement learning to fine-tune infrastructure recommendations while also delivering modeling capabilities. This feature currently supports only vSAN optimization, but is being actively expanded across compute, network, application, and cost optimization spaces.
Strengths: It’s an easy fit for customers already using other VMware products. The SaaS offering makes it easier for new customers to get up and running. VMware delivers confidence when delivering at scale.
Challenges: CloudHealth focuses on public cloud environments and depends on other VMware products for private and hybrid cloud resource optimization. The solution’s greatest benefits are realized by customers who have a deep investment in other VMware technologies.
6. Analyst’s Take
Looking at the cloud resource optimization space, it’s evident that solutions that start to solve a problem here often get acquired or packaged into other, broader cloud operations management platforms. Some solutions in this space lean toward better financial outcomes, while others lean toward better observability and capacity management outcomes.
Many of the larger infrastructure management players with experience in private cloud are using these capabilities to grow their foothold in cloud management and orchestration (such as Cisco, VMware, BMC, and IBM). However, companies like Spot and Densify have taken a different approach, forgoing the desire to orchestrate the data center and instead focusing solely on automating cloud optimization. This latter group of solutions will be attractive to customers born in the cloud, while the former group will be leveraged more by organizations that already have a deep investment in the technology and need to move private cloud resources to the public cloud.
When it comes to analyzing and optimizing public cloud resources, not all solutions are equal, and neither are all clouds. Looking across the market, it’s clear that most solutions are built first for AWS, then ported to Azure, and finally (if at all) to GCP. From the vendor perspective, this sequence is driven by customer demand; the practical consequence is that organizations wanting the latest and greatest features will get them on AWS first.
So, what should business leaders consider from the information provided when adopting this type of technology?
- If you’re an organization with heavy investment in private cloud infrastructure and are looking to move to the public cloud, consider looking at your existing provider’s solutions before ruling them out.
- If you’re an organization born in the cloud or focused on cloud-only workloads, consider moving to a provider that manages the complexity of automation for you.
- If you’re heavily invested in infrastructure as code (IaC) and managing your own infrastructure deployment pipelines, strongly consider solutions that publish all optimization data and recommendations via API. They will allow you to integrate into your deployment pipeline without handing over the provisioning keys to a full-blown CMP.
- Don’t analyze resource use without a specific goal in mind. Align the metrics to a business outcome or focus on costs, and then consider whether the effort on automation is worth the savings.
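For organizations taking the API-driven route described above, the integration step often amounts to transforming a vendor’s recommendation payload into input for a deployment pipeline, rather than letting a CMP provision on your behalf. The sketch below is a minimal, hypothetical illustration in Python: the payload fields (`resource_id`, `recommended_type`, `confidence`, and so on) are assumptions standing in for whatever schema a given vendor’s API actually returns, not any specific product’s format.

```python
import json

# Hypothetical recommendation payload -- the field names here are
# assumptions for illustration, not any specific vendor's API schema.
SAMPLE_RECOMMENDATIONS = json.loads("""
[
  {"resource_id": "i-0abc", "current_type": "m5.2xlarge",
   "recommended_type": "m5.xlarge",
   "est_monthly_savings": 140.0, "confidence": 0.92},
  {"resource_id": "i-0def", "current_type": "c5.large",
   "recommended_type": "c5.xlarge",
   "est_monthly_savings": -55.0, "confidence": 0.64}
]
""")

def to_tfvars(recommendations, min_confidence=0.8):
    """Translate high-confidence rightsizing recommendations into a
    Terraform-style variable map (resource_id -> instance type) that a
    deployment pipeline could merge in at its plan stage, keeping the
    team in control of when changes actually ship."""
    overrides = {}
    for rec in recommendations:
        # Only act on recommendations the vendor is confident about;
        # low-confidence items are left for human review.
        if rec["confidence"] >= min_confidence:
            overrides[rec["resource_id"]] = rec["recommended_type"]
    return {"instance_type_overrides": overrides}

if __name__ == "__main__":
    print(json.dumps(to_tfvars(SAMPLE_RECOMMENDATIONS), indent=2))
```

The design point is the one made in the bullet above: because the optimization data arrives as plain API output, the team decides how and when it flows into provisioning, instead of handing write access to an external platform.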
7. About Shea Stewart
Often placed directly between the product and the end customer, Shea has focused his nearly 20-year IT career on helping enterprises evaluate, design, deploy, and operate cutting-edge technologies from the data center to the cloud. Shea enjoys the challenge of understanding how a new tool or product might fit into an existing environment and how it can be scaled effectively with the right processes and technical glue.
Shea is a born technologist and has held engineering, architecture, and operations roles across data center and cloud technologies. He built and managed a DevOps-focused professional services consulting firm, where he coached and supported new consultants on Cloud Native technology platforms built with partner technologies such as Google Cloud, Red Hat, HashiCorp, Sysdig, and many others. Shea has also held the role of CSO and has spoken at length about including security tools and processes within development and deployment pipelines. Shea will never run as root.
Currently, Shea focuses on developer platforms and their associated toolchains in the Cloud Native and DevOps landscapes, working with startups, enterprises, and governments alike to provide an objective, practitioner’s viewpoint.
8. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.