This GigaOm Research Reprint Expires: Jun 14, 2023

GigaOm Radar for Cloud Performance Testing Tools v2.0

Vendor Evaluation and Comparison for Technology Decision Makers

1. Summary

Cloud computing technologies have achieved high adoption levels in many organizations, requiring key stakeholders on software teams—including developers, testers, quality assurance (QA), development operations (DevOps), performance engineers, and business analysts—to ensure applications can scale to meet demand in terms of users, transactions, and data and processing volumes. Confirming this ability to scale is accomplished using performance testing tools. In the companion GigaOm report, “Key Criteria for Evaluating Performance Testing Tools,” we describe the criteria and evaluation metrics used to assess vendors’ solutions in this market.

The range of vendors offering performance testing solutions is diverse and strong, with Leaders found in every quadrant of this Radar report. Providers positioned further from the center of the Radar may nevertheless offer the best solution for an enterprise’s needs and constraints, whether that be capabilities for testing as code, observability, automated root cause analysis, collaboration, scalability, chaos engineering, advanced load type testing, ease of reporting, real browser-based testing, or the ability to work with open-source tools, simulate network traffic impairments, or implement “shift left” testing.

This is the second year GigaOm has evaluated performance testing tools. Our 2021 Radar report included both cloud-based and on-premises tools. However, for this year’s report, we focused on cloud-based performance testing tools exclusively.

All the solutions assessed in this report are cloud-oriented and offer faster speeds and better affordability than on-premises solutions do, especially for large testing loads. They’re being developed at a fast pace; democratization of load testing and automated test creation are two ongoing trends to watch. Most of the solutions evaluated here offer either graphical user interfaces (GUIs) to manage tests or ways to record users navigating the application to create load testing scripts automatically.

How to Read this Report

This GigaOm report is one of a series of documents that helps IT organizations assess competing solutions in the context of well-defined features and criteria. For a fuller understanding, consider reviewing the following reports:

Key Criteria report: A detailed market sector analysis that assesses the impact that key product features and criteria have on top-line solution characteristics—such as scalability, performance, and TCO—that drive purchase decisions.

GigaOm Radar report: A forward-looking analysis that plots the relative value and progression of vendor solutions along multiple axes based on strategy and execution. The Radar report includes a breakdown of each vendor’s offering in the sector.

Solution Profile: An in-depth vendor analysis that builds on the framework developed in the Key Criteria and Radar reports to assess a company’s engagement within a technology sector. This analysis includes forward-looking guidance around both strategy and product.

2. Market Categories and Deployment Types

To gain a better understanding of the market and vendor positioning (Table 1), we assess how well cloud performance testing tool solutions serve specific market segments.

  • Small-to-medium business (SMB): In this category, we assess solutions on their ability to meet the needs of organizations ranging from small businesses to medium-sized companies. Also assessed are departmental use cases in large enterprises, where ease of use and deployment are more important than extensive management functionality, data mobility, and feature set.
  • Large enterprise: Here, offerings are assessed on their ability to support large and business-critical projects. Optimal solutions in this category will have a strong focus on flexibility, performance, data services, and features to improve security and data protection. Scalability is another big differentiator, as is the ability to deploy the same service in different environments.
  • Managed and cloud service provider (MSP/CSP): MSPs are enablers that take over a customer’s network operations and deal with maintenance, upgrades, and other day-to-day activities. Their needs may align with those in the above categories, and solutions are assessed on their ability to meet them. CSPs are smaller cloud providers that try to add more value than the hyperscale cloud providers. CSPs may be the cloud offerings of MSPs and network service providers (NSPs).

In addition, we recognize two deployment models for solutions assessed in this report: software as a service (SaaS) and customer-managed.

  • SaaS-only solutions: These are available only in the cloud. Deployed and managed by the service provider, these solutions are available only from that specific provider. The big advantage of this type of solution is that upgrades, patching, and systems management are provided as part of the service, thus delivering a simplified experience to the user.
  • Customer-managed solutions: These solutions are meant to be installed by the customer, supporting deployments both on-premises and in the cloud, allowing the customer to build hybrid or multicloud solutions. They are more flexible, giving the end user more control over resource allocation and tuning across the entire stack. These solutions can be deployed as virtual appliances or as a traditional software component installed on virtual machines or containers and managed using Kubernetes.

Table 1. Vendor Positioning

| Vendor | SMB | Large Enterprise | MSP/CSP | SaaS | Customer-Managed |
|---|---|---|---|---|---|
| Apache JMeter | | | | | |
| Grafana Labs (k6) | | | | | |
| Micro Focus | | | | | |
| Tricentis (Neotys) | | | | | |

3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

3. Key Criteria Comparison

Building on the findings from the GigaOm report, “Key Criteria for Evaluating Cloud Performance Testing Tools,” Tables 2, 3, and 4 summarize how well each vendor included in this research performs in the areas we consider differentiating and critical for the sector: key criteria, evaluation metrics, and emerging technologies.

  • Key criteria differentiate solutions based on features and capabilities, outlining the primary criteria to be considered when evaluating cloud performance testing tools.
  • Evaluation metrics provide insight into the impact of each product’s features and capabilities on the end user organization.
  • Emerging technologies and trends indicate how well the product or vendor is positioned with regard to technologies likely to become significant within the next 12 to 18 months.

The objective is to give the reader a snapshot of the technical capabilities available with the evaluated solutions, define the perimeter of the market landscape, and gauge the potential impact on the business.

Table 2. Key Criteria Comparison

| Key Criteria | Automated Test Definitions | Advanced Load Types | Testing as Code | Root Cause Analysis | Performance Insight | Collaboration Features | Deployment Environment Support |
|---|---|---|---|---|---|---|---|
| Apache JMeter | 1 | 1 | 3 | 1 | 0 | 1 | 2 |
| Apica | 2 | 2 | 0 | 3 | 2 | 1 | 2 |
| Grafana Labs (k6) | 3 | 2 | 3 | 1 | 2 | 2 | 2 |
| Loadster | 2 | 1 | 0 | 3 | 2 | 3 | 2 |
| Micro Focus | 3 | 3 | 3 | 2 | 3 | 2 | 3 |
| Nastel | 3 | 3 | 3 | 2 | 2 | 2 | 2 |
| RadView | 3 | 3 | 2 | 2 | 2 | 2 | 3 |
| SmartBear | 2 | 2 | 2 | 2 | 2 | 1 | 2 |
| Tricentis (Neotys) | 2 | 2 | 3 | 2 | 1 | 2 | 2 |

3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

Table 3. Evaluation Metrics Comparison

| Evaluation Metrics | Scalability | Breadth of Coverage | Usability | Licensing Terms | ROI & TCO |
|---|---|---|---|---|---|
| Apache JMeter | 1 | 2 | 1 | 3 | 3 |
| Apica | 2 | 3 | 2 | 2 | 2 |
| Grafana Labs (k6) | 3 | 3 | 2 | 2 | 2 |
| Loadster | 3 | 2 | 3 | 3 | 3 |
| Micro Focus | 3 | 3 | 3 | 2 | 3 |
| Nastel | 3 | 2 | 2 | 2 | 3 |
| RadView | 3 | 3 | 2 | 2 | 3 |
| SmartBear | 2 | 2 | 2 | 2 | 2 |
| Tricentis (Neotys) | 3 | 3 | 2 | 2 | 2 |

3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

Table 4. Emerging Technologies Comparison

| Emerging Technologies | Integration of AI/ML | Automated Test Creation | Support for Kubernetes- or Cloud Foundry-Deployed Applications | Support for Microservices & Service Meshes | Support for Cloud-Based As-a-Service Testing or Integration |
|---|---|---|---|---|---|
| Apache JMeter | | | | | |
| Grafana Labs (k6) | | | | | |
| Micro Focus | | | | | |
| Tricentis (Neotys) | | | | | |

3 Exceptional: Outstanding focus and execution
2 Capable: Good but with room for improvement
1 Limited: Lacking in execution and use cases
0 Not applicable or absent

By combining the information provided in the tables above, the reader can develop a clear understanding of the technical solutions available in the market. Using Tables 1 through 4, buyers should start narrowing their list to the vendors that best match their needs and future directions.

Please note that, while the scope of a project may be completely addressed by the table stakes (outlined in the corresponding Key Criteria report), readers should also consider how the key criteria (Table 2) and emerging technologies (Table 4) might alter their shortlist, because these capabilities offer the longest useful life in the enterprise.

The evaluation metrics comparison (Table 3) is where operational issues and values are exposed. Some tools may have great features but be difficult to implement or scale, which can limit the useful lifespan of a product for growing enterprises.

4. GigaOm Radar

This report synthesizes the analysis of the key criteria and their evaluation metrics impact to inform the GigaOm Radar graphic in Figure 1. The resulting chart provides a forward-looking perspective on all the vendors in this report, based on their products’ technical capabilities and feature sets.

The GigaOm Radar plots vendor solutions across a series of concentric rings, with those set closer to the center judged to be of higher overall value. The chart characterizes each vendor on two axes—balancing Maturity versus Innovation, and Feature Play versus Platform Play—while providing an arrow that projects each solution’s evolution over the coming 12 to 18 months.

Figure 1. GigaOm Radar for Cloud Performance Testing Tools

As this is the second year we’ve reported on performance testing tools, it’s important to note that this year we looked exclusively at vendors’ cloud-based offerings, whereas last year we also included vendors’ on-premises tools in our assessment.

The Radar chart (Figure 1) shows how scoring these testing tools separately allows the vendors evaluated here to showcase their value in the cloud performance market. Some vendors provide their tools both as SaaS-only and customer-managed solutions, but this Radar focuses solely on their cloud features.

The LoadRunner product line from Micro Focus remains the strongest performer in this market, with the largest set of functionality. Nastel’s CyBench sits in the same Maturity/Platform-Play quadrant. Although CyBench is not normally considered a load testing tool, its inclusion in this report reflects the total unit-testing coverage it brings to the “shift left” movement.

Looking to the top-left Maturity/Feature-Play quadrant, SmartBear, Tricentis (formerly Neotys), and RadView are all maturing their offerings while adding new functionality. In the lower-left Innovation/Feature-Play quadrant, Apache’s JMeter is representative of a state-of-the-art open-source solution, while Grafana Labs has been accelerating the development of k6 Cloud by adding enterprise functionality without impacting the usability of its open-source version. Apica is a fast-moving player and will become a Leader in the market as it works through its roadmap.

It should be noted that a few vendors, including Loadster and Grafana Labs, have made significant product developments since the last report. Loadster is a startup that’s disrupting the market, while Grafana Labs acquired k6 and has since added enterprise features while integrating k6 with its other products. A New Entrant to this year’s report is Nastel, which has a strong feature set. RadView has also crossed over into the Leader circle.

There’s no “best” Radar quadrant, as each buyer has unique needs and constraints. Using the key criteria scores to compile a short list can help narrow down the choices to one or two vendors put forward into the final selection process.

It’s important to remember that there are no bad products among the vendors evaluated for this report. A provider positioned further from the Radar’s center may well be the best choice for a particular enterprise’s needs. That said, the weighting used in the Radar graphic reflects the needs of most companies.

Inside the GigaOm Radar

The GigaOm Radar weighs each vendor’s execution, roadmap, and ability to innovate to plot solutions along two axes, each set as opposing pairs. On the Y axis, Maturity recognizes solution stability, strength of ecosystem, and a conservative stance, while Innovation highlights technical innovation and a more aggressive approach. On the X axis, Feature Play connotes a narrow focus on niche or cutting-edge functionality, while Platform Play displays a broader platform focus and commitment to a comprehensive feature set.

The closer to center a solution sits, the better its execution and value, with top performers occupying the inner Leaders circle. The centermost circle is almost always empty, reserved for highly mature and consolidated markets that lack space for further innovation.

The GigaOm Radar offers a forward-looking assessment, plotting the current and projected position of each solution over a 12- to 18-month window. Arrows indicate travel based on strategy and pace of innovation, with vendors designated as Forward Movers, Fast Movers, or Outperformers based on their rate of progression.

Note that the Radar excludes vendor market share as a metric. The focus is on forward-looking analysis that emphasizes the value of innovation and differentiation over incumbent market position.

5. Vendor Insights

Apache JMeter

Apache JMeter is a 100% pure Java open-source software application originally designed for load testing web applications. It has since expanded to other testing functions, including functional behavior testing and performance measurement.

The solution can load and performance test many types of applications, servers, and protocols. It manages testing across a range of platforms, protocols, and proprietary, third-party, and open-source solutions commonly found in enterprises, and accepts a range of inputs. This includes correlation, made possible by the ability to extract data from popular response formats, including HTML, JSON, XML, or any text format.

JMeter also provides a full-featured test integrated development environment (IDE) to speed up the recording (from browsers or native applications), building, and debugging of test plans. It also features out-of-the-box dynamic HTML reporting, as well as Prometheus, InfluxDB, and Grafana integrations.

Apache also highlights other key features, including JMeter’s command line interface (CLI) mode (previously known as “non-GUI” or “headless” mode), which enables load testing from any operating system that supports Java, including Linux, Windows, and macOS.
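For readers unfamiliar with CLI mode, a typical headless run combines the -n (non-GUI), -t (test plan), -l (results log), and -e/-o (HTML report) flags. The Python sketch below assembles that command; the file names are placeholders, and the run is attempted only when a jmeter binary is actually on the PATH:

```python
import shutil
import subprocess

def jmeter_headless_cmd(test_plan: str, results: str, report_dir: str) -> list[str]:
    """Build a JMeter CLI-mode (headless) invocation.

    -n : run in non-GUI (CLI) mode
    -t : path to the .jmx test plan
    -l : file to log sample results to
    -e : generate the HTML dashboard report after the run
    -o : output folder for that report
    """
    return ["jmeter", "-n", "-t", test_plan, "-l", results, "-e", "-o", report_dir]

cmd = jmeter_headless_cmd("plan.jmx", "results.jtl", "report")
print(" ".join(cmd))

# Only attempt the run when JMeter is installed locally.
if shutil.which("jmeter"):
    subprocess.run(cmd, check=True)
```

Because the same command works on any Java-capable OS, the invocation slots unchanged into CI pipelines and remote load generator hosts.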

Its full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups. The solution also supports caching, offline analysis, and replay of test results, and features an extensible core in which pluggable samplers allow unlimited testing capabilities, along with scriptable samplers (for example, in JSR223-compatible languages such as Groovy).

Its software-only deployment model provides a core plus plug-ins for multiple protocols. Additional protocols and plug-ins are available through a public plug-in directory, where third parties can register their commercial or free and open-source plug-ins.

Among the key criteria, a particular strength is JMeter’s testing-as-code capability. JMeter can consume data through its CSV Data Set component, test configuration can be supplied using the JMeter Property feature, and test scripts can be stored as XML under configuration management. An area for improvement is JMeter’s lack of a domain-specific language (DSL); with one, the solution could better address testing-as-code requirements and sharing across multiple users through versioning systems. While there are third-party add-ons for JMeter, the buyer is responsible for integration and lifecycle management.
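The pattern described above—external data rows plus externally supplied properties—can be sketched generically. The CSV contents, the host property, and the build_request helper below are hypothetical illustrations of data-driven testing as code, not JMeter APIs:

```python
import csv
import io

# Hypothetical test data, standing in for an external CSV data file.
CSV_DATA = """username,password
alice,s3cret
bob,hunter2
"""

# Stand-in for a test property supplied from outside the script
# (analogous to overriding a property on the command line).
PROPERTIES = {"host": "staging.example.com"}

def build_request(row: dict, props: dict) -> dict:
    """Combine one data row with test properties into a request spec."""
    return {
        "url": f"https://{props['host']}/login",
        "body": {"user": row["username"], "pass": row["password"]},
    }

# One parameterized request per data row, ready to hand to virtual users.
requests = [build_request(row, PROPERTIES)
            for row in csv.DictReader(io.StringIO(CSV_DATA))]
print(len(requests), requests[0]["url"])
```

Keeping both the data file and the script under version control is what makes the whole test plan reviewable and repeatable as code.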

Strengths: Many testing tool development efforts tend to be underfunded, so the quality of this product and its free-to-use pricing make it one of the most frequently used tools on the market. JMeter’s popularity has driven the development of many commercially supported products that build purpose-built tests on top of it. As one of the more actively developed open-source testing products, it has a broad range of features that pushes commercial testing tools to keep pace competitively.

Challenges: The lack of an enterprise support agreement is a major challenge, as is the requirement that customers maintain their JMeter installations and perform all integration work themselves. This limits a company’s ability to spot test user performance around the globe to its own capacity to spin infrastructure up and down securely.

Apica


Global organizations use Apica’s active monitoring platform to solve complex digital performance issues. Apica’s platform is designed to deliver scalable monitoring and insights across any location, device, app, or authentication. This SaaS platform aims to reduce friction and time to resolution for cloud migrations and applications and underlying infrastructure outages while assuring an optimal user experience (UX).

The platform can be deployed as a service in multicloud, hybrid, and on-premises environments. It consists of two main components: Apica Synthetic Monitoring (ASM) for high-scale synthetic monitoring and Apica Load Testing (ALT). Apica targets cloud and network service providers, as well as large, private FinTech and public sector organizations. Its competitors include Dynatrace, Datadog, AppDynamics, and Catchpoint.

Use cases for Apica’s platform include cloud migration, shift-left testing, DevOps, and full software delivery lifecycle testing, as well as monitoring digital experiences, and hybrid cloud, internet of things (IoT), application programming interface (API), and on-premises environments.

Features that differentiate Apica’s active monitoring platform include high-scale user journey monitoring, non-HTTP application testing, keyboard and mouse click simulation at scale, and non-standard protocol monitoring and testing. Apica has also developed Web 3.0 multistack testing capabilities to support the industry transition fueled by digital transformation. The solution rounds out its digital experience management (DEM) portfolio with new features, such as active synthetic transaction monitoring, to support customer needs going forward.

Apica also highlights its recent AIOps and observability innovations, as well as its use of synthetics at scale for load testing platforms, to help it stay ahead of the competition. These innovations figure in the key criteria evaluation of the platform, where it scored highly for its ability to perform detailed root cause analysis using a normalized data model. It also offers specific elements that simplify testing of particular target environments, earning it a high score for deployment environment support. It was, however, marked down for a lack of collaboration features that enable stakeholders to work together to create, run, and manage tests.

Apica’s active monitoring platform performed well on the scalability and breadth of coverage evaluation metrics. For this review, we evaluated the Apica solution bundle that includes an application monitoring tool; this combination allows for greater insight than most of the competition provides and compares favorably with its peers.

Strengths: The ability to integrate Apica’s application performance monitoring tools into its overall testing solution delivers greater insight into performance for users. An additional strength is its ability to provide value beyond initial deployment assurance to cover full lifecycle performance awareness needs.

Challenges: Apica’s support for Kubernetes and other container-based hosting options could be improved to support the testing of microservices and service meshes.

Grafana Labs k6 Cloud

Formerly known as LoadImpact, Grafana Labs k6 provides a development-oriented testing solution for developers, site reliability engineers (SREs), software development engineers in test (SDETs), and QA teams. For example, it aims to remove QA silos by involving SDETs and development teams in performance testing processes.

Public and private sector end-user organizations of every size use k6 Open Source (an extensible testing tool) and k6 Cloud (the SaaS offering) for various types of performance testing, such as front-end, back-end, failure-injection, and synthetic testing. Its competitors include LoadRunner, Blazemeter, Neoload, and Gatling.

The purely SaaS-based k6 Cloud platform supports four deployment modes: running on Grafana Labs cloud infrastructure, or deployed on the customer’s Amazon Web Services (AWS) private network, Kubernetes cluster, or owned infrastructure while reporting into k6 Cloud.

Recent innovation includes failure injection testing, also known as chaos engineering, which intentionally breaks systems to validate reliability and performance objectives. Chaos engineering tools commonly use the YAML (“YAML Ain’t Markup Language”) data serialization syntax, whereas k6 implements these capabilities in dynamic JavaScript code.

With Grafana Labs, k6 can access system-under-test (SUT) metrics. The platform uses k6 machine learning (ML) to detect performance degradation during testing. Soon, it will also leverage the Grafana Labs ML and observability solutions to automatically detect performance and reliability degradation and correlate these issues with their root cause in the infrastructure or application. Moving to a common ML process will expand the solution’s functionality faster than maintaining separate ML processes.

However, the platform would benefit from enhanced enterprise management capabilities around audit logs, organization-wide administration, and more advanced role-based access control. It also needs to provide more flexibility in using different product versions, strengthen the capabilities of its various deployment options, and improve the ability to run multiple k6 versions with new extensions.

The k6 Cloud platform scored highly for the advanced load types key criterion, providing browser extensions and an HTTP Archive (HAR) recorder for test auto-generation. The k6 application programming interface (API) simplifies dynamic correlation, which eases the maintenance of, and further changes to, the performance testing suite. This helps maintain ease of use for DevOps, letting developers work with the simple k6 interface while enabling more complex functionality for testers.
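Dynamic correlation means extracting a server-generated value (a session ID or CSRF token, say) from one response and injecting it into the next request, so recorded scripts keep working when values change between runs. A generic Python sketch of the pattern follows; the response body and token format are invented for illustration, and k6’s actual JavaScript API differs:

```python
import re

# Hypothetical first response from the system under test.
login_response = '<input type="hidden" name="csrf_token" value="a1b2c3d4">'

def correlate(body: str, pattern: str) -> str:
    """Extract a dynamic value from a response body via a regex capture group."""
    match = re.search(pattern, body)
    if not match:
        raise ValueError("correlation failed: token not found")
    return match.group(1)

token = correlate(login_response, r'name="csrf_token" value="([^"]+)"')

# Inject the extracted token into the follow-up request.
next_request = {"url": "/submit", "headers": {"X-CSRF-Token": token}}
print(next_request["headers"]["X-CSRF-Token"])
```

Tools that automate this extraction-and-injection step spare testers from hand-editing recorded scripts after every application change.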

The platform also provides multiple dashboards to foster project collaboration, such as the performance overview dashboard, test results dashboard, thresholds statuses, and scheduling overview.

k6 Cloud also scored highly for its ability to scale testing loads of up to 500,000 virtual users automatically across multiple instances without additional configuration or test setup assistance. It also received a high score for breadth of coverage, with an extensible architecture that supports third-party tools and new testing protocols. Some recent examples include Redis, SQL database, Snowflake, PubSub, AMQP, and NATS.

Strengths: Grafana Labs k6 Cloud is developer-friendly and allows for graceful migration from open-source to SaaS-based services in support of a “shift-left” or a whole-team approach to continuous performance testing. And in doing so, it enables performance teams to access enterprise-level functionality.

Challenges: The solution’s root cause analysis capabilities could be improved by consuming OpenTelemetry and OpenMetrics data from IT service management (ITSM) tools.

Loadster


Loadster is a cloud-hybrid load testing platform for websites, web applications, and APIs. It combines a SaaS dashboard with on-demand cloud instances in 24 AWS and Google Cloud Platform (GCP) regions. End users can also run browser bots (real, headless web browsers) or protocol bots (multithreaded HTTP clients) from their own engines using the public Docker image; Loadster engines are x64 Docker containers.

It’s designed to manage load testing, stress testing, spike testing, and stability testing, and is used by public and private enterprises of all sizes, except cloud and network service providers. Loadster’s SaaS web-based dashboard and editors run in the end user’s browser. For on-premises load testing, users can run the Loadster Engine Docker image with an engine key, controlling it from the SaaS dashboard.

Loadster’s main competitors are Grafana Labs k6 Cloud (formerly known as LoadImpact), LoadNinja, and NeoLoad. It differentiates itself by offering real browser-based testing for complex, Chrome-based web applications, and it highlights ease of reporting as another differentiator.

Areas for improvement include version control: because Loadster’s dashboard and editors are all web-based, test scripts and scenarios are stored centrally. Some customers have expressed interest in storing these as ordinary files they can commit to their own version control. The vendor said the next Loadster CLI release would make this possible.

Loadster also needs to improve its integration with customers’ other solutions—such as Grafana, Prometheus, and Kibana—where they want to collect load test data. It also has work to do on advanced load types, because it doesn’t allow custom ramp definitions, such as a traffic spike in the middle of a test. Nor does it yet support testing as code, though the vendor reports that capability is coming soon. It did not lag behind the competition on any of the evaluation metrics.

The solution scored highly for the key criterion of root cause analysis. Browser Bots, in particular, offer detailed traces and screenshots for finding errors. All bots provide percentile and average response times, with averages broken down by URL, to trace which URLs are contributing to performance issues. It also offers multiuser support for scripting, testing, and reporting, with indicators appearing when multiple users edit the same script.
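The per-URL breakdown described above amounts to grouping response-time samples by URL and aggregating each group. A generic Python sketch follows; the sample data is invented for illustration, and this is not Loadster’s implementation:

```python
from collections import defaultdict
from statistics import mean, quantiles

# Hypothetical (url, response_time_ms) samples from a load test.
SAMPLES = [
    ("/home", 120), ("/home", 140), ("/home", 135),
    ("/checkout", 480), ("/checkout", 510), ("/checkout", 950),
]

def per_url_stats(samples):
    """Group response times by URL; report average and p95 per URL."""
    by_url = defaultdict(list)
    for url, ms in samples:
        by_url[url].append(ms)
    return {
        url: {
            "avg": mean(times),
            # 19th of 20 cut points approximates the 95th percentile.
            "p95": quantiles(times, n=20, method="inclusive")[-1],
        }
        for url, times in by_url.items()
    }

stats = per_url_stats(SAMPLES)
# The slow /checkout outlier drags up both its average and its p95.
print(sorted(stats, key=lambda u: stats[u]["avg"], reverse=True)[0])
```

Reporting percentiles alongside averages matters because a handful of slow outliers can hide behind a healthy-looking mean.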

Strengths: Loadster already does real browser testing, while other vendors are still only doing protocol-level testing, which can miss critical UX issues. We expect real browser load testing will become table stakes functionality over the next few years.

Challenges: The platform presents challenges for customers interested in storing load test artifacts and scripts as ordinary files that they can commit to their own version control. This is a feature required to support GitOps, tie load tests to releases, enable better test management, and support value stream mapping.

Micro Focus

LoadRunner SaaS from Micro Focus builds on the company’s decades-long position as a leader in data center-based testing tools. Benefits include fluid licensing to avoid shelfware, pricing that competes strongly with its peers, and the ability to work with open-source development tools, which eases integration with developers and DevOps teams.

Scripts generated on different tools—such as JMeter, Gatling, Selenium, and Micro Focus Unified Functional Testing (formerly known as QuickTest Professional)—can be executed as they are (without the need to convert or adjust them) in the context of performance.

LoadRunner SaaS allows the end user to create, record, and develop scripts that emulate the behavior of real users in different ways, including via its proprietary IDE, Virtual User Generator (VuGen). Alternatively, users can generate scripts from an HTTP Archive (HAR) file using the LoadRunner Developer IDE plug-in.

The solution can read packet capture (PCAP) and CSV files, as well as REST calls, which simplifies script creation and makes it more flexible. It also offers advanced, intuitive logs that allow for an easier, faster debug cycle.

For scenario modeling, the solution enables real-world scenario emulation by configuring multiple settings, such as geographic load generation provisioning and user ramp up, tear down, and rendezvous points.
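Ramp up and tear down can be pictured as a schedule of concurrent virtual users over time. A minimal Python sketch of such a schedule follows; the numbers are invented, and LoadRunner configures this through its scenario settings rather than code like this:

```python
def ramp_schedule(target_users: int, ramp_steps: int, hold_steps: int, teardown_steps: int):
    """Return the number of active virtual users at each scheduling step:
    a linear ramp up, a steady-state hold, then a linear tear down."""
    per_step = target_users / ramp_steps
    up = [round(per_step * (i + 1)) for i in range(ramp_steps)]
    hold = [target_users] * hold_steps
    down = [round(target_users - per_step * (i + 1)) for i in range(teardown_steps)]
    return up + hold + down

schedule = ramp_schedule(target_users=100, ramp_steps=4, hold_steps=3, teardown_steps=4)
print(schedule)  # ramps 25 -> 100, holds, then back down to 0
```

Gradual ramps matter because they reveal the load level at which response times start to degrade, which an instant full-load start would hide.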

A particularly strong feature is the ability to simulate network traffic impairments, like jitter, dropped packets, latency, and bandwidth limits. This capability is critical for organizations that need to understand what real-world impacts on network traffic mean for the way a user will perceive their experience, as well as for detecting issues that could result in a self-induced denial of service. This is one of the few products on the market that includes integrated wide area network (WAN) emulation testing as part of performance testing.

Network and service virtualization are integrated with the test to provide better coverage and accelerate testing. A user can begin seeing results (such as transaction response times, throughput, and hits per second) as soon as the test starts, and intelligent anomaly detection, among other artificial intelligence (AI)-based capabilities, will pinpoint problems in real time.

Strengths: Simulated network behavior is a game changer for achieving real user experience testing and is unique in this market. The ability to consume the output of open-source testing tools as the basis for high-scale testing is also unique in this cloud market. The breadth of what network functions can be tested and how the tests can be structured confirms this solution’s positioning as a Leader.

Challenges: LoadRunner SaaS can improve in two key areas: perceived value in pricing and use of AI. It could improve its position by applying ML-based AI to root cause analysis and by ingesting metrics from other monitoring tools. AI could also drive test creation based on use cases or unit tests. Another improvement would be some level of pre-integration to help load test integration platform as a service (iPaaS), no-code/low-code, and robotic process automation (RPA) tools.

Nastel CyBench


CyBench from Nastel is a SaaS solution for continuous performance regression testing, including the analysis, reporting, and storage of performance reports. It integrates with IDEs as well as build, unit test, and continuous integration/continuous delivery (CI/CD) tooling and pipelines.

Running on Nastel’s data analytics platform (Nastel XRay), CyBench features AI-based ML and natural language processing (NLP) analytics to aid anomaly detection and other ML-related processes. CyBench integrates with next-generation, cloud-based CI/CD platforms so that code benchmarks and performance testing can play a role in the software delivery tool chain.

The Nastel solution generates performance tests directly from unit tests automatically, eliminating the need for testers to write any code. This way, it provides good code performance test coverage for end-user organizations that have adopted test-driven development practices. This shift-left approach enables organizations to gain insight into the cost or performance impact down to the unit level for value stream mapping.

While other solutions offer functional testing, CyBench carries out performance and load testing to track performance drift across builds and releases. Meeting the increased demand for visibility into code performance within a sprint, this proactive approach allows issues to be fixed immediately rather than added to the backlog for later prioritization. Functional testing approaches don’t catch code performance bottlenecks during the CI/CD process and won’t alert the developer to issues until one or two sprints later (or until the release is deployed to production).
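Tracking performance drift across builds reduces, at its core, to comparing benchmark timings from a baseline build against a candidate build with a tolerance. A generic Python sketch follows; the benchmark names, timings, and 10% tolerance are invented for illustration and are not CyBench’s actual mechanism:

```python
# Hypothetical median benchmark timings (ms) per build.
BASELINE = {"parse_order": 12.0, "price_quote": 8.0, "render_cart": 30.0}
CANDIDATE = {"parse_order": 12.4, "price_quote": 11.5, "render_cart": 29.0}

def regressions(baseline: dict, candidate: dict, tolerance: float = 0.10):
    """Flag benchmarks whose candidate timing exceeds baseline by > tolerance."""
    flagged = {}
    for name, base_ms in baseline.items():
        cand_ms = candidate[name]
        change = (cand_ms - base_ms) / base_ms
        if change > tolerance:
            flagged[name] = round(change, 3)
    return flagged

drift = regressions(BASELINE, CANDIDATE)
print(drift)  # only price_quote regressed beyond the 10% tolerance
```

Run at every build, a check like this fails the pipeline the moment a unit-level benchmark regresses, which is exactly the shift-left behavior the report describes.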

Strengths: CyBench generates performance benchmarks from existing unit tests automatically and uses ML and NLP to help run tests and interpret results. This approach ensures 100% test coverage and delivers one of the fastest times to value for companies that use a separate tool for UX load testing to simulate real browser or mobile traffic behavior.

Challenges: CyBench is a shift-left tool; as such, it’s not intended as a complex end-user load testing tool. Customers must adopt test-driven development within their CI/CD practice for the tool to provide value, though this is largely a non-issue, as test-driven development is a standard best practice for software development.

However, the solution doesn’t emulate the browser or mobile client, which may be critical for those with high UX performance needs. This is why it scored ++ rather than +++ for automated test creation in the emerging technology section.

RadView Enterprise

WebLOAD is a load and performance testing tool from RadView. It enables the recording of scripts with an automatic correlation engine and advanced debugging capabilities for simple to very complex scenarios. It supports a wide range of protocols and architectures, including HTTP/S, HTML5, Oracle (App and DB) and SAP protocols, SOAP, AJAX, and push technologies, as well as FTP, SMTP, TCP, and DBs. In total, WebLOAD supports over 150 protocols, giving it one of the largest ranges of systems it can test. While this report measures its SaaS offering, RadView also supports self-hosted software that customers can manage on-premises or in private clouds.

The solution is sold as one product split into three modules: a script recorder where users create and maintain scripts, load generators that generate the test load, and an analytics dashboard.

To support developers, WebLOAD integrates with Selenium for scripting and enables scripting via JavaScript. To allow automated testing as part of a CI/CD tool chain, it integrates with Jenkins and with several application performance monitoring (APM) tools, such as AppDynamics (by Cisco), New Relic, and Dynatrace.

While RadView does have its own IDE, WebLOAD also works with other IDEs, enabling the developer to use WebLOAD’s API to trigger tests. Its reports are rendered via integration with open source Grafana, including settings to create actionable reports quickly.

RadView uses AI-powered analysis to automate the detection, correlation, and (most importantly) discovery of the cause of performance bottlenecks. It also uses knowledge of previous tests or regression testing to provide trending data.

The design of WebLOAD allows it to conduct complex tests over multiple days, making it great for endurance testing or finding slow memory leaks.

Today WebLOAD is designed for performance engineers and operations experts. But it is expanding its ease of use features, with a goal of democratizing many load testing use cases. This is why we label it a Fast Mover on the Radar graphic.

Strengths: RadView offers easy-to-use and powerful test definition automation. Its wide range of load types leads the market, as does its ability to deploy testing by environment, eliminating the need to create custom tests for test, stage, and production environments. For organizations with skilled testers, it provides great time to value, and its shift-left features deliver faster ROI than most products in this market.

Challenges: RadView’s strength is apparent in its legacy support and features, but this has come at the cost of ease of use at the DevOps level for a typical developer who has never worked in a QA role. This is an issue RadView has indicated it’s addressing.


SmartBear

Two products from SmartBear, ReadyAPI and LoadNinja, are included in this evaluation; they address two different markets. ReadyAPI, as the name suggests, is for performance testing of API interfaces, while LoadNinja is for web interface testing.

ReadyAPI can support GitOps, in which testing code can be stored in a Git repository and tied directly to specific application releases. LoadNinja stores the testing scripts and data in the LoadNinja application.

ReadyAPI is a fat-client application installed and run on the developer’s workstation, while LoadNinja is a SaaS service.

While ReadyAPI has load testing features typical of this market, LoadNinja offers only simple load ramping, providing comparatively less testing capability than most products in this market.

LoadNinja is SaaS-hosted and can scale to the same level as other products in this market. ReadyAPI is limited to the power of the workstation, whether desktop, laptop, or virtual desktop infrastructure (VDI) instance. To address this, ReadyAPI has native scale integration with AWS; customers wanting to run the load in a different cloud environment would need to replicate that scaling functionality for their cloud vendor of choice.

LoadNinja is designed for non-technical staff, enabling even small organizations or departments to create their own load tests. ReadyAPI is intended for moderate-to-advanced technical development staff.

The ReadyAPI workstation application places a significant burden on desktop support staff to manage the software, which can have a greater operational impact than testing tools that run on servers. This workload can include patch and change management processes as well as system integration testing with extended detection and response (XDR) or endpoint detection and response (EDR) solutions.

Strengths: LoadNinja is designed to be easy to use, with minimal training requirements, so it can be used by non-technical staff. ReadyAPI provides strong shift-left support for API testing requirements; here, shift-left also describes a whole-team testing approach that removes QA silos by involving SDETs and development teams in performance testing processes.

Challenges: LoadNinja continues to make development advances, but its ability to test behind firewalls for API testing is limited. ReadyAPI needs a server- or SaaS-based counterpart to meet the large load testing needs of API-based integration for the likes of IoT applications. While it has autoscaling for AWS, other clouds require the customer to solve the scaling problem.

Tricentis (Neotys)

Tricentis acquired Neotys in March 2021, and in February 2022 acquired an AI testing solution to expand its testing coverage. Only NeoLoad is covered in this report.

NeoLoad enables automated performance testing through the continuous testing of APIs and applications. It lets end users reuse and share test assets and results from functional testing tools, along with analytics and metrics generated by APM tools. The solution supports a full range of mobile, web, and packaged applications (like SAP) to cover most, if not all, testing needs, and it allows customers to continuously schedule, manage, and share test resources and results across their organizations to ensure application performance.

NeoLoad claims to democratize test creation and management with an easy-to-use GUI. Power users carrying out more complex testing may prefer to use its API interfaces.

NeoLoad offers a wide range of performance testing capabilities that help to quickly and accurately identify operational performance issues; however, staff and third-party tools are needed to identify their true root causes. Although this capability exceeds open-source alternatives, it is less comprehensive than some proprietary tools.

The solution integrates natively with popular CI tools and provides plug-ins for Jenkins, TeamCity, Bamboo, and XebiaLabs XL. It can also be integrated into Docker-friendly CI pipelines, such as GitLab, AWS CodeBuild, and Azure DevOps.

NeoLoad can also simulate network traffic impairments; while not a key feature, it offers this capability through a simple set of options, emulating network conditions such as latency, packet loss, and bandwidth constraints. Combined with the ability to monitor popular platforms such as Tomcat, MySQL, WebSphere, WebLogic (Oracle Fusion), and JBoss while testing, it can also monitor legacy operating systems, such as HP-UX, which may still host these applications on-premises.

NeoLoad as code is built into the existing product and uses modern tools, such as Docker and the Git CLI, to simplify API testing processes. This is possible because NeoLoad uses a YAML-based description format that is human readable, implementation agnostic, and domain-specific to load testing.
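To illustrate, a test-as-code description in this style might look like the fragment below. This is a hedged sketch loosely modeled on NeoLoad's published as-code format; the exact key names and structure should be verified against the current official schema.

```yaml
# Illustrative sketch of a YAML load-test-as-code description.
# Key names approximate NeoLoad's as-code format and should be
# checked against the official schema; the URL is hypothetical.
name: checkout-smoke-test
user_paths:
- name: browse_and_buy
  actions:
    steps:
    - request:
        url: https://shop.example.com/cart
populations:
- name: shoppers
  user_paths:
  - name: browse_and_buy
scenarios:
- name: smoke
  populations:
  - name: shoppers
    constant_load:
      users: 50
      duration: 5m
```

Because the description is plain text, it can be versioned in Git alongside application code and reviewed like any other change, which is what makes the format a good fit for the DevOps and GitOps cultures noted below.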

It also automates pass or fail actions within pipelines based on service level objectives (SLOs) and thresholds being hit or missed. For example, bottleneck identification puts pressure on the app and compares service level agreements (SLAs) to server-level statistics to determine performance, automating SLA-based pass and fail triggers.

Strengths: The use of YAML for scripting and documenting structure is unique in this market. It enables non-technical users to do simple tests and reduces the need to train power users to do complex testing. NeoLoad is one of the few products in this market that makes it easy to test complex systems such as SAP products. Its design also makes it a good fit in DevOps and GitOps cultures.

Challenges: NeoLoad provides limited insight into what’s causing a performance issue, placing a higher burden on operations staff to determine root cause for each load run. This limitation can present labor and scheduling challenges for large enterprises with complex applications.

6. Analyst’s Take

Cloud-based load and performance testing solutions offer two major advantages over on-premises alternatives: the speed at which tests can be scheduled and run and the affordability of carrying out large load tests. In addition, cloud performance testing tools are being developed faster than their on-premises-only counterparts.

Most cloud vendors evaluated in this report have taken advantage of a greenfield opportunity to build out their solutions unencumbered by any requirements for backward compatibility with on-premises designs.

The limited capacity for on-premises testing resulted in scheduling conflicts between concurrent or late-running projects. Today’s SaaS-based vendors leverage cloud capacity to quickly spin up load-generating capacity where it best simulates the real UX.

A key takeaway from this Radar report is the ongoing democratization of load testing and automated test creation. Most of the solutions evaluated either offer GUIs to manage tests or provide methods to record users navigating the application to create load testing scripts automatically.

The growing movement to a shift-left testing approach, with its requirements to detect problems as early as possible, gives Nastel’s CyBench an edge. Its ability to provide 100% test coverage automatically with every build and, in some cases, down to the commit, puts it ahead of competitors in this regard.

At the opposite end of this spectrum is LoadRunner by Micro Focus, which can be included in the CI/CD tool chain as part of the software delivery pipeline (it fits into the right side of the development timeline). It provides unparalleled insight through its ability to see network impact and simulate real-world network conditions.

The other solutions sit in the middle ground between these two competitive extremes, each offering unique value opportunities. Loadster, for example, is an Outperformer that’s pushing its competitors to adapt to its market disruption. So, while there are no bad choices among the vendors evaluated in this report, only one or two are likely to be a great long-term fit, depending on the needs and capabilities of the customer organization and its end users.

One of the overarching issues in the performance testing market is that interoperability is extremely limited. So pick a vendor with a roadmap that aligns with your organization’s business goals and strategy for the next five years; switching later can be very labor-intensive and costly.

7. About Michael Delzer

Michael Delzer

Michael Delzer is a global leader with extensive and varied experience in technology. He spent 15 years as American Airlines’ Chief Infrastructure Architecture Engineer, and delivers competitive advantages to companies ranging from start-ups to Fortune 100 corporations by leveraging market insights and accurate trend projections. He excels in identifying technology trends and providing holistic solutions, which results in passionate support of vision objectives by business stakeholders and IT staff. Michael has received a gold medal from the American Institute of Architects.

Michael has deep industry experience and wide-ranging knowledge of what’s needed to build IT solutions that optimize for value and speed while enabling innovation. He has been building and operating data centers for over 20 years and has completed audits of over 1,000 data centers in North America and Europe. He currently advises startups in green data center technologies.

8. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

9. Copyright

© Knowingly, Inc. 2022 "GigaOm Radar for Cloud Performance Testing Tools" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact