This GigaOm Research Reprint Expires: Feb 5, 2023

API and Microservices Management Benchmark v2.0

Product Evaluation: Kong Enterprise and Apigee X

1. Executive Summary

Application programming interfaces, or APIs, are a ubiquitous method and de facto standard of communication among modern information technologies. The information ecosystems within large companies and complex organizations encompass a vast array of applications and systems, many of which have turned to APIs for exchanging data as the glue that holds these heterogeneous artifacts together. APIs have begun to replace older, more cumbersome methods of information sharing with lightweight, loosely-coupled microservices. This change allows organizations to knit together disparate systems and applications without creating technical debt from tight coupling with custom code or proprietary, unwieldy vendor tools.

APIs and microservices also give companies an opportunity to create standards and govern the interoperability of applications—both new and old—creating modularity. Additionally, they broaden the scope of data exchange with the outside world, particularly mobile technology, smart devices, and the Internet of Things (IoT), because organizations can share data securely with non-fixed-location consumers and producers of information.

The popularity and proliferation of APIs and microservices have created a need to manage the multitude of services a company relies on—both internally and externally. APIs vary greatly in protocols, methods, authorization/authentication schemes, and usage patterns. Additionally, IT teams need greater control over their hosted APIs, such as rate limiting, quotas, policy enforcement, and user identification, to ensure high availability while preventing abuse and security breaches. APIs also have enabled their own economy by allowing the transformation of businesses into a platform (and even a platform into a business). Exposing APIs opens the door to many partners who can co-create and expand the core platform without knowing anything about the underlying technology.

Still, many organizations depend on their apps, APIs, and microservices for high performance and availability. For this report, we define “high performance” as describing companies that experience workloads of more than 1,000 transactions per second (tps) and need a maximum latency below 30 milliseconds across their landscape. For these organizations, the need for performance is as critical as the need for management, because they rely on these API transaction rates to keep pace with their business operations. For them, an API management solution must not become a performance bottleneck. On the contrary, many of these companies are looking for a solution to load balance across redundant API endpoints and enable high transaction volumes. Imagine a financial institution with 1,000 transactions happening per second—that translates to more than 86 million API calls in a single 24-hour day. Performance is therefore a critical factor when choosing an API management solution.
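A quick back-of-the-envelope sketch (in Python, purely illustrative) confirms the daily volume implied by a sustained transaction rate:

```python
# Daily API call volume implied by a sustained transaction rate.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

def daily_calls(tps: int) -> int:
    """Total API calls in one 24-hour day at a constant rate of `tps`."""
    return tps * SECONDS_PER_DAY

print(daily_calls(1_000))  # 86,400,000 -- over 86 million calls per day
```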

In this report, we reveal the results of performance testing we completed on two API and Microservices Management platforms: Kong Enterprise and Google Cloud Apigee X.

In this performance benchmark, Kong came out a clear winner—particularly because of its higher rate of transactions per second. Kong’s maximum transactions per second throughput, achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency, was 54,250. By contrast, Apigee X’s maximum throughput was 1,750.

Testing hardware and software in the cloud is very challenging. Configurations may favor one vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the workload itself. Even more challenging is testing fully managed, as-a-service offerings for which the underlying configurations (processing power, memory, networking, et cetera) are unknown. Our testing demonstrates a narrow slice of potential configurations and workloads.

As the report’s sponsor, Kong opted for a default Kong installation and API gateway configuration out of the box—the solution was not tuned or altered for performance. The fully managed Apigee X was used “as-is,” since, by virtue of being fully managed, we have no access to, visibility into, or control of its respective infrastructure.

We hope this report is informative and helpful in uncovering some of the challenges and nuances of API management platforms.

We have provided enough information in the report for anyone to reproduce this test. You are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.

2. Full Cycle API Management

This report focuses on API management platforms deployed in the cloud. The cloud enables enterprises to differentiate and innovate with microservices rapidly. API endpoints can be cloned and scaled in a matter of minutes. The cloud is a disruptive technology, offering elastic scalability relative to on-premises deployments, enabling faster server deployment and application development, and allowing cost savings on compute. For these reasons and others, many companies have leveraged the cloud to maintain or gain momentum as a business.

This report examines the results of a performance benchmark test completed with two popular API management vendors: Kong and Apigee—two full-cycle API management platforms with practically limitless scale-out potential and architectures for large-scale, high-performance deployments. Despite these similarities, there are some distinct differences between the two platforms.

Kong
Kong was originally known as Mashape until the release of its API platform. Kong kept a keen eye on delivering performance with a lightweight, cloud-native infrastructure when it based its API Gateway and platform on a lightweight proxy known for handling more than 10,000 simultaneous connections with minimal memory usage and for reverse proxying with caching. Kong became an open-source project in 2015. Today, it is used by well over 5,000 organizations on 400,000 running instances and has had 54 million downloads from GitHub.

The Kong API Gateway is available as open-source software (OSS) with an impressive range of functionality, including open-source plugin support, load balancing, and service discovery. Kong Enterprise (KE) 2.7—the edition tested in this benchmark—adds expanded functionality, such as a management dashboard, a customizable developer portal, security plugins, metrics, and 24×7 support. In this report, any mention of Kong applies to Kong Enterprise as well.

Kong and Kong Enterprise can be deployed either in the cloud or on-premises. For us, the installation took less than 10 minutes from scratch on an Amazon Web Services (AWS) EC2 instance. Debian- and Red Hat-based package managers (Apt and Yum, respectively) have Kong in their repositories, and Docker and CloudFormation options are also available.

Kong can operate as a single node, or nodes can be joined to form a cluster. In a single-node configuration, the PostgreSQL or Cassandra database can live on the same instance as Kong. In a cluster configuration (as pictured below), the database is on a separate instance. Scaling Kong horizontally is simple. Kong is stateless, so adding nodes to the cluster is as easy as pointing a new node at the external database, from which it fetches all the configuration, security, services, routes, and consumer information it needs to begin processing API requests and responses. Also characteristic of a cluster environment is a load balancer (such as Nginx or HAProxy) used at the edge to provide a single address for clients and distribute requests among the Kong nodes using a chosen strategy (e.g., round robin or weighted).
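As an illustration of how little is involved, a hypothetical kong.conf fragment for a cluster node pointing at the shared PostgreSQL database might look like the following (host name and credentials are placeholders, not values from our test environment):

```
# kong.conf -- illustrative fragment; all values are placeholders
database = postgres               # use the shared external datastore
pg_host = kong-db.internal.example.com
pg_port = 5432
pg_user = kong
pg_password = changeme
pg_database = kong
```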

Kong has a thriving ecosystem of plugins (referred to as the Kong Hub) supporting both open-source and enterprise plugins, such as LDAP authentication, CORS, Dynamic SSL, AWS Lambda, Syslog, and others. Kong is based on Nginx and allows users to create their own plugins using LuaJIT.

Apigee X
Apigee has been around a long time—since well before the advent of containers. Google acquired Apigee in September 2016 to give itself an API management solution to compete with the products of large cloud vendors like Amazon API Gateway and Microsoft’s Azure API Management. Apigee’s latest microservices product, Apigee X, was released in early 2021. It is available on-premises (a deployment Apigee calls Hybrid Cloud) and as a software-as-a-service (SaaS) offering on Google Cloud Platform. In fact, Apigee even exhorts potential customers on its own website1 to “think twice” about an on-premises deployment, calling it “an iceberg of maintenance and cost.” This likely reflects Google’s influence on Apigee, as Google prefers to see the product deployed on Google Cloud.

Since Google clearly recommends its fully managed Apigee X offering to customers, we tested it out of the box with a Standard-level license—permitting 180 million API calls per year with no rate limiting or bandwidth reduction.

3. GigaOm API Workload Test Setup

The benchmark was designed to test the performance of the two aforementioned API and microservice management platforms—Kong and Apigee. The goal was to ascertain how well each platform withstands significant transaction loads, simulating the use case of a high-performance, high-availability environment within companies that rely heavily on APIs and expect superior results from their API gateways.

API Workload Test

The GigaOm API Workload Field Test is a simple workload designed to attack an API or an API management worker node (or a load balancer in front of a cluster of worker nodes) with a barrage of identical GET requests at a constant number of requests per second.

To perform the attacks, we used the HTTP load testing tool Vegeta, a free-to-use workload test kit available on GitHub. The Vegeta tool returns a results bin file that contains the latency and status code of every request. The attack tool measured latency as the time elapsed from the point when an individual API request was made to when the API response was received. Thus, if we tested 1,000 requests per second for 60 seconds, the attack tool recorded 60,000 latency values. We used that data to compile and interpret the results of the test.
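To illustrate how those per-request latencies roll up into the percentile figures reported later, the following Python sketch computes nearest-rank percentiles from a list of latency samples. Vegeta's own `report` command produces these figures for you; this is just the underlying idea, and the synthetic samples are hypothetical:

```python
import math

def percentile(latencies_ms: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(latencies_ms)
    rank = math.ceil(pct / 100 * len(ordered))  # 1-based nearest rank
    return ordered[max(rank - 1, 0)]

# 60,000 synthetic samples standing in for a 1,000 rps x 60 s attack
samples = [1.0 + (i % 100) * 0.1 for i in range(60_000)]
for p in (50, 90, 95, 99, 99.9, 99.99):
    print(f"p{p}: {percentile(samples, p):.2f} ms")
print(f"max: {max(samples):.2f} ms")
```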

The test also requires a backend API that can listen and respond to requests. In this case, our backend API listens for a GET request such as:

    http://ipaddress/api

The API would respond with a string of 1,024 pseudorandom Unicode characters, such as:

    taZ3psgHkQ...

For these tests, we used a response payload size of 1KB.

The back-end API we used is further documented in the Appendix.

We completed three attempts per test on each platform, configuration, and request rate. We started with an attack rate of 1,000 requests per second (rps) and scaled up to 2,000 rps, 5,000 rps, 10,000 rps, and 20,000 rps. We ran each test for 60 seconds. We captured the latencies at the 50th, 90th, 95th, 99th, 99.9th, and 99.99th percentiles, along with the maximum latency seen during the test run. We recorded the test run that resulted in the lowest maximum latency or, in the event of errors, the highest success rate. Error status codes included HTTP status code 429 “Too Many Requests” and any 5xx codes, most often 500 “Internal Server Error.” A success rate of 100% meant all requests returned a 200 “OK” status code.
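The success criterion above reduces to a simple classification of status codes, sketched here in Python (the sample counts are hypothetical, not measured results):

```python
def success_rate(status_codes: list[int]) -> float:
    """Fraction of requests that hit neither a 429 nor a 5xx error."""
    errors = sum(1 for code in status_codes if code == 429 or 500 <= code <= 599)
    return 1.0 - errors / len(status_codes)

# Hypothetical 60-second run at 1,000 rps: 60,000 requests total
codes = [200] * 58_000 + [429] * 1_000 + [500] * 1_000
print(f"{success_rate(codes):.1%}")  # 96.7%
```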

The results are shared in the Field Test Results section.

Test Environments

A goal in setting up the benchmark environments was to create as close to an apples-to-apples comparison as possible. This is a challenge in modern cloud infrastructures compared to the closed-loop, “sterile” lab environments of benchmarking in the past. There was the added complexity of testing the Apigee SaaS offering against Kong’s on-premises offering.

Kong’s on-premises offerings can be obtained freely from Yum and Apt repositories (after which a license key is required). Apigee X is not readily available as an on-premises offering, so its SaaS offering was tested. For the benefit of our audience, it makes sense to test the software each competitor is actually selling.

Therefore, we still required assurance that we were getting like-for-like infrastructure for both competitors. Apigee documents and recommends that on-premises customers deploy their message processors on machines with at least 8 CPU cores and 21GB of memory. Thus, we can only assume that Apigee SaaS customers also get at least 8 cores and 21GB of memory. Accordingly, for Kong we chose a c5n.2xlarge EC2 instance type, which also has 8 cores and 21GB of RAM.

Also, for Kong, infrastructures for other required services were deployed, which included:

  • HAProxy load balancer
  • Kong PostgreSQL database

All extra services were placed on c5n.2xlarge EC2 instances with 8 cores and 21GB of RAM. Our benchmark was designed so that these services would not be bottlenecks because we were most interested in the raw processing power of the API gateways themselves.

Unfortunately, guaranteeing the same configuration was virtually impossible. Apigee X has an auto-scale feature that will scale up to meet demands. For Kong, we tested a single node and a 3-node cluster.

Additionally, Kong boasts it can run on even very small hardware configurations. Therefore, we separately tested Kong on an EC2 c5n.large instance with 2 CPU cores and 4GB of RAM.

There were 20 API endpoints built specifically for these tests. The API endpoints were built using open-source NGINX. They responded to every API request with a fixed 1KB string of kernel-random (urandom) characters. Local response times were approximately 2 microseconds.

Benchmark Configurations

We also conducted the benchmark test using a few different like-for-like configurations:

  • Reverse proxy/pass-through (no authentication)
  • With authentication/authorization enabled

For the first configuration, we used each platform “out-of-the-box” as a reverse proxy or pass-through without requiring authentication or authorization. Then, we implemented JSON Web Tokens (JWT) issued from a server on the same virtual private cloud (VPC) as the API endpoints. We also tested a third-party authentication server using OAuth with OpenID Connect, hosted by Google Identity Platform. In both cases, we created 25 OpenID users and randomly selected among them when sending API requests to prevent the API gateway from caching the authorized request. Finally, we tested a multiple-plugin scenario by enabling both logging and JWT authentication simultaneously.
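The per-request rotation across the 25 users can be sketched as follows. The token strings and header shape are hypothetical; the point is that successive requests present different credentials, so the gateway cannot simply serve a cached authorization decision:

```python
import random

# Hypothetical bearer tokens for the 25 pre-created OpenID users.
USER_TOKENS = [f"token-user-{i:02d}" for i in range(25)]

def next_request_headers() -> dict:
    """Build headers for one API request, picking a user at random."""
    token = random.choice(USER_TOKENS)
    return {"Authorization": f"Bearer {token}"}

print(next_request_headers())  # e.g., {'Authorization': 'Bearer token-user-17'}
```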

Results may vary across different configurations and again, you are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.

4. Test Results

Latency

This section analyzes the latencies in milliseconds from the various 60-second runs of each of the scaled GigaOm API Workload Field Tests described above. A lower latency is better—meaning API responses come back faster. We report the response time at the 50th, 90th, 95th, 99th, 99.9th, and 99.99th percentiles, along with the maximum latency. These are important values for service-level agreements (SLAs) and for determining the slowest response times a user might experience.

Figure 1. 1,000 Requests per Second

With a 1 KB response and authentication turned off, Kong latency is minimal, while Apigee X has over 10x Kong’s latency at all percentiles, including 457x at the 99.99th.

Figure 2. 2,000 Requests per Second

At 2,000 requests per second and with authentication off, there is more latency. Starting at the 95th percentile, Apigee X had thousands of times more latency than Kong did.

Figure 3. 5,000 Requests per Second

At 5,000 requests per second and with authentication off, Kong continued with very low latency. Apigee X could not perform at this level.

Figure 4. 10,000 Requests per Second

At 10,000 requests per second, with authentication off, Kong continued with very low latency. Apigee X could not perform at this level.

Figure 5. 20,000 Requests per Second

At 20,000 requests per second and authentication off, Kong continued with very low latency. Apigee X could not perform at this level.

Figure 6. 1,000 Requests per Second – OAuth On

Using OAuthV2 and 1,000 requests per second, we see that, starting at the 95th percentile, Apigee X had hundreds of times the latency of Kong.

Figure 7. 1,000 Requests per Second – JWT Auth On

Using JWT and 1,000 requests per second, we see that Apigee X, starting at the 95th percentile, still had hundreds of times the latency of Kong.

Maximum Throughput

The maximum transaction throughput achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency is shown on the Maximum Throughput charts.

Figure 8. Maximum Throughput Kong and Apigee X

We compared maximum transactions per second throughput achieved with 100% success (with no 5xx or 429 errors) and with less than 30ms maximum latency. Kong achieved 54,250 transactions per second on this metric, while Apigee X achieved 1,750.

Figure 9. Maximum Throughput with Different Kong Configurations

This shows Kong’s maximum transactions per second throughput achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency across different Kong configurations.

5. Conclusion

This report outlines the results from a GigaOm API Workload Field Test.

We experimented with different transaction loads and authentications, including none, and consistently received better transactions per second from Kong over Apigee X. For example, at 2,000 requests per second with authentication off, starting at the 95th percentile, Apigee X had thousands of times more latency than Kong did.

Kong’s maximum transactions per second throughput, achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency, was 54,250. Apigee X’s was 1,750.

For this test, using this particular workload and these particular configurations, API requests came back with the lowest latencies and highest throughput on Kong versus the Apigee solution.

Keep in mind that optimizations on all platforms are possible as the offerings evolve or as internal tests point to different configurations.

6. Appendix: Recreating the Test

The backend API used in this test was a custom NGINX configuration developed by GigaOm. The application works by binding the API application to port 1980 with NGINX and listening for GET requests, such as:

    GET http://fqdn-or-ip-address:1980

The API would respond with a string of pseudorandom Unicode characters from /dev/urandom, such as:

    taZ3psgHkQ

The following is the NGINX configuration for the backend API. You are free to use and modify at your own discretion. GigaOm makes no warranty or claim for its use beyond the scope of this test or report.

worker_processes auto;               # one worker per CPU core
worker_cpu_affinity auto;            # pin each worker to a core
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 20480;          # raise the open-file limit for high concurrency

events {
   accept_mutex off;                 # let every worker accept new connections
   worker_connections 10620;         # per-worker connection ceiling
}

http {
   access_log off;                   # avoid per-request logging overhead
   server_tokens off;
   keepalive_requests 10000;         # allow many requests per keep-alive connection
   tcp_nodelay on;

   server {
       listen 1980 reuseport;        # SO_REUSEPORT spreads accepts across workers
       location / {
           return 200 "JXvkE5pBpPN3T8bknNsqaM0kKu0j8BCV0S6TkNljlpDCYi8dIdn2TL11oHv1iFkJAjj8VDnEcBoJSy73QTuCcI8oeCna3jg34beyd7n5fZ22WSZP6gynF6PF5lMKsJTRRFr1ur5trPpTU4nvzJOsbGY6O1bAoeCNTG1VpDHZXQH67wZi35mNj6flLR3glKJwkwXzdrVgbeivVbT2fOz9zjxr0U8A4SONXYRyEr4jZzCqlYG4EuV08X4e5unvkO410ZeRrt31arys9hwR8tuCSi4a6KUsVeA5eZ74GQMv2NByz7R5BFCHIg0BbtexFsxdE9RZyj2sINlqbTQHNqwuiWDRG1CSJdOrTXYNmNz98Ib9BtAGMY7ikINWTeCaH8Qjet6wsDMyLbMjDfH3TjBTeMJDVyLItqfY6MZbblEiEV0mNVBFlG0pn3s12EP0X7DzgIfSP6vU3jVdsuEWENja6DdWG0zciTAMbe4xwRpyG0GWLsmoUoEVAOPsWPeMthsLmjKO2WBQ9vUub2XV0IyO09vZKGajMaEZnXSqhblRrKYcknK7Is2TIgI6o6C0iIKEql1jhdJAl5iFj4VytPftb9k8qbA5QE4dr2wcjWp8b0Rw9wBx9xYUDIkJO6IdrZqgR1APvAF9UyokXgTkHtYycEC1QG0GSUhAT61FjGxtkZU86rV4djttr8zwJaKH7B126rSwvCVWYM82SRxZVJ2RkyQ3xOaRM9DilXg4J90LSAlYu2TUpZpkym8Uk0qOsIWPr2e9jwLkonfdh2AqRX4QS9tCrvA2pfwLEptRNxsVLKmNb2BJpt2YQ7K5OdYmW5oLwKTYtaB2sbCKQCGXWiieLfgt70gdumDsrBM8QslALQLZhX24rfadHvQ9sUKUrW7KW3rkAhxJ1cvvU1up8NHzal67KFLtFS8bJCb22cFL6L7sHynseVS9a1YxYOSroaRDhz0WX4xdW7UyJ4GrsqE9sXd66U8iAv78IaprC3M3HnJyieqyGzewvqSkAvhcnBKj";
       }
   }
}

We wanted the backend API to be as performant and lightweight as possible so that the latency generated by the application itself was minimized.

7. Disclaimer

Performance is important, but it is only one criterion for selecting an API and microservice management solution. This test is a point-in-time check on specific performance. There are numerous other factors to consider, including administration, features and functionality, workload management, user interface, scalability, reliability, and the vendor itself. It is also our experience that performance changes over time and differs competitively across workloads. A performance leader can also run up against the point of diminishing returns, and viable contenders can quickly close the gap.

GigaOm runs all of its performance tests to strict ethical standards. The results of the report are the objective results of the application of load tests to the simulations described in the report. The report clearly defines the selected criteria and process used to establish the field test. The report also clearly states the tools and workloads used. The reader is left to determine for themselves how to qualify the information for their individual needs. The report does not make any claim regarding the third-party certification and presents the objective results received from the application of the process to the criteria as described in the report. The report strictly measures performance and does not purport to evaluate other factors that potential customers may find relevant when making a purchase decision.

This is a sponsored report. Kong chose the competitors, the test, and the Kong configuration. GigaOm chose the most compatible configurations as-is, out-of-the-box, and ran the testing workloads. Choosing compatible configurations is subject to judgment. We have attempted to describe our decisions in this report.

8. About Kong

Kong makes securing, managing and orchestrating microservice APIs easier and faster than ever. That’s why it powers trillions of API transactions. That’s why technology companies, major banks, e-commerce innovators, and government agencies put Kong in front of their most important web workloads. And that’s why developers around the globe enthusiastically contribute innovations on top of the Kong platform.

Kong focuses on encompassing technology innovation for customer success. Not only does Kong Inc. build a world-class platform for powering microservice API development, it enables customers to succeed in realizing maximum value from their microservice infrastructure with comprehensive services to deliver even higher levels of agility, security, and scale.

9. About William McKnight

William McKnight

William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.

Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.

10. About Jake Dolezal

Jake Dolezal

Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.

11. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

12. Copyright

© Knowingly, Inc. 2022 "API and Microservices Management Benchmark" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.