Table of Contents
- API Management in the Cloud
- GigaOm API Workload Test Setup
- Test Results
- About API7
- About William McKnight
- About Jake Dolezal
This report focuses on API management platforms deployed in the cloud. The cloud enables enterprises to differentiate and innovate with microservices at a rapid pace, allows API endpoints to be cloned and scaled in a matter of minutes, and offers elastic scalability that on-premises deployments cannot match, enabling faster server deployment and application development at lower compute cost.
More importantly, many organizations depend on their APIs and microservices for high performance and availability. For the purposes of this paper, we define “high performance” as that required by companies whose workloads exceed 1,000 transactions per second and that need a maximum latency of less than 30 milliseconds across their API landscape. For these organizations, the need for performance is as important as the need for management, because they rely on these API transaction rates to keep pace with their business.
An API management solution cannot be a performance bottleneck. On the contrary, many of these companies are looking for a solution that load balances across redundant API endpoints and sustains high transaction volumes. A business handling 1,000 transactions per second generates roughly 2.6 billion API calls in a 30-day month, and large companies with high-end API traffic levels commonly see monthly API calls exceed 10 billion. Thus, performance can be a critical factor when choosing an API management solution.
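The arithmetic behind that scale is straightforward:

```python
# Sanity check on the report's scale: a sustained 1,000 requests/second
# over a 30-day month.
tps = 1_000
seconds_per_month = 60 * 60 * 24 * 30   # 2,592,000 seconds in 30 days
monthly_calls = tps * seconds_per_month
print(monthly_calls)  # 2592000000 -> roughly 2.6 billion calls per month
```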
In this paper, we reveal the results of performance testing we completed with two full-lifecycle API management platforms: API7 and Kong Enterprise (Kong EE).
API7 outperformed Kong EE at every attack rate in our single-node setup. At 10,000 requests per second, API7's 99.99th-percentile latency was almost 14 times lower than Kong EE's. The latencies of the two platforms tended to diverge at higher percentiles; the difference is pronounced at the 99.9th and 99.99th percentiles and at the maximum latency in all our runs.
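To make the percentile figures concrete, here is an illustrative sketch of how tail latencies such as p99.9 and p99.99 are read from a run's latency sample, using the nearest-rank method on synthetic data. The latency values below are made up for illustration; they are not this report's measurements, and the report's tooling may use a different percentile method.

```python
import math
import random

random.seed(42)
# Simulate 100,000 request latencies in milliseconds (synthetic data only).
latencies = sorted(random.gauss(5.0, 1.5) for _ in range(100_000))

def percentile(sorted_samples, p):
    """Nearest-rank percentile: the value at rank ceil(p/100 * n) in a sorted sample."""
    k = max(0, math.ceil(p / 100 * len(sorted_samples)) - 1)
    return sorted_samples[k]

for p in (50, 99, 99.9, 99.99):
    print(f"p{p}: {percentile(latencies, p):.2f} ms")
```

The spread between p50 and p99.99 here is why tail percentiles, not averages, are the figures that matter when comparing gateways under load.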
Testing hardware and software in the cloud is very challenging. Configurations may favor one vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the workload itself. Even more challenging is testing fully managed as-a-service offerings for which the underlying configurations (processing power, memory, networking, and so forth) are unknown to us. Our testing demonstrates a narrow slice of potential configurations and workloads.
As the sponsor of the report, API7 opted for its default API gateway configuration; the solution was not tuned or altered for performance. GigaOm selected the Kong Enterprise configuration that was closest in terms of CPU and memory.
We leave the issue of fairness for the reader to determine. We strongly encourage you to look past marketing messages and discern for yourself what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection.
We have provided enough information in the report for anyone to reproduce this test. You are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.
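An "attack rate" in a test like this is simply a constant-rate request schedule. As a minimal sketch of how you might pace your own representative workload (the function name and parameters here are illustrative, not the tooling used in this report), the core of an open-loop load generator is computing evenly spaced send offsets for the target rate:

```python
def schedule(rate_per_s: int, duration_s: float) -> list[float]:
    """Return send-time offsets (seconds) for a constant request rate.

    A driver would sleep until each offset and fire one request there,
    recording per-request latency for later percentile analysis.
    """
    interval = 1.0 / rate_per_s
    n = int(rate_per_s * duration_s)
    return [i * interval for i in range(n)]

# Example: a 1,000 rps run for 2 seconds yields 2,000 evenly spaced sends.
offsets = schedule(1_000, 2.0)
print(len(offsets), offsets[:3])
```

Keeping the schedule open-loop (sends paced by the clock, not by responses) matters: a closed-loop driver slows down when the target slows down, hiding exactly the tail latencies this kind of test is meant to expose.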