Table of Contents
- Cloud IaaS SQL Server Offerings
- Field Test Setup
- Field Test Results
- Price Per Performance
- About Microsoft
- About William McKnight
- About Jake Dolezal
The fundamental underpinning of an organization is its transactions. It must execute them well, with integrity and performance. Not only has transaction volume soared of late, but the granularity of transaction details has also reached new heights. Fast transactions greatly improve the efficiency of a high-volume business, so performance is incredibly important.
There are a variety of databases available to the transactional application. Ideally, any database would have the required capabilities; however, depending on the application's scale and the chosen cloud, some database solutions can be prone to delays. Recent trends in information management see organizations shifting their focus to cloud-based solutions. In the past, the clear choice for most organizations was on-premises data on on-premises hardware. However, the costs of scale are chipping away at the notion that this remains the best approach for some, if not all, of a company's transactional needs. The factors driving operational and analytical data projects to the cloud are many, and advantages like data protection, high availability, and scale are realized with an infrastructure-as-a-service (IaaS) deployment. In many cases, a hybrid approach serves as an interim step for organizations migrating to a modern, capable cloud architecture.
This report outlines the results from a GigaOm Transactional Field Test, derived from the industry-standard TPC Benchmark™ E (TPC-E), to compare two IaaS cloud database offerings:
- Microsoft SQL Server on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instances
- Microsoft SQL Server on Microsoft Azure Virtual Machines (VM)
Both are installations of Microsoft SQL Server, and both were tested on the Red Hat Enterprise Linux operating system.
The results of the GigaOm Transactional Field Test are valuable to all operational functions of an organization such as human resource management, production planning, material management, financial supply chain management, sales and distribution, financial accounting and controlling, plant maintenance, and quality management. The underlying data for many of these departments today are in SQL Server, which is also frequently the source for operational interactive business intelligence (BI).
With Azure's local cache feature, Microsoft SQL Server on Microsoft Azure Virtual Machines (VM) demonstrated 3x better performance than SQL Server on AWS when tested on Red Hat Enterprise Linux (RHEL) 8.2. SQL Server on Azure VMs also had up to 68% better price-performance when comparing both on-demand and pay-as-you-go rates.
Testing hardware and software across cloud vendors is very challenging. Configurations favor one cloud vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the benchmarking workload itself. Our testing demonstrates a narrow slice of potential configurations and workloads.
As the sponsor of the report, Microsoft selected the particular Azure configuration it desired to test. GigaOm selected the AWS instance configuration closest in terms of CPU, memory, and disk configuration. There were tradeoffs that resulted in an input/output operations per second (IOPS) disadvantage to AWS, which we discuss in the report.
We leave the issue of fairness for the reader to determine. We strongly encourage you, as the reader, to look past marketing messages and discern for yourself what is of value. We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection.
In the same spirit as the TPC, price-performance is intended to be a normalizer of performance results across different configurations. Of course, this has its shortcomings, but at least one can determine that "what you pay for and configure is what you get."
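As an illustration of this normalization, the sketch below divides an hourly configuration cost by sustained throughput to get a cost per transaction-per-second. The dollar rates and throughput figures are entirely hypothetical placeholders, not measured results from this field test.

```python
# Hypothetical illustration of price-per-performance normalization.
# All rates and throughput figures are placeholders, not measured results.

def price_per_performance(hourly_cost_usd: float, tps: float) -> float:
    """Cost of one transaction per second of sustained throughput,
    expressed in USD per tps per hour of operation."""
    return hourly_cost_usd / tps

# Two hypothetical configurations: a pricier but faster one (A)
# versus a cheaper but slower one (B).
config_a = price_per_performance(hourly_cost_usd=40.0, tps=4000.0)
config_b = price_per_performance(hourly_cost_usd=35.0, tps=2000.0)

# Lower is better: despite the higher hourly rate, A delivers more
# throughput per dollar.
print(f"A: ${config_a:.4f}/tps-hr  B: ${config_b:.4f}/tps-hr")
```

This is why a nominally cheaper instance can still lose on price-performance: the metric rewards throughput per dollar, not the raw bill.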
The parameters to replicate this test are provided. You are encouraged to compile your own representative queries, data sets, and data sizes and test compatible configurations applicable to your requirements.
We used the BenchCraft tool, which was audited by a TPC-approved auditor who reviewed all updates to BenchCraft. All the information required to reproduce the results is documented in the TPC-E specification. BenchCraft implements the requirements documented in Clauses 3, 4, 5, and 6 of the benchmark specification. Nothing in BenchCraft alters the performance of TPC-E or of this TPC-E derived workload.
The scale factor in TPC-E is defined as the number of required customer rows per single transaction per second. We did, however, change the number of Initial Trading Days (ITD). The default value is 300, which is the number of 8-hour business days of trading activity used to populate the initial database. For these tests, we used an ITD of 30 days rather than 300, which reduces the size of the initial population of the larger tables. As far as the transaction profiles are concerned, the workload behaves identically with an ITD of 300 or 30. However, because the ITD was reduced to 30, any results obtained are not compliant with the TPC-E specification and, therefore, not comparable to published results. This is the basis for the standard disclaimer that this is a workload derived from TPC-E.
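To make the effect of the ITD change concrete, the sketch below estimates how the initial TRADE population scales with customer count and ITD. It assumes the TPC-E scale factor of 500 customers per tpsE and 8-hour trading days; the customer count is a hypothetical example, and the TPC-E specification remains the authority on exact sizing.

```python
# Rough sketch of how reducing ITD shrinks the initial TRADE population.
# Assumes TPC-E's scale factor of 500 customers per tpsE and 8-hour
# trading days; consult the TPC-E specification for exact sizing rules.

CUSTOMERS_PER_TPS = 500              # TPC-E scale factor (customers per tpsE)
SECONDS_PER_TRADING_DAY = 8 * 3600   # one 8-hour business day

def initial_trades(customers: int, itd_days: int) -> int:
    """Estimated initial TRADE rows for a customer count and ITD."""
    nominal_tps = customers / CUSTOMERS_PER_TPS
    return int(nominal_tps * itd_days * SECONDS_PER_TRADING_DAY)

# Hypothetical 100,000-customer database:
full = initial_trades(customers=100_000, itd_days=300)   # default ITD
reduced = initial_trades(customers=100_000, itd_days=30) # this test's ITD
print(full, reduced, full // reduced)  # the reduced load is 10x smaller
```

Because the population grows linearly with ITD, cutting it from 300 to 30 shrinks the initial trade history tenfold without changing the per-transaction behavior of the workload.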
However, BenchCraft is just one way to run TPC-E. All the information necessary to recreate the benchmark is available at TPC.org (this test used the latest version, 1.14.0); just change the ITD as described above.
We have provided enough information in the report for anyone to reproduce this test. Again, you are encouraged to compile your own representative queries, data sets, data sizes, and test compatible configurations applicable to your requirements.