PlanetScale vs Google AlloyDB benchmarks
This page includes benchmarks that compare the performance of Postgres on PlanetScale with Postgres on AlloyDB, along with all of the resources needed to reproduce these results. We also recommend reading our Benchmarking Postgres blog post, which covers the methodology used in these benchmarks and the steps taken to maintain objectivity. We invite other vendors to provide feedback.
Benchmark configuration
All benchmarks described here were run with the following configuration:
| Provider & Instance | Region | vCPUs | RAM | Storage | IOPS |
|---|---|---|---|---|---|
| PlanetScale M-320 | AWS us-east-1 | 4 | 32GB | 929GB | unlimited |
| AlloyDB N2 | GCP us-central1 | 4 | 32GB | auto-scaling | N/A |
- Configuration: All Postgres configuration options were left at each platform's defaults, except for connection limits and timeouts, which may be modified to facilitate benchmarking.
- Benchmark machine: PlanetScale benchmarks run from a c6a.xlarge in AWS us-east-1. AlloyDB benchmarks run from an e2-standard-4 in GCP us-central1.
TPCC Benchmarks
TPCC is a widely-used benchmark to measure general-purpose OLTP workload performance. This includes selects, inserts, updates, and deletes.
Benchmark data: A TPCC data set generated with TABLES=20 and SCALE=250 using the Percona sysbench-tpcc scripts. This produces a ~500 gigabyte Postgres database. You can replicate the data following these instructions.
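For reference, a data-load sketch using the Percona sysbench-tpcc scripts is shown below. The connection parameters and the thread count for the load phase are placeholders, not values taken from the benchmark itself.

```bash
# Sketch: building the ~500GB TPCC data set with Percona's sysbench-tpcc scripts.
# Host, credentials, and database name are placeholders.
git clone https://github.com/Percona-Lab/sysbench-tpcc.git
cd sysbench-tpcc

./tpcc.lua \
  --db-driver=pgsql \
  --pgsql-host=YOUR_DATABASE_HOST \
  --pgsql-port=5432 \
  --pgsql-user=YOUR_USER \
  --pgsql-password=YOUR_PASSWORD \
  --pgsql-db=benchmark \
  --tables=20 \
  --scale=250 \
  --threads=16 \
  prepare
```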
Benchmark execution: Using the Percona tpcc scripts, we run a load with 100 simultaneous connections against each database for 5 minutes (300 seconds).
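A run sketch along the same lines, using the 100-connection, 300-second parameters described above (connection details remain placeholders):

```bash
# Sketch: running the TPCC load at 100 connections for 5 minutes.
./tpcc.lua \
  --db-driver=pgsql \
  --pgsql-host=YOUR_DATABASE_HOST \
  --pgsql-user=YOUR_USER \
  --pgsql-password=YOUR_PASSWORD \
  --pgsql-db=benchmark \
  --tables=20 \
  --scale=250 \
  --threads=100 \
  --time=300 \
  --report-interval=10 \
  run
```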
Queries per second
Our first benchmark measures queries per second (QPS) at 32 connections and 64 connections, revealing that PlanetScale performs much better:
The PlanetScale database averaged ~18,000 QPS, while AlloyDB averaged ~9,000 QPS.
p99 latency
We also measured the p99 latency for the duration of the benchmark run (lower is better):
Despite both being in the same region as the benchmarking instance, PlanetScale shows much lower latency due to locally-attached NVMe drives with unlimited IOPS, 8th-generation AArch64 CPUs, and high-performance query path infrastructure.
OLTP benchmarks
In addition to TPCC, we run the OLTP Read-only sysbench benchmark. OLTP workloads tend to be 80%+ reads, and this benchmark allows us to isolate performance for such queries.
Benchmark data: A simple OLTP data set generated with TABLES=10 and SCALE=130000000 using standard sysbench. This produces a ~300 gigabyte Postgres database. You can find instructions for replicating this data here.
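A minimal data-load sketch with standard sysbench is shown below. It assumes the SCALE value above corresponds to sysbench's --table-size option; connection details and the prepare thread count are placeholders.

```bash
# Sketch: building the ~300GB OLTP data set with standard sysbench.
# Assumes SCALE maps to --table-size; connection details are placeholders.
sysbench oltp_read_only \
  --db-driver=pgsql \
  --pgsql-host=YOUR_DATABASE_HOST \
  --pgsql-user=YOUR_USER \
  --pgsql-password=YOUR_PASSWORD \
  --pgsql-db=benchmark \
  --tables=10 \
  --table-size=130000000 \
  --threads=16 \
  prepare
```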
Benchmark execution: Using the standard sysbench tool with the oltp_read_only and oltp_point_select benchmarks. You can find instructions for replicating this benchmark here.
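A run sketch for the read-only workload is shown below; the thread count and duration are assumptions rather than the exact values used here. The same command works for the point-select benchmark by substituting oltp_point_select for oltp_read_only.

```bash
# Sketch: running the read-only OLTP benchmark.
# Thread count and duration are assumptions; connection details are placeholders.
sysbench oltp_read_only \
  --db-driver=pgsql \
  --pgsql-host=YOUR_DATABASE_HOST \
  --pgsql-user=YOUR_USER \
  --pgsql-password=YOUR_PASSWORD \
  --pgsql-db=benchmark \
  --tables=10 \
  --table-size=130000000 \
  --threads=64 \
  --time=300 \
  --report-interval=10 \
  run
```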
Queries per second
This benchmark contains only SELECT queries, including ones with range scans and aggregations.
The PlanetScale database averaged ~35,000 QPS, while AlloyDB averaged ~30,000 QPS. PlanetScale not only delivers higher QPS, but also provides much more consistent performance over time, leading to better predictability.
p99 latency
While running this benchmark, we measured the p99 latency of queries (lower is better):
The p99 was similar for both, with PlanetScale offering significantly better consistency, which is desirable for predictable performance.
Query-path latency
We measured pure query-path latency by running SELECT 1; 200 times in a row on a single connection. This isolates the fixed overhead that applies to every database query.
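The exact harness for this measurement isn't described here; one way to approximate it is with pgbench, running a single client over one connection, as sketched below (connection details are placeholders).

```bash
# Sketch: measuring per-query overhead by running SELECT 1 two hundred
# times over a single connection and reporting per-statement latency.
echo "SELECT 1;" > select1.sql

pgbench -n -f select1.sql -c 1 -t 200 -r \
  -h YOUR_DATABASE_HOST -U YOUR_USER benchmark
```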
Results compare PlanetScale + PSBouncer, standard PlanetScale connection, direct-to-Postgres on PlanetScale, and a direct connection to AlloyDB. Lower is better.
Direct connections to both Postgres and AlloyDB are very fast. Adding PSBouncer and other proprietary PlanetScale networking technology adds latency, but brings benefits such as query routing, buffering, and other features. (Note: these were all same-AZ tests.)
Cost
A PlanetScale M-320 with 929GB of storage costs $1,399/mo. This includes three nodes with 4 vCPUs and 32GB RAM each: one primary and two replicas. Replicas can be used to handle additional read queries and to provide high availability. The benchmark results shown here only utilized the primary.
AlloyDB's cost structure is broken down into hourly costs per vCPU, per GB of RAM, and per GB of storage.
- Single-node CPU cost: 4 vCPUs * $0.06608 * 730 hours = $192.95/mo
- Single-node RAM cost: 32 GB * $0.0112 * 730 hours = $261.63/mo
- Shared storage cost: 929 GB * $0.0004109 * 730 hours = $278.66/mo
To match the capabilities and availability of the three-node PlanetScale M-320, we must add two replicas. Therefore, the total is $192.95 * 3 + $261.63 * 3 + $278.66 = $1,642.41/mo.
PlanetScale offers better performance at a lower cost for applications that require high availability and resiliency.