What is logical replication?
Logical replication is a PostgreSQL feature that replicates data objects and their changes at the logical level, rather than the physical level. It works by:

- WAL Level: The database must be configured with `wal_level = logical` to capture logical changes
- Publication: Creates a logical replication stream on the source database
- Replication Slot: Maintains position in the WAL stream and ensures data consistency
- Subscription/Consumer: External tools consume the logical replication stream

Logical replication supports:
- Replication of individual transactions and row changes
- Selective replication of specific tables or databases
- Cross-version replication between different PostgreSQL versions
- CDC integration with external tools and data pipelines
Configuration requirements
PlanetScale cluster parameters
To enable logical replication on your PlanetScale for Postgres cluster, configure these parameters in the Clusters > Parameters tab:

| Parameter | Required Value | Description |
|---|---|---|
| `wal_level` | `logical` | Setting `wal_level` to `logical` enables logical replication, which captures row-level changes in a format that can be flexibly replayed on target systems. |
| `max_replication_slots` | 2 x replicas | Set `max_replication_slots` to twice the number of replicas or subscribers. Each replica uses one slot, with the extra slots reserved for operations like failover. |
| `max_wal_senders` | 2 x replicas | Likewise, set `max_wal_senders` to twice the number of replicas or targets, and not less than `max_replication_slots`. |
| `max_slot_wal_keep_size` | > 4GB | Tune `max_slot_wal_keep_size` to keep WAL files long enough for subscribers to consume them while ensuring the source disk is not overrun by retained files. A reasonable starting point is > 4GB; from there, monitor your replication lag, your database's change rate (inserts, updates, deletes), and available disk space. |
| Parameter | Required Value | Description |
|---|---|---|
| `sync_replication_slots` | `on` | Set to `on` to enable synchronization of failover-enabled replication slots to standby replicas. |
| `hot_standby_feedback` | `on` | Set to `on` to prevent query conflicts during replication. |
Verify configuration
After setting these configuration parameters in the dashboard, you can verify them from the CLI.
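For example, to verify the WAL level, run the following from a SQL session (the expected value is shown as a comment):

```sql
SHOW wal_level;
--  wal_level
-- -----------
--  logical
```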
CDC tool configuration
Ensure your CDC tool is configured properly:

- Airbyte: Ensure replication slots are created with failover support (Setup Guide)
- AWS DMS: Manually create failover-enabled replication slots before configuring DMS (Setup Guide)
- ClickHouse: See ClickPipes documentation for PlanetScale configuration (Setup Guide)
- Debezium: Configure connector to use failover-enabled replication slots (Setup Guide)
- Fivetran: Create your own replication slot with `failover = true` (Setup Guide)
Create and manage users
For production CDC deployments, log in as the default user and create a dedicated replication user with minimal privileges. The `WITH REPLICATION` clause allows the user to connect to the server using the replication protocol, to create and use replication slots, to stream WAL files, and to perform logical decoding operations. You will configure this user to connect from your subscriber/consumer side.
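A minimal sketch, assuming a role named `cdc_user` and tables in the `public` schema (adjust the name, password, and grants for your environment):

```sql
-- Dedicated CDC role: can log in and use the replication protocol.
CREATE USER cdc_user WITH REPLICATION PASSWORD 'your-strong-password';

-- Grant only the read access the CDC pipeline needs (e.g. for initial snapshots).
GRANT USAGE ON SCHEMA public TO cdc_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO cdc_user;
```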
Because of the edge connection settings, to log in as this user, add the branch ID after the username.
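A hypothetical connection string, with the host, database, password, and branch ID as placeholders:

```
psql "postgres://cdc_user.<BRANCH_ID>:<PASSWORD>@<HOST>:5432/postgres?sslmode=require"
```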
Create and manage replication streams
Create a replication slot
Using the dedicated replication role, create logical replication slots with the `failover` option enabled to preserve the slots during any switchover or failover events.
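A sketch using `pg_create_logical_replication_slot`, which accepts a `failover` argument on PostgreSQL 17 and later; the slot name `cdc_slot` and the `pgoutput` plugin are illustrative, so use whatever your CDC tool expects:

```sql
-- Arguments: slot name, output plugin, temporary, two-phase, failover.
SELECT pg_create_logical_replication_slot('cdc_slot', 'pgoutput', false, false, true);
```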
Create initial publication
Some CDC tools require you to create publications to specify which tables to replicate. You will need to do this as the owner of the tables or the superuser. This example uses the default PlanetScale superuser.
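A sketch, assuming a hypothetical table `public.orders` and a publication named `cdc_publication`:

```sql
-- Run as the table owner or the default PlanetScale superuser.
CREATE PUBLICATION cdc_publication FOR TABLE public.orders;
```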
Add tables to a publication
Currently, tables must be added to the publication individually or as a comma-delimited list. Remember to update your publication when adding new tables that should be replicated.
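For example, adding two more hypothetical tables to the same publication:

```sql
ALTER PUBLICATION cdc_publication ADD TABLE public.customers, public.invoices;
```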
Replica identity configuration
For complete change tracking of both row values before and after changes (as well as to support any tables without a primary key), set the replica identity to `FULL`.
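For example, for the hypothetical `public.orders` table:

```sql
ALTER TABLE public.orders REPLICA IDENTITY FULL;
```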
Verify publications
Issue the following to see active publications with tables. Do this as the default user.
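A query over the built-in `pg_publication_tables` view, for example:

```sql
SELECT pubname, schemaname, tablename
FROM pg_publication_tables
ORDER BY pubname;
```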
Monitoring and troubleshooting
PlanetScale metrics for CDC monitoring
PlanetScale provides built-in metrics that are essential for monitoring your CDC setup. Access these through your Metrics dashboard to track replication health and performance:

| Metric Category | Key Indicators for CDC | What to Monitor |
|---|---|---|
| WAL archival rate | Success/Failed counts | Monitor failed WAL archival attempts that could impact CDC streams |
| WAL archive age | Seconds behind | Age of oldest unarchived WAL - should be under 60 seconds for healthy CDC |
| WAL storage | Storage usage in MB | Track WAL disk usage; high usage may indicate CDC consumers falling behind |
| Replication lag | Lag in seconds | Monitor delay between primary and replicas; high lag may indicate CDC consumer performance issues |
| Transaction rate | Transactions per second | Track database workload intensity affecting CDC processing |
| Memory | RSS and Memory mapped | Monitor memory pressure that could impact logical decoding performance |
| Primary Storage Usage | MB disk utilization | Monitor disk utilization to be sure WAL files are being consumed quickly enough |
For detailed information about interpreting these metrics, see the Cluster
Metrics documentation.
Monitoring replication lag
Check replication slot lag with a query like the one sketched below. The `replication_lag` column shows how much WAL data the publisher is keeping because the subscriber has not confirmed or processed it yet. This value should be kept well below `max_wal_size`.
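A sketch of such a query against `pg_replication_slots` (the exact query in your environment may differ):

```sql
-- WAL generated on the publisher but not yet confirmed as flushed by the subscriber.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS replication_lag
FROM pg_replication_slots
WHERE slot_type = 'logical';
```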
WAL retention and disk usage
Monitor WAL retention to prevent disk space issues. This is another way to see similar information, and it will include any PlanetScale HA replicas.
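A sketch that reports retained WAL for every replication slot, physical and logical:

```sql
-- Retained WAL per slot; wal_status flags slots whose WAL is at risk of removal.
SELECT slot_name,
       slot_type,
       active,
       wal_status,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```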
Common issues

Issue: WAL disk space growing rapidly
Cause: Inactive or slow CDC consumers
Solution: Remove unused slots (see the sketch after this list) or troubleshoot slow consumers

Issue: Failover breaks CDC stream
Cause: Replication slot not properly synchronized
Solution: Verify failover configuration and slot synchronization status
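To remove an unused slot so its retained WAL can be released, assuming a hypothetical slot named `old_cdc_slot`:

```sql
-- Only drop slots that no consumer still needs; the retained WAL is then freed.
SELECT pg_drop_replication_slot('old_cdc_slot');
```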
Best practices
- Always enable failover: Never deploy CDC to production without `failover = true` on replication slots and proper PlanetScale cluster configuration
- Verify configuration: Double-check that both your CDC tool and PlanetScale settings are properly configured before going live
- Test failover scenarios: Test actual failover events in staging environments to ensure your configuration works
- Regular monitoring: Monitor replication lag, WAL retention, and slot synchronization status
- Slot cleanup: Remove unused logical replication slots to prevent WAL accumulation
- CDC client resilience: Ensure CDC clients can handle connection interruptions gracefully
Security considerations
- Logical replication exposes table data - ensure proper access controls
- Use dedicated database users with minimal required privileges for CDC
- Consider network security when streaming to external systems
- Monitor for unauthorized replication slots

