Migrate from Amazon RDS

This guide covers migrating from Amazon RDS for PostgreSQL (or Aurora PostgreSQL) to DB9. The process uses standard pg_dump for export and the DB9 CLI or psql for import.

For the general PostgreSQL migration guide, see Migrate from PostgreSQL.

  • SQL compatibility — DB9 supports the same DML, DDL, joins, CTEs, window functions, and subqueries you use in RDS PostgreSQL. Most queries work without changes.
  • PostgreSQL drivers — Any driver that connects via pgwire (node-postgres, psycopg, pgx, JDBC) works with DB9.
  • ORM compatibility — Prisma, Drizzle, SQLAlchemy, TypeORM, Sequelize, Knex, and GORM are tested and supported.
  • Data types — Common types (TEXT, INTEGER, BIGINT, BOOLEAN, TIMESTAMPTZ, UUID, JSONB, arrays, vectors) work identically.
| Area | Amazon RDS PostgreSQL | DB9 |
| --- | --- | --- |
| Connection string | postgresql://master:pass@instance.region.rds.amazonaws.com:5432/dbname | postgresql://tenant.role@pg.db9.io:5433/postgres |
| Port | 5432 (default) | 5433 |
| Database name | Custom | Always postgres |
| Connection pooling | External (PgBouncer on EC2, RDS Proxy) | No built-in pooler — use application-side pooling |
| Extensions | 80+ supported | 9 built-in (http, vector, fs9, pg_cron, embedding, hstore, uuid-ossp, parquet, zhparser) |
| IAM authentication | Supported | Not supported — use connection string credentials |
| Read replicas | Supported | Not supported |
| Replication | Logical and streaming replication | Not supported |
| Table partitioning | Supported | Not supported |
| LISTEN/NOTIFY | Supported | Not supported |
| Automated backups | Point-in-time recovery, snapshots | CLI-based backup (db9 db dump) |
| Multi-AZ | Automatic failover | Managed by DB9 |

Review the Compatibility Matrix for the full list of supported and unsupported features.

Prerequisites

  • Access to your RDS instance (master user credentials or IAM auth)
  • Network access to the RDS instance (your machine must be able to reach it — check security groups and VPC settings)
  • pg_dump installed locally (should match or be close to your RDS PostgreSQL version)
  • DB9 CLI installed: curl -fsSL https://db9.ai/install | sh
  • A DB9 account: db9 create --name my-app to create your target database
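
Before you start, it can help to verify that all four tools are actually on your PATH. A quick sketch (jq is assumed only because later one-liners in this guide pipe db9 output through it):

```shell
# Print any required tool that is missing from PATH; prints nothing when all are present.
for tool in pg_dump psql db9 jq; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```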
  1. Ensure Network Access to RDS

    RDS instances are typically inside a VPC. To run pg_dump from your local machine, you need one of:

    • Public accessibility enabled on the RDS instance, with your IP allowed in the security group
    • SSH tunnel through a bastion host in the same VPC
    • AWS SSM Session Manager port forwarding
    Terminal
    # SSH tunnel example
    ssh -L 5432:my-instance.region.rds.amazonaws.com:5432 ec2-user@bastion-host
    # Then use localhost:5432 in your pg_dump command

    If using IAM authentication, generate a temporary password:

    Terminal
    export PGPASSWORD=$(aws rds generate-db-auth-token \
    --hostname my-instance.region.rds.amazonaws.com \
    --port 5432 \
    --username master \
    --region us-east-1)
  2. Export from RDS

    Use pg_dump with your RDS connection details.

    Schema and data (plain SQL format)

    Terminal
    pg_dump --no-owner --no-privileges --no-comments \
    "postgresql://master:password@my-instance.region.rds.amazonaws.com:5432/mydb?sslmode=require" \
    > export.sql

    Schema only

    Terminal
    pg_dump --schema-only --no-owner --no-privileges \
    "postgresql://master:password@my-instance.region.rds.amazonaws.com:5432/mydb?sslmode=require" \
    > schema.sql

    Flags explained:

    • --no-owner — omits ALTER ... OWNER TO statements that reference RDS-specific roles (rds_superuser, rds_replication, etc.)
    • --no-privileges — omits GRANT/REVOKE statements for RDS role hierarchy
    • --no-comments — omits COMMENT ON statements

    Use plain SQL format (default). DB9 does not support pg_restore with the custom (-Fc) or directory (-Fd) formats — import via SQL text only.

    Exclude RDS-internal schemas

    If your dump includes RDS-specific schemas, exclude them:

    Terminal
    pg_dump --no-owner --no-privileges --no-comments \
    -N rdsadmin -N rds_tools \
    "postgresql://master:password@my-instance.region.rds.amazonaws.com:5432/mydb?sslmode=require" \
    > export.sql
  3. Clean the Export

    RDS exports may contain features DB9 does not support. Remove or comment out:

    • CREATE EXTENSION for extensions DB9 does not have — RDS supports 80+ extensions; DB9 supports 9 built-in. Remove any CREATE EXTENSION for extensions not in: http, uuid-ossp, hstore, fs9, pg_cron, parquet, zhparser, vector, embedding.
    • CREATE PUBLICATION / CREATE SUBSCRIPTION — DB9 does not support logical replication.
    • Row-level security policies — CREATE POLICY, ALTER TABLE ... ENABLE ROW LEVEL SECURITY.
    • Table partitioning — PARTITION BY, CREATE TABLE ... PARTITION OF.
    • Foreign data wrappers — CREATE SERVER, CREATE FOREIGN TABLE (common with postgres_fdw or mysql_fdw on RDS).

    A quick way to identify issues:

    Terminal
    # Check for unsupported extensions
    grep "CREATE EXTENSION" export.sql
    # Check for partitioning
    grep -i "PARTITION" export.sql
    # Check for RLS
    grep -i "ROW LEVEL SECURITY\|CREATE POLICY" export.sql
    # Check for replication
    grep -i "PUBLICATION\|SUBSCRIPTION" export.sql
    # Check for FDW
    grep -i "CREATE SERVER\|FOREIGN TABLE" export.sql
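
    Those greps can be extended into a cleanup pass. The sketch below comments out offending statements with sed rather than deleting them, so you can review what was dropped. It is demonstrated on a small inline sample (a stand-in for your real export.sql) and assumes each statement header sits on one line, as pg_dump normally emits them.

```shell
# Stand-in for export.sql so the rewrite is visible end to end.
cat > sample.sql <<'EOF'
CREATE TABLE users (id bigint PRIMARY KEY);
CREATE PUBLICATION my_pub FOR ALL TABLES;
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON users USING (true);
EOF

# Prefix unsupported statements with "-- "; the .bak copy keeps the original for diffing.
sed -E -i.bak \
  -e 's/^(CREATE (PUBLICATION|SUBSCRIPTION|POLICY|SERVER) )/-- \1/' \
  -e 's/^(CREATE FOREIGN TABLE )/-- \1/' \
  -e 's/^(ALTER TABLE .*ENABLE ROW LEVEL SECURITY;)/-- \1/' \
  sample.sql

cat sample.sql
```

    Run the same sed invocation against export.sql. Multi-line definitions (and PARTITION BY clauses inside CREATE TABLE) still need manual editing.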
  4. Create the DB9 Database

    Terminal
    db9 create --name my-app --show-connection-string

    This returns immediately with the connection string and credentials.

  5. Import into DB9

    Option A: CLI import (recommended for most databases)

    Terminal
    db9 db sql my-app -f export.sql

    Suitable for databases up to the API import limits (50,000 rows or 16 MB per table).

    Option B: Direct psql import (for larger databases)

    Terminal
    psql "$(db9 db status my-app --json | jq -r .connection_string)" -f export.sql

    Streams SQL through pgwire without API size limits.

    Option C: COPY for bulk data

    Terminal
    # Import schema first
    psql "$(db9 db status my-app --json | jq -r .connection_string)" -f schema.sql
    # Then stream data directly from RDS into DB9
    pg_dump --data-only --no-owner \
    "postgresql://master:password@my-instance.region.rds.amazonaws.com:5432/mydb?sslmode=require" \
    | psql "$(db9 db status my-app --json | jq -r .connection_string)"

    DB9 supports COPY in CSV and TEXT formats over pgwire.

  6. Update Your Application

    Connection string

    Replace the RDS connection string with DB9’s:

    Diff
    - DATABASE_URL=postgresql://master:password@my-instance.region.rds.amazonaws.com:5432/mydb?sslmode=require
    + DATABASE_URL=postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require

    Key differences:

    • Username: DB9 uses {tenant_id}.{role} format (e.g., a1b2c3d4e5f6.admin)
    • Port: 5433, not 5432
    • Database: Always postgres
    • Host: pg.db9.io (not region-specific endpoints)

    IAM authentication

    If you use RDS IAM authentication to generate temporary passwords, remove that logic. DB9 uses static credentials in the connection string.

    RDS Proxy

    If you use RDS Proxy for connection pooling, remove it and configure pooling at the application level:

    TypeScript
    const pool = new pg.Pool({
    connectionString: process.env.DATABASE_URL,
    max: 10,
    idleTimeoutMillis: 30000,
    });

    AWS SDK integrations

    If your application uses AWS SDK to interact with RDS (automated snapshots, instance management), those APIs will not apply to DB9. Use the DB9 CLI for database management instead.

    For ORMs, see the integration guides: Prisma, Drizzle, SQLAlchemy.

  7. Validate

    Check schema

    Terminal
    db9 db dump my-app --ddl-only

    Compare the output with your original schema to confirm all tables, indexes, and constraints were created.

    Check row counts

    Terminal
    db9 db sql my-app -q "SELECT count(*) FROM users"
    db9 db sql my-app -q "SELECT count(*) FROM orders"

    Compare row counts against the source RDS database.
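
    For schemas with many tables, capture per-table counts from both databases in the same format and diff the files. A sketch with inline stand-in data (file names and numbers are illustrative; generate the real files with a query against each side):

```shell
# In practice, produce each file with something like:
#   psql "$URL" -At -c "SELECT relname, n_live_tup FROM pg_stat_user_tables ORDER BY 1"
# n_live_tup is an estimate; run SELECT count(*) per table when you need exact numbers.
printf 'orders|9800\nusers|1500\n' > rds_counts.txt
printf 'orders|9750\nusers|1500\n' > db9_counts.txt

# diff prints any mismatched lines before the verdict.
if diff rds_counts.txt db9_counts.txt; then
  echo "row counts match"
else
  echo "row counts differ - investigate before cutover"
fi
```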

    Run your test suite

    Terminal
    DATABASE_URL="$(db9 db status my-app --json | jq -r .connection_string)" npm test

    Check for unsupported features

    If your tests fail, check these common differences:

    • SERIALIZABLE isolation — DB9 does not support SERIALIZABLE and returns an error. Use REPEATABLE READ or READ COMMITTED instead
    • LISTEN/NOTIFY — not supported; use polling or an external message queue
    • Advisory locks — available, but coordination is node-local. For strict row-level coordination, use SELECT ... FOR UPDATE
    • GIN/GiST index performance — these index types are accepted, but queries that would use them fall back to table scans

Rollback

If you need to revert:

  1. Your RDS instance is unchanged — switch DATABASE_URL back to the RDS connection string.
  2. If you need to export data created in DB9 back to RDS:
Terminal
# Export from DB9
db9 db dump my-app -o db9-export.sql
# Import to RDS
psql "postgresql://master:password@my-instance.region.rds.amazonaws.com:5432/mydb?sslmode=require" \
-f db9-export.sql

The db9 db dump command outputs plain SQL (up to 50,000 rows or 16 MB per table). For larger databases, use psql to stream individual tables with COPY.

Limitations

  • No zero-downtime migration — DB9 does not support logical replication, so you cannot use RDS logical replication to stream changes. Plan a maintenance window or accept a brief cutover period.
  • Extension gaps — RDS supports 80+ extensions; DB9 has 9 built-in. Check your CREATE EXTENSION statements carefully. Common RDS extensions not in DB9 include PostGIS, pg_trgm, pgcrypto, pg_stat_statements, ltree.
  • Dump size limits — The db9 db sql -f API import has limits (50,000 rows, 16 MB per table). For larger databases, use direct psql connection for import.
  • No read replicas — RDS supports read replicas for scaling reads. DB9 does not have read replicas. If your application relies on read/write splitting, consolidate to a single connection string.
  • No automated backups — RDS provides automated point-in-time recovery and snapshots. DB9 uses CLI-based backup (db9 db dump). Set up your own backup schedule.
  • AWS ecosystem loss — CloudWatch metrics, Performance Insights, and other AWS-integrated monitoring will not apply. Use DB9’s built-in monitoring.
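
The backups caveat can be covered with a scheduled dump. A hypothetical crontab fragment (the paths and the 14-day retention are assumptions, not DB9 recommendations):

```shell
# Nightly dump at 02:00; % must be escaped as \% inside crontab entries.
0 2 * * * db9 db dump my-app -o "/backups/my-app-$(date +\%F).sql"
# Prune dumps older than 14 days.
0 3 * * * find /backups -name 'my-app-*.sql' -mtime +14 -delete
```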