# Migrate from Neon
This guide walks through migrating a PostgreSQL database from Neon to DB9. The process uses standard PostgreSQL tooling (pg_dump for export) and the DB9 CLI for import.
## What Changes and What Stays the Same

### Stays the same

- SQL compatibility — DB9 supports the same DML, DDL, joins, CTEs, window functions, and subqueries you use in Neon. Most queries work without changes.
- PostgreSQL drivers — Any driver that connects via pgwire (node-postgres, psycopg, pgx, JDBC) works with DB9.
- ORM compatibility — Prisma, Drizzle, SQLAlchemy, TypeORM, Sequelize, Knex, and GORM are tested and supported.
- Data types — Common types (TEXT, INTEGER, BIGINT, BOOLEAN, TIMESTAMPTZ, UUID, JSONB, arrays, vectors) work identically.
### Changes

| Area | Neon | DB9 |
|---|---|---|
| Connection string | `postgresql://user:pass@ep-*.neon.tech/dbname` | `postgresql://tenant.role@pg.db9.io:5433/postgres` |
| Connection pooling | Built-in PgBouncer (transaction mode) | No built-in pooler — use application-side pooling |
| Branching | Copy-on-write, instant for any size | Full data copy, async (seconds to minutes) |
| Compute | Autoscaling, scale-to-zero | Fixed per-database, always on |
| Serverless driver | @neondatabase/serverless (HTTP/WebSocket) | Standard pgwire + browser HTTP scoped support (phase-1) |
| Extensions | 40+ community extensions | 9 built-in (http, vector, fs9, pg_cron, embedding, hstore, uuid-ossp, parquet, zhparser) |
| Replication | Logical replication supported | Not supported |
| Row-level security | Supported | Browser HTTP scoped support (phase-1) |
| Table partitioning | Supported | Not supported |
| LISTEN/NOTIFY | Supported | Not supported |
| Port | 5432 | 5433 |
| Database name | Custom (e.g., `neondb`) | Always `postgres` |
Review the Compatibility Matrix for the full list of supported and unsupported features.
## Prerequisites

- Access to your Neon database (direct/unpooled connection string)
- `pg_dump` installed locally (comes with PostgreSQL client tools)
- DB9 CLI installed: `curl -fsSL https://db9.ai/install | sh`
- A DB9 account: run `db9 create --name my-app` to create your target database
## Export from Neon
Use `pg_dump` with Neon’s direct (unpooled) connection string. Do not use the pooled connection — `pg_dump` requires a direct connection.

### Schema and data (plain SQL format)

```sh
pg_dump --no-owner --no-privileges --no-comments \
  "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  > export.sql
```

### Schema only

```sh
pg_dump --schema-only --no-owner --no-privileges \
  "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  > schema.sql
```

Flags explained:
- `--no-owner` — omits `ALTER ... OWNER TO` statements that reference Neon-specific roles
- `--no-privileges` — omits `GRANT`/`REVOKE` statements
- `--no-comments` — omits `COMMENT ON` statements that may reference Neon internals
Use plain SQL format (the default). DB9 does not support `pg_restore` with the custom (`-Fc`) or directory (`-Fd`) formats — import via SQL text only.
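If you already have a dump file and are unsure which format it is in, the custom-format magic bytes are easy to check for. A minimal sketch (the sample file is created inline so the snippet is self-contained; point the check at your real dump instead):

```sh
# Demo input: a plain-SQL sample. In practice, check your real dump file.
printf -- '-- PostgreSQL database dump\n' > sample-dump.sql

# Custom-format (-Fc) archives begin with the magic bytes "PGDMP";
# plain SQL dumps begin with readable text such as "--" comments.
if head -c 5 sample-dump.sql | grep -q 'PGDMP'; then
  echo "custom format: re-run pg_dump without -Fc or -Fd"
else
  echo "plain SQL: OK to import"
fi
```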
## Clean the Export

The `pg_dump` output may contain statements that DB9 does not support. Remove or comment out:

- `CREATE EXTENSION` for extensions DB9 does not have — DB9 supports 9 built-in extensions. Remove any `CREATE EXTENSION` for extensions not in: `http`, `uuid-ossp`, `hstore`, `fs9`, `pg_cron`, `parquet`, `zhparser`, `vector`, `embedding`.
- `CREATE PUBLICATION`/`CREATE SUBSCRIPTION` — DB9 does not support logical replication.
- Row-level security policies — `CREATE POLICY`, `ALTER TABLE ... ENABLE ROW LEVEL SECURITY`.
- Table partitioning — `PARTITION BY`, `CREATE TABLE ... PARTITION OF`.
- Advisory lock calls — `pg_advisory_lock()`, `pg_try_advisory_lock()`.
- Functions with `WHILE` loops in PL/pgSQL — DB9 supports basic PL/pgSQL but not `WHILE`, `EXECUTE`, or exception handling.
- Locale settings — DB9 accepts and ignores locale parameters from `pg_dump`, so these are safe to leave in.
A quick way to identify issues:

```sh
# Check for unsupported extensions
grep "CREATE EXTENSION" export.sql

# Check for partitioning
grep -i "PARTITION" export.sql

# Check for RLS
grep -i "ROW LEVEL SECURITY\|CREATE POLICY" export.sql

# Check for replication
grep -i "PUBLICATION\|SUBSCRIPTION" export.sql
```
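The grep checks can be extended into a mechanical first pass that comments out single-line unsupported statements. A sketch, demonstrated on an inline sample (run the same `sed` against your real `export.sql`; multi-line DDL such as a `CREATE POLICY` body spanning several lines still needs manual review):

```sh
# Demo input standing in for your real export.sql.
cat > sample-export.sql <<'EOF'
CREATE TABLE users (id bigint PRIMARY KEY);
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
CREATE PUBLICATION pub FOR ALL TABLES;
EOF

# Comment out single-line statements DB9 rejects; -i.bak keeps a backup.
sed -i.bak \
  -e 's/^CREATE PUBLICATION/-- &/' \
  -e 's/^CREATE SUBSCRIPTION/-- &/' \
  -e 's/^CREATE POLICY/-- &/' \
  -e 's/^ALTER TABLE .* ENABLE ROW LEVEL SECURITY;$/-- &/' \
  sample-export.sql

cat sample-export.sql
```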
## Create the DB9 Database

```sh
# Create a new database
db9 create --name my-app --show-connection-string
```

This returns immediately with the connection string and credentials. Save them for your application config.
## Import into DB9

### Option A: CLI import (recommended for most databases)

```sh
db9 db sql my-app -f export.sql
```

This executes the SQL file against your DB9 database via the API. Suitable for databases up to the dump limits (50,000 rows or 16 MB per table).
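To decide between the CLI import and the direct `psql` options below, you can check the export size against the 16 MB figure up front. A rough sketch (the limit is per table, so total file size is only a proxy; the sample file is created inline to keep the snippet self-contained):

```sh
# Demo input standing in for your real export.sql.
printf 'SELECT 1;\n' > sizecheck-sample.sql

# Compare the byte count against the 16 MB API import limit.
size=$(wc -c < sizecheck-sample.sql | tr -d ' ')
limit=$((16 * 1024 * 1024))
if [ "$size" -gt "$limit" ]; then
  echo "larger than 16 MB: prefer the direct psql import"
else
  echo "within 16 MB: db9 db sql -f should work"
fi
```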
### Option B: Direct psql import (for larger databases)

For larger exports, use `psql` with DB9’s connection string directly:

```sh
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f export.sql
```

This streams the SQL through the pgwire protocol and handles larger files without the API dump limits.
### Option C: COPY for bulk data

If your export is large and you split schema from data, you can use `COPY` for bulk loading:

```sh
# Import schema first
psql "$(db9 db status my-app --json | jq -r .connection_string)" -f schema.sql

# Then import data via COPY (pg_dump --data-only uses COPY by default,
# unless --inserts is specified)
pg_dump --data-only --no-owner \
  "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  | psql "$(db9 db status my-app --json | jq -r .connection_string)"
```

DB9 supports `COPY` in CSV and TEXT formats over pgwire.
## Update Your Application

### Connection string

Replace the Neon connection string with DB9’s:

```diff
- DATABASE_URL=postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require
+ DATABASE_URL=postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require
```

Key differences:

- Username: DB9 uses `{tenant_id}.{role}` format (e.g., `a1b2c3d4e5f6.admin`)
- Port: 5433, not 5432
- Database: always `postgres`
- Host: `pg.db9.io` (not region-specific endpoints)
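Those pieces compose into the full URL; a sketch using the placeholder tenant ID from the example (substitute your own tenant and role):

```sh
# Placeholder values from the example above; substitute your own.
TENANT_ID="a1b2c3d4e5f6"
ROLE="admin"

DATABASE_URL="postgresql://${TENANT_ID}.${ROLE}@pg.db9.io:5433/postgres?sslmode=require"
echo "$DATABASE_URL"
# prints: postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require
```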
### Neon serverless driver

If you use `@neondatabase/serverless`, replace it with a standard PostgreSQL driver:

```diff
- import { neon } from '@neondatabase/serverless';
- const sql = neon(process.env.DATABASE_URL);
- const result = await sql`SELECT * FROM users`;
+ import pg from 'pg';
+ const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
+ const result = await pool.query('SELECT * FROM users');
```

DB9 uses standard pgwire (TCP), so `pg` (node-postgres), `psycopg`, `pgx`, and other standard drivers work without modification.

### Connection pooling
Neon provides built-in PgBouncer. DB9 does not include a connection pooler. If your application opens many connections, configure pooling at the application level:
```ts
// node-postgres pool
const pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10, // DB9 handles concurrent connections well
  idleTimeoutMillis: 30000,
});
```

For ORMs, see the integration guides: Prisma, Drizzle, SQLAlchemy.
### Edge Runtime
If you run code in edge/serverless environments (Cloudflare Workers, Vercel Edge Functions) that relied on Neon’s HTTP driver, move general database query workloads to a Node.js runtime over pgwire. DB9 has browser HTTP scoped support (phase-1), but does not provide full HTTP/WebSocket SQL parity with Neon’s serverless driver.
See the Next.js guide for patterns that work with both Server Components and API routes.
## Validate

### Check schema

```sh
db9 db dump my-app --ddl-only
```

Compare the output with your original schema to confirm all tables, indexes, and constraints were created.
### Check row counts

Run a count on your key tables to verify data was imported:

```sh
db9 db sql my-app -q "SELECT count(*) FROM users"
db9 db sql my-app -q "SELECT count(*) FROM orders"
```

Compare row counts against the source Neon database.
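To compare many tables at once, you can dump `table,count` pairs from each side and diff the files. A sketch with inline sample data (in practice each file would come from a query against Neon and DB9 respectively):

```sh
# Inline samples standing in for real per-table counts. In practice,
# generate each file with a query such as:
#   SELECT relname || ',' || n_live_tup FROM pg_stat_user_tables ORDER BY relname;
# (n_live_tup is an estimate; use count(*) per table for exact numbers.)
printf 'orders,1200\nusers,45\n' > neon-counts.csv
printf 'orders,1200\nusers,45\n' > db9-counts.csv

if diff -u neon-counts.csv db9-counts.csv; then
  echo "row counts match"
else
  echo "row counts differ: investigate before cutover"
fi
```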
### Run your test suite

The most reliable validation is running your application’s existing test suite against the DB9 database. Update `DATABASE_URL` in your test environment and run:

```sh
DATABASE_URL="$(db9 db status my-app --json | jq -r .connection_string)" npm test
```

### Check for unsupported features
If your tests fail, check these common differences:
- SERIALIZABLE isolation — DB9 does not support SERIALIZABLE and returns an error. Use REPEATABLE READ or READ COMMITTED instead.
- LISTEN/NOTIFY — not supported; use polling or an external message queue.
- Advisory locks — available, but coordination is node-local (not cross-process/global). For strict row-level coordination semantics, use `SELECT ... FOR UPDATE`.
- Row-level security — browser HTTP scoped support exists (phase-1), but full PostgreSQL-wide parity is not available yet.
## Rollback Plan

If you need to revert:

- Your Neon database is unchanged — switch `DATABASE_URL` back to the Neon connection string.
- If you need to export data created in DB9 back to Neon:

```sh
# Export from DB9
db9 db dump my-app -o db9-export.sql

# Import to Neon (use the direct/unpooled connection)
psql "postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require" \
  -f db9-export.sql
```

The `db9 db dump` command outputs plain SQL (up to 50,000 rows or 16 MB per table). For larger databases, use `psql` to stream individual tables with `COPY`.
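The per-table streaming could look like the following sketch. Everything here is a placeholder (connection strings, table list), and the pipelines are echoed as a dry run; remove the `echo` wrapper to execute them against live databases. Note that `\copy` runs client-side, so no server file access is needed:

```sh
# Placeholders: substitute your real connection strings and tables.
DB9_URL="postgresql://a1b2c3d4e5f6.admin@pg.db9.io:5433/postgres?sslmode=require"
NEON_URL="postgresql://user:pass@ep-cool-name-123456.us-east-2.aws.neon.tech/neondb?sslmode=require"
TABLES="users orders"

for t in $TABLES; do
  # Dry run: print the per-table pipeline that streams CSV out of DB9
  # and straight into Neon via client-side \copy.
  echo "psql \"\$DB9_URL\" -c '\\copy $t TO STDOUT WITH (FORMAT csv)' | psql \"\$NEON_URL\" -c '\\copy $t FROM STDIN WITH (FORMAT csv)'"
done
```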
## Caveats

- No zero-downtime migration — DB9 does not support logical replication, so you cannot stream changes from Neon in real time. Plan a maintenance window or accept a brief cutover period.
- Extension gaps — If your Neon database uses extensions not in DB9’s built-in set (e.g., `postgis`, `pg_trgm`, `pgcrypto`), those features will not be available. Check your `CREATE EXTENSION` statements.
- Dump size limits — The `db9 db sql -f` API import has limits (50,000 rows, 16 MB per table). For larger databases, use a direct `psql` connection for import.
- Branching model — Neon branches are copy-on-write and instant. DB9 branches are full copies and take longer for large databases. Adjust CI workflows that depend on instant branching.
- Autoscaling — Neon can scale compute to zero when idle. DB9 databases are always on. This affects cost for rarely-used databases.
## Next Pages

- Compatibility Matrix — full list of supported and unsupported PostgreSQL features
- Connect — connection string format and authentication options
- Migrate from PostgreSQL — general PostgreSQL migration path
- Migrate from Supabase — Supabase-specific migration guide
- Production Checklist — deployment readiness