
Migrate from Firebase

Firebase offers two NoSQL databases — Cloud Firestore (document model) and Realtime Database (JSON tree). DB9 is a relational (SQL) database. Migrating from Firebase means transforming your data from a document/JSON model into relational tables.

This guide covers exporting your Firebase data, designing a relational schema, transforming the data, and importing it into DB9. It focuses on the database layer only — Firebase Auth, Cloud Functions, Storage, and Hosting need separate replacements.

What moves to DB9:

  • Cloud Firestore / Realtime Database — your application’s data storage layer
  • Firestore queries — replaced by SQL queries with joins, CTEs, aggregations, and window functions
| Firebase feature | What to use instead |
| --- | --- |
| Firebase Auth | Third-party auth (Auth0, Clerk, Supabase Auth) or custom JWT |
| Cloud Functions | Cloudflare Workers, Vercel Functions, AWS Lambda |
| Cloud Storage | AWS S3, GCS, or DB9’s fs9 extension for file-as-SQL workflows |
| Hosting | Vercel, Netlify, Cloudflare Pages |
| Realtime listeners | Polling, application WebSockets, or an external message queue |
| Security Rules | Enforce access control in your application layer |
| Remote Config / Analytics | Third-party equivalents (PostHog, Amplitude, LaunchDarkly) |
| Area | Firebase (Firestore) | DB9 |
| --- | --- | --- |
| Data model | Document/collection (NoSQL) | Relational tables (SQL) |
| Query language | Firestore SDK / chained filters | Standard SQL (PostgreSQL-compatible) |
| Joins | Not supported (denormalized data) | Full JOIN support — normalize your data |
| Transactions | Single-document or cross-document (limited) | Full ACID transactions with REPEATABLE READ |
| Real-time updates | Built-in onSnapshot listeners | Not built-in — use polling or application WebSockets |
| Indexes | Automatic single-field, manual composite | Manual (B-tree, HNSW for vectors) |
| Scaling | Auto-scales reads/writes | Fixed per-database, always on |
| Offline support | Built-in client cache | Not built-in — implement at application level |
| Connection | Firebase SDK (HTTPS) | pgwire protocol (TCP) — standard PostgreSQL drivers |
| Pricing | Per-read/write/document | Per-database (fixed compute) |
  • Access to your Firebase project (Firebase Console or firebase CLI)
  • Node.js installed (for the export/transform scripts)
  • DB9 CLI installed: curl -fsSL https://db9.ai/install | sh
  • A DB9 account: db9 create --name my-app to create your target database
  1. Export from Firebase

    Option A: Firestore — managed export (gcloud CLI)

    Terminal
    # Managed Firestore exports use the gcloud CLI and a GCS bucket
    gcloud auth login
    # Export all collections
    gcloud firestore export gs://your-bucket/firestore-export
    # Then download from GCS
    gsutil -m cp -r gs://your-bucket/firestore-export ./firestore-export

    Note: the managed export is a binary (LevelDB) format intended for re-import into Firestore or BigQuery, which is why Option B is recommended for transformation.

    Option B: Firestore — custom script (recommended for transformation)

    Write a Node.js script to export each collection as JSON:

    JavaScript
    const admin = require('firebase-admin');
    const fs = require('fs');

    admin.initializeApp({ credential: admin.credential.applicationDefault() });
    const db = admin.firestore();

    async function exportCollection(name) {
      const snapshot = await db.collection(name).get();
      const docs = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
      fs.writeFileSync(`${name}.json`, JSON.stringify(docs, null, 2));
      console.log(`Exported ${docs.length} docs from ${name}`);
    }

    // Export each top-level collection
    async function main() {
      await exportCollection('users');
      await exportCollection('posts');
      await exportCollection('comments');
      // Add more collections as needed
    }

    main().catch(console.error);
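The script above only reaches top-level collections. Subcollections (like posts/{postId}/comments in step 2) can be exported with a collection-group query. This is a sketch that assumes an initialized admin.firestore() instance is passed in as `db`, and it records each document's parent ID for the future foreign-key column (the `post_id` name is illustrative):

```javascript
// Flatten a subcollection document into a row, capturing the parent
// document's ID (via doc.ref.parent.parent) for the future foreign key.
function toRow(doc, parentKey) {
  return { id: doc.id, [parentKey]: doc.ref.parent.parent.id, ...doc.data() };
}

// Export every instance of a subcollection across all parent documents.
// `db` is an initialized admin.firestore() instance; `fs` is require('fs').
async function exportSubcollection(db, fs, name, parentKey) {
  const snapshot = await db.collectionGroup(name).get();
  const rows = snapshot.docs.map(doc => toRow(doc, parentKey));
  fs.writeFileSync(`${name}.json`, JSON.stringify(rows, null, 2));
  console.log(`Exported ${rows.length} docs from subcollection ${name}`);
}
```

Usage would look like `exportSubcollection(db, require('fs'), 'comments', 'post_id')`.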

    Option C: Realtime Database

    Terminal
    # Export via REST API
    curl "https://your-project.firebaseio.com/.json?auth=YOUR_SECRET" > rtdb-export.json
    # Or use Firebase CLI
    firebase database:get / --project your-project > rtdb-export.json
  2. Design Your Relational Schema

    Firebase encourages denormalized data. For DB9, normalize into relational tables with foreign keys.

    Example: Firestore document model → relational schema

    Firestore structure:

    users/{userId}
    ├── name: "Alice"
    ├── email: "alice@example.com"
    └── posts (subcollection)
        └── {postId}
            ├── title: "Hello"
            ├── content: "World"
            ├── tags: ["dev", "db"]
            └── comments (subcollection)
                └── {commentId}
                    ├── text: "Nice post"
                    └── authorId: "user123"

    Relational schema:

    SQL
    CREATE TABLE users (
      id TEXT PRIMARY KEY,              -- Firestore document ID
      name TEXT NOT NULL,
      email TEXT UNIQUE NOT NULL,
      created_at TIMESTAMPTZ DEFAULT now()
    );

    CREATE TABLE posts (
      id TEXT PRIMARY KEY,              -- Firestore document ID
      user_id TEXT NOT NULL REFERENCES users(id),
      title TEXT NOT NULL,
      content TEXT,
      tags TEXT[],                      -- PostgreSQL array for tags
      created_at TIMESTAMPTZ DEFAULT now()
    );

    CREATE TABLE comments (
      id TEXT PRIMARY KEY,              -- Firestore document ID
      post_id TEXT NOT NULL REFERENCES posts(id),
      author_id TEXT NOT NULL REFERENCES users(id),
      text TEXT NOT NULL,
      created_at TIMESTAMPTZ DEFAULT now()
    );

    CREATE INDEX idx_posts_user_id ON posts(user_id);
    CREATE INDEX idx_comments_post_id ON comments(post_id);
    CREATE INDEX idx_comments_author_id ON comments(author_id);

    Key decisions:

    • Document IDs → use as primary keys (TEXT) or generate new UUIDs
    • Subcollections → become separate tables with foreign keys
    • Nested objects → either flatten into columns or store as JSONB
    • Arrays → use PostgreSQL array types or normalize into junction tables
    • Timestamps → Firestore Timestamps → TIMESTAMPTZ
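The timestamp conversion can be sketched directly: Firestore Timestamps exported via JSON.stringify from the admin SDK serialize as { _seconds, _nanoseconds } objects (the field names depend on your export path; adjust if yours differ). A small helper turns them into ISO 8601 strings that PostgreSQL accepts for TIMESTAMPTZ columns:

```javascript
// Convert a serialized Firestore Timestamp ({ _seconds, _nanoseconds })
// into an ISO 8601 string accepted by PostgreSQL TIMESTAMPTZ columns.
// The underscore-prefixed field names are what JSON.stringify produces
// for admin-SDK Timestamp objects; adjust them if your export differs.
function timestampToISO(ts) {
  if (ts === null || ts === undefined) return null;
  const millis = ts._seconds * 1000 + Math.floor(ts._nanoseconds / 1e6);
  return new Date(millis).toISOString();
}
```

For example, `timestampToISO({ _seconds: 1700000000, _nanoseconds: 0 })` yields `2023-11-14T22:13:20.000Z`.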
  3. Transform Data to SQL

    Write a script to convert exported JSON into SQL INSERT statements:

    JavaScript
    const fs = require('fs');

    function escapeSQL(val) {
      if (val === null || val === undefined) return 'NULL';
      if (typeof val === 'number') return String(val);
      if (typeof val === 'boolean') return val ? 'TRUE' : 'FALSE';
      if (Array.isArray(val)) return `ARRAY[${val.map(v => `'${String(v).replace(/'/g, "''")}'`).join(',')}]::TEXT[]`;
      return `'${String(val).replace(/'/g, "''")}'`;
    }

    // Transform users
    const users = JSON.parse(fs.readFileSync('users.json', 'utf-8'));
    let sql = '';
    sql += `-- Schema\n`;
    sql += `CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY, name TEXT NOT NULL, email TEXT UNIQUE NOT NULL);\n\n`;
    sql += `-- Data\n`;
    for (const user of users) {
      sql += `INSERT INTO users (id, name, email) VALUES (${escapeSQL(user.id)}, ${escapeSQL(user.name)}, ${escapeSQL(user.email)});\n`;
    }
    // Repeat for other collections...
    fs.writeFileSync('import.sql', sql);
    console.log('Generated import.sql');

    For large datasets, use COPY format instead of individual INSERTs. COPY's text format treats backslash, tab, and newline as special characters and uses \N for NULL, so escape values accordingly:

    JavaScript
    // Generate TSV for COPY; escape backslash/tab/newline, \N for NULL
    const esc = v => v == null ? '\\N' : String(v).replace(/\\/g, '\\\\').replace(/\t/g, '\\t').replace(/\n/g, '\\n');
    const tsv = users.map(u => [u.id, u.name, u.email].map(esc).join('\t')).join('\n');
    fs.writeFileSync('users.tsv', tsv);
  4. Create the DB9 Database

    Terminal
    db9 create --name my-app --show-connection-string
  5. Import into DB9

    Import schema first, then data:

    Terminal
    # Import schema and data from the generated SQL file
    db9 db sql my-app -f import.sql

    For larger datasets, use COPY:

    Terminal
    # Import schema
    db9 db sql my-app -f schema.sql
    # Import data via COPY
    psql "$(db9 db status my-app --json | jq -r .connection_string)" \
    -c "\COPY users FROM 'users.tsv' WITH (FORMAT text)"
  6. Update Your Application

    Replace the Firebase SDK with a PostgreSQL driver

    Diff
    - import { getFirestore, collection, getDocs, query, where } from 'firebase/firestore';
    - const db = getFirestore(app);
    - const q = query(collection(db, 'posts'), where('published', '==', true));
    - const snapshot = await getDocs(q);
    - const posts = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
    + import pg from 'pg';
    + const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
    + const { rows: posts } = await pool.query('SELECT * FROM posts WHERE published = true');

    Or use an ORM: Prisma, Drizzle, SQLAlchemy.

    Replace Firestore queries with SQL

    | Firestore | SQL |
    | --- | --- |
    | `where('age', '>', 18)` | `WHERE age > 18` |
    | `orderBy('created', 'desc').limit(10)` | `ORDER BY created DESC LIMIT 10` |
    | `doc('users/abc')` | `SELECT * FROM users WHERE id = 'abc'` |
    | Subcollection query | JOIN or subquery |
    | `where('tags', 'array-contains', 'dev')` | `WHERE 'dev' = ANY(tags)` |
    | Compound queries (limited in Firestore) | Full SQL with multiple JOINs and conditions |

    Replace real-time listeners

    Firestore’s onSnapshot has no direct DB9 equivalent. Alternatives:

    • Polling — query the database on an interval
    • Application WebSockets — push changes from your API when writes happen
    • Server-Sent Events — stream updates from your API layer
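As a minimal sketch of the polling approach (assuming the watched table has an `updated_at TIMESTAMPTZ` column and `pool` is a pg.Pool created elsewhere; the table and column names are illustrative):

```javascript
// Advance a high-water mark so each poll reports only new changes.
// Assumes `rows` are ordered by updated_at ascending.
function advanceWatermark(lastSeen, rows) {
  return rows.length > 0 ? rows[rows.length - 1].updated_at : lastSeen;
}

// Poll DB9 on an interval, emitting rows changed since the last poll.
// `pool` is a pg.Pool (not created here); `onChange` receives new rows.
function startPolling(pool, onChange, intervalMs = 2000) {
  let lastSeen = new Date(0);
  return setInterval(async () => {
    const { rows } = await pool.query(
      'SELECT * FROM posts WHERE updated_at > $1 ORDER BY updated_at',
      [lastSeen]
    );
    if (rows.length > 0) onChange(rows);
    lastSeen = advanceWatermark(lastSeen, rows);
  }, intervalMs);
}
```

The watermark avoids re-emitting rows the client has already seen, at the cost of requiring writes to touch `updated_at`.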

    Replace Security Rules

    Firestore Security Rules run at the database level. With DB9, enforce access control in your API:

    TypeScript
    // Before (Firestore Security Rules):
    // match /posts/{postId} { allow read: if request.auth != null; }

    // After (application-level):
    app.get('/api/posts', authMiddleware, async (req, res) => {
      const { rows } = await pool.query(
        'SELECT * FROM posts WHERE user_id = $1',
        [req.user.id]
      );
      res.json(rows);
    });
  7. Validate

    Check row counts

    Compare the number of documents in each Firestore collection with the row count in DB9:

    Terminal
    db9 db sql my-app -q "SELECT count(*) FROM users"
    db9 db sql my-app -q "SELECT count(*) FROM posts"
    db9 db sql my-app -q "SELECT count(*) FROM comments"
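The Firestore side of this comparison can use the admin SDK's count() aggregate (available in firebase-admin v11+, an assumption to check against your SDK version). A sketch that compares both sides, with `db` an initialized admin.firestore() instance and `pool` a pg.Pool, neither created here:

```javascript
// Return the names whose counts differ between the two sides.
function diffCounts(firestoreCounts, db9Counts) {
  return Object.keys(firestoreCounts).filter(
    name => firestoreCounts[name] !== db9Counts[name]
  );
}

// Gather per-collection/table counts from Firestore and DB9, then compare.
// `db` is admin.firestore() (count() needs firebase-admin v11+);
// `pool` is a pg.Pool. Assumes collection and table names match.
async function compareCounts(db, pool, names) {
  const fsCounts = {}, pgCounts = {};
  for (const name of names) {
    fsCounts[name] = (await db.collection(name).count().get()).data().count;
    pgCounts[name] = Number((await pool.query(`SELECT count(*) FROM ${name}`)).rows[0].count);
  }
  const mismatched = diffCounts(fsCounts, pgCounts);
  console.log(mismatched.length ? `Mismatch: ${mismatched.join(', ')}` : 'All counts match');
}
```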

    Run your test suite

    Terminal
    DATABASE_URL="$(db9 db status my-app --json | jq -r .connection_string)" npm test

    Verify query results

    Test key queries that your application relies on and compare results with the original Firestore queries.

Your Firebase project is unchanged by the migration. To revert:

  1. Switch your application back to the Firebase SDK and restore the original Firestore configuration.
  2. If you need to export data created in DB9 back to Firestore, write a reverse transformation script that reads from DB9 and writes documents back to Firestore.
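Such a reverse script can be sketched as follows (a hypothetical helper, not part of this guide's tooling): `db` is an initialized admin.firestore() instance and `pool` a pg.Pool; Firestore batched writes are capped at 500 operations, hence the chunking.

```javascript
// Split rows into chunks of at most `size` elements
// (Firestore batched writes allow at most 500 operations each).
function chunk(rows, size = 500) {
  const out = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}

// Write every row of a DB9 table back to the Firestore collection of
// the same name, reusing the row id as the document ID.
// `db` is admin.firestore(), `pool` is a pg.Pool (neither created here).
async function reverseMigrate(db, pool, table) {
  const { rows } = await pool.query(`SELECT * FROM ${table}`);
  for (const group of chunk(rows)) {
    const batch = db.batch();
    for (const { id, ...fields } of group) {
      batch.set(db.collection(table).doc(id), fields);
    }
    await batch.commit();
  }
  console.log(`Wrote ${rows.length} docs back to ${table}`);
}
```

Note that this flattens relational rows back into top-level collections; restoring subcollection nesting would need the foreign-key columns mapped back to document paths.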
  • Data model redesign required — This is not a lift-and-shift migration. You must design a relational schema, which may require significant application changes.
  • No real-time listeners — DB9 does not have built-in real-time subscriptions like Firestore’s onSnapshot. Implement polling or WebSockets in your application.
  • No offline support — Firestore’s built-in offline cache and sync are not available. If your app needs offline support, implement it at the application level.
  • No automatic indexes — Firestore auto-indexes every field. In DB9, create indexes manually for your query patterns.
  • Security rules must move to application code — Firestore Security Rules are enforced at the database level. With DB9, enforce access control in your API layer.
  • Subcollection patterns change — Firestore subcollections become separate tables with foreign keys. Update all queries that traverse subcollections.
  • Pricing model change — Firebase charges per-read/write. DB9 charges per-database (fixed compute). This may be cheaper or more expensive depending on your access patterns.