I Replaced MongoDB with a Single Postgres Table

Look, I need to show you something that might hurt your feelings:
CREATE TABLE MongoDB (
_id UUID PRIMARY KEY,
data JSONB
);
Before you @ me in the comments, hear me out. What if I told you that 90% of your NoSQL use cases could be handled by this one weird Postgres trick that database admins don’t want you to know about?
The Problem: Why We Think We Need NoSQL
We’ve all been there. It’s 2 AM, you’re knee-deep in a schema migration, and you’re questioning every life choice that led you to this moment. Your product manager just asked for “just one more field” and now you’re writing migration scripts like it’s 2009.
“Maybe I should just use MongoDB,” you whisper to yourself. “Schema flexibility! No migrations! Document storage!”
But here’s the thing: You probably don’t need MongoDB. You need JSONB.
Enter JSONB: The Hero We Don’t Deserve
JSONB isn’t just JSON slapped into a Postgres column. Oh no, my friend. It’s JSON’s cooler, faster, more attractive older sibling who went to the gym and learned how to use indexes.
Here’s what makes JSONB special:
- Binary storage format (the B stands for Binary, not 🐝)
- GIN indexes that make queries stupid fast
- Native operators that would make a JavaScript developer weep with joy
- Full SQL power combined with NoSQL flexibility
It’s like MongoDB and Postgres had a baby, and that baby grew up to be a superhero.
Mind-Blowing Features Most Devs Don’t Know About
The Operators That Will Change Your Life
-- The containment operator @>
-- "Does this JSON contain this structure?"
SELECT * FROM users
WHERE preferences @> '{"theme": "dark"}';
-- The existence operator ?
-- "Does this key exist?"
SELECT * FROM products
WHERE attributes ? 'wireless';
-- The arrow operators -> and ->>
-- -> returns JSON, ->> returns text
SELECT
data->>'name' AS name,
data->'address'->>'city' AS city
FROM users;
-- The path operator #>
-- Navigate deep into nested JSON
SELECT * FROM events
WHERE data #> '{user,settings,notifications}' = 'true';
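If the `@>` matching rules feel fuzzy, here's a rough Python sketch of the semantics (simplified — the real implementation handles a few extra edge cases, like a top-level array containing a bare scalar):

```python
# Hypothetical sketch of jsonb containment (@>) semantics.
# Objects: every key/value on the right must be contained on the left.
# Arrays: every right element must be contained by SOME left element.
# Scalars: plain equality.
def jsonb_contains(left, right):
    if isinstance(left, dict) and isinstance(right, dict):
        return all(
            k in left and jsonb_contains(left[k], v)
            for k, v in right.items()
        )
    if isinstance(left, list) and isinstance(right, list):
        return all(
            any(jsonb_contains(l, r) for l in left)
            for r in right
        )
    return left == right

prefs = {"theme": "dark", "notifications": {"email": True, "push": False}}
print(jsonb_contains(prefs, {"theme": "dark"}))                   # True
print(jsonb_contains(prefs, {"notifications": {"email": True}}))  # True
print(jsonb_contains(prefs, {"notifications": {"push": True}}))   # False
```

The key intuition: containment is recursive and partial — the right-hand side only has to describe a subset of the structure.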
Indexing Specific JSON Paths (Wait, What?)
This is where things get spicy. You can create indexes on specific paths within your JSON:
-- Index a specific field
CREATE INDEX idx_user_email ON users ((data->>'email'));
-- Index for existence queries
CREATE INDEX idx_attributes ON products USING GIN (attributes);
-- Index for containment queries
CREATE INDEX idx_preferences ON users USING GIN (preferences);
Now your JSON queries are faster than your coworker who claims they “don’t need indexes because MongoDB handles it.”
Full-Text Search Inside JSON 🤯
Hold onto your keyboards:
-- Add full-text search to JSON fields
CREATE INDEX idx_content_search ON articles
USING GIN (to_tsvector('english', data->>'content'));
-- Search like a boss
SELECT * FROM articles
WHERE to_tsvector('english', data->>'content') @@ plainto_tsquery('postgres jsonb amazing');
Real Code Examples (The Meat)
Let’s start with something practical. Say you’re building a SaaS product (like UserJot - shameless plug for my feedback management tool) and you need to store user preferences:
-- The hybrid approach: structured + flexible
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ DEFAULT NOW(),
preferences JSONB DEFAULT '{}',
metadata JSONB DEFAULT '{}'
);
-- Insert a user with preferences
INSERT INTO users (email, preferences) VALUES (
'john@example.com',
'{
"theme": "dark",
"notifications": {
"email": true,
"push": false,
"frequency": "daily"
},
"features": {
"beta": true,
"advancedAnalytics": false
}
}'
);
-- Query users who have dark theme AND email notifications
SELECT email FROM users
WHERE preferences @> '{"theme": "dark", "notifications": {"email": true}}';
-- Update nested preferences
UPDATE users
SET preferences = jsonb_set(
preferences,
'{notifications,push}',
'true'
)
WHERE email = 'john@example.com';
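A detail worth internalizing: `jsonb_set` never mutates anything — it returns a whole new value, and the row is rewritten. Here's a hedged Python sketch of the path walk (objects only; real `jsonb_set` also handles array indices):

```python
import copy

# Hypothetical sketch of jsonb_set(target, path, new_value) semantics.
# Returns a NEW document with the value at `path` replaced — Postgres
# never edits in place; the whole jsonb value is rewritten on update.
def jsonb_set(target, path, new_value, create_missing=True):
    result = copy.deepcopy(target)
    node = result
    for key in path[:-1]:
        if not isinstance(node, dict) or key not in node:
            return result  # intermediate path missing: return unchanged
        node = node[key]
    if isinstance(node, dict) and (path[-1] in node or create_missing):
        node[path[-1]] = new_value
    return result

prefs = {"notifications": {"email": True, "push": False}}
updated = jsonb_set(prefs, ["notifications", "push"], True)
print(updated["notifications"]["push"])  # True
print(prefs["notifications"]["push"])    # False (original untouched)
```

This is also why the "Frequent Updates" caveat later in this post exists: flipping one boolean costs a rewrite of the whole document.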
The Event Log Pattern (Chef’s Kiss)
This is where JSONB absolutely shines:
CREATE TABLE events (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
event_type TEXT NOT NULL,
user_id UUID,
occurred_at TIMESTAMPTZ DEFAULT NOW(),
data JSONB NOT NULL
);
-- Partial index covering only the event types we query most often
CREATE INDEX idx_events_type ON events (event_type)
WHERE event_type IN ('purchase', 'signup', 'feedback');
CREATE INDEX idx_events_data ON events USING GIN (data);
-- Insert different event types with different schemas
INSERT INTO events (event_type, user_id, data) VALUES
('signup', '123e4567-e89b-12d3-a456-426614174000', '{
"source": "google",
"campaign": "summer-2024",
"referrer": "blog-post"
}'),
('purchase', '123e4567-e89b-12d3-a456-426614174000', '{
"items": [
{"sku": "PROD-1", "quantity": 2, "price": 49.99},
{"sku": "PROD-2", "quantity": 1, "price": 19.99}
],
"discount": "SUMMER20",
"total": 99.97
}'),
('feedback', '123e4567-e89b-12d3-a456-426614174000', '{
"type": "feature_request",
"title": "Add dark mode",
"priority": "high",
"tags": ["ui", "accessibility"]
}');
-- Find all purchases with a specific discount
SELECT * FROM events
WHERE event_type = 'purchase'
AND data @> '{"discount": "SUMMER20"}';
-- Calculate total revenue from events
SELECT SUM((data->>'total')::NUMERIC) AS total_revenue
FROM events
WHERE event_type = 'purchase'
AND occurred_at >= NOW() - INTERVAL '30 days';
Product Catalog with Dynamic Attributes
This is the example that makes MongoDB developers question everything:
CREATE TABLE products (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
price NUMERIC(10,2) NOT NULL,
attributes JSONB DEFAULT '{}'
);
-- Insert products with completely different attributes
INSERT INTO products (name, price, attributes) VALUES
('iPhone 15', 999.00, '{
"brand": "Apple",
"storage": "256GB",
"color": "Blue",
"5g": true,
"screen": {
"size": "6.1 inches",
"type": "OLED",
"resolution": "2532x1170"
}
}'),
('Nike Air Max', 120.00, '{
"brand": "Nike",
"size": "10",
"color": "Black/White",
"material": "Mesh",
"style": "Running"
}'),
('The Pragmatic Programmer', 39.99, '{
"author": "David Thomas",
"isbn": "978-0135957059",
"pages": 352,
"publisher": "Addison-Wesley",
"edition": "2nd"
}');
-- Find all products with 5G
SELECT name, price FROM products WHERE attributes @> '{"5g": true}';
-- Find products by brand
SELECT * FROM products WHERE attributes->>'brand' = 'Apple';
-- Complex query: Find all products with screens larger than 6 inches
-- (size is stored as text like "6.1 inches", so strip the unit first)
SELECT name, attributes->'screen'->>'size' AS screen_size
FROM products
WHERE split_part(attributes->'screen'->>'size', ' ', 1)::NUMERIC > 6.0;
When JSONB Absolutely Destroys (Use Cases)
Here’s where you should absolutely use JSONB:
- User Preferences/Settings: Every user wants different things. Don’t create 50 boolean columns.
- Event Logs: Different events = different data. JSONB handles it like a champ.
- Product Catalogs: Books have ISBNs, shoes have sizes, phones have screen resolutions. One schema to rule them all.
- API Response Caching: Store that third-party API response without parsing it.
- Form Submissions: Especially when you’re building something like UserJot where user feedback can have custom fields.
- Feature Flags & Configuration:
CREATE TABLE feature_flags (
key TEXT PRIMARY KEY,
config JSONB
);
INSERT INTO feature_flags VALUES
('new_dashboard', '{
"enabled": true,
"rollout_percentage": 25,
"whitelist_users": ["user-123", "user-456"],
"blacklist_countries": ["XX"],
"start_date": "2024-01-01",
"end_date": null
}');
The Plot Twist: When You Still Need Real Columns
Let’s be real for a second. JSONB isn’t always the answer. Here’s when you should use regular columns:
- Foreign Keys: You can’t reference JSONB fields in foreign key constraints
- Heavy Aggregations: SUM, AVG, COUNT on JSONB fields are slower
- Frequent Updates: Updating a single JSONB field rewrites the entire JSON
- Type Safety: When you REALLY need that data to be an integer
The secret sauce? Hybrid approach:
CREATE TABLE orders (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id), -- Real FK
total NUMERIC(10,2) NOT NULL, -- For fast aggregations
status TEXT NOT NULL, -- For indexed lookups
created_at TIMESTAMPTZ DEFAULT NOW(),
line_items JSONB, -- Flexible item details
metadata JSONB -- Everything else
);
The Grand Finale: Migration Strategy
Here’s how to migrate from MongoDB to Postgres/JSONB:
# Pseudo-code for the brave
import psycopg2
from psycopg2.extras import Json
from pymongo import MongoClient

# Connect to both
mongo = MongoClient('mongodb://localhost:27017/')
postgres = psycopg2.connect("postgresql://...")
cur = postgres.cursor()

# Migrate with style
# Note: Mongo ObjectIds aren't UUIDs — use a TEXT id column,
# or map _id to a UUID of your own
for doc in mongo.mydb.mycollection.find():
    doc_id = str(doc.pop('_id'))
    cur.execute(
        "INSERT INTO my_table (id, data) VALUES (%s, %s)",
        (doc_id, Json(doc))
    )

postgres.commit()
Try This One Query and Tell Me It’s Not Magic
Here’s your homework. Create this table and run this query:
-- Create a table
CREATE TABLE magic (
id SERIAL PRIMARY KEY,
data JSONB
);
-- Insert nested, complex data
INSERT INTO magic (data) VALUES
('{"user": {"name": "Alice", "scores": [10, 20, 30], "preferences": {"level": "expert"}}}'),
('{"user": {"name": "Bob", "scores": [5, 15, 25], "preferences": {"level": "beginner"}}}');
-- Mind-blowing query: Find users with average score > 15 AND expert level
SELECT
data->'user'->>'name' AS name,
(SELECT AVG(value::INT) FROM jsonb_array_elements_text(data->'user'->'scores') AS value) AS avg_score
FROM magic
WHERE data @> '{"user": {"preferences": {"level": "expert"}}}'
AND (
SELECT AVG(value::INT)
FROM jsonb_array_elements_text(data->'user'->'scores') AS value
) > 15;
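For intuition, here's roughly what that query computes, restated in plain Python (this is not how Postgres executes it — the planner uses the GIN index for the containment filter):

```python
# The same filter in plain Python: a containment check on preferences,
# then an average over the scores array.
rows = [
    {"user": {"name": "Alice", "scores": [10, 20, 30],
              "preferences": {"level": "expert"}}},
    {"user": {"name": "Bob", "scores": [5, 15, 25],
              "preferences": {"level": "beginner"}}},
]

results = [
    (row["user"]["name"],
     sum(row["user"]["scores"]) / len(row["user"]["scores"]))
    for row in rows
    if row["user"]["preferences"]["level"] == "expert"
    and sum(row["user"]["scores"]) / len(row["user"]["scores"]) > 15
]
print(results)  # [('Alice', 20.0)]
```

The difference is that Postgres does this inside the database, over indexed data, without shipping every row to the application.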
If that doesn’t make you reconsider your MongoDB addiction, I don’t know what will.
Bonus: The Ultimate JSONB Cheat Sheet
-- Operators
@> -- Contains
<@ -- Is contained by
? -- Key exists
?| -- Any key exists
?& -- All keys exist
|| -- Concatenate
- -- Delete key/element
#- -- Delete at path
-- Functions
jsonb_set() -- Update value at path
jsonb_insert() -- Insert value at path
jsonb_strip_nulls() -- Remove null values
jsonb_pretty() -- Format for humans
jsonb_agg() -- Aggregate into array
jsonb_object_agg() -- Aggregate into object
-- Performance tips
1. Use GIN indexes for @> and ? operators
2. Use btree indexes for ->> on specific fields
3. Partial indexes for common queries
4. Don't nest more than 3-4 levels deep
5. Keep JSONB documents under 1MB
The Real Talk
Look, I’m not saying MongoDB is bad. It has its place. But before you reach for a separate NoSQL database, ask yourself: Could JSONB do this?
9 times out of 10, the answer is yes. And you get to keep:
- ACID transactions
- Joins when you need them
- Your existing Postgres knowledge
- One less database to manage
- Money in your pocket (Postgres is free!)
At UserJot, we use JSONB extensively for storing user feedback metadata, custom fields, and integration configurations. It gives us MongoDB-like flexibility with Postgres reliability. Best of both worlds.
Now go forth and @> all the things! Drop a comment with your wildest JSONB use case. I’ll be here, answering questions and probably making more bad database jokes.
P.S. - That MongoDB table at the beginning? It actually works. I’m not saying you should use it, but… you could. 😈
P.P.S. - If you’re collecting user feedback and want something better than a JSONB column (though honestly, JSONB would work), check out UserJot. We built it with lots of JSONB magic under the hood.