How to Debug Slow Database Queries in Node.js (Mongoose, Prisma, pg)
Slow database queries are responsible for the majority of API latency problems. This guide shows you how to find and fix slow queries across the most common Node.js database libraries — without instrumenting everything by hand.
Why Database Queries Are Almost Always the Bottleneck
When a Node.js API is slow, the culprit is almost always the database — not the JavaScript logic. Node.js is fast at I/O. It can handle thousands of concurrent requests without breaking a sweat. What it can't speed up is a database query that scans 2 million rows because there's no index, or a route that fires 40 separate queries in a loop.
A few numbers from typical production APIs:
- Pure JavaScript processing: 0.1–5ms
- DB query with good indexes: 1–20ms
- DB query with missing index on large table: 500–5000ms
- N+1 loop (20 queries): 40–400ms
If your route takes 800ms, there's a 90%+ chance the database is responsible for 700+ of those milliseconds.
The problem is visibility. Without instrumentation, you have no idea which query in which route is slow. This guide shows you how to get that visibility — both manually (for understanding) and automatically (for production).
The Manual Approach: console.time() Everywhere
Before reaching for a tool, understand how manual DB profiling works:
```javascript
app.get('/api/dashboard', async (req, res) => {
  console.time('fetch-user');
  const user = await db.query('SELECT * FROM users WHERE id = $1', [req.params.id]);
  console.timeEnd('fetch-user');

  console.time('fetch-activity');
  const activity = await db.query(
    'SELECT * FROM activity WHERE user_id = $1 ORDER BY created_at DESC LIMIT 50',
    [req.params.id]
  );
  console.timeEnd('fetch-activity');

  console.time('fetch-stats');
  const stats = await db.query('SELECT COUNT(*) FROM events WHERE user_id = $1', [req.params.id]);
  console.timeEnd('fetch-stats');

  res.json({ user: user.rows[0], activity: activity.rows, stats: stats.rows[0] });
});
```
Output:
```
fetch-user: 3ms
fetch-activity: 847ms   ← problem
fetch-stats: 2ms
```
This works in development. In production, you'd need to:
- Add this to every route
- Ship the timings somewhere that's not a log file
- Aggregate across thousands of requests
- Remove it all before each release (or it pollutes your logs)
That's why we use middleware instead.
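Under the hood, middleware-based profiling boils down to wrapping the client's query function once, so every call is measured without touching route code. A minimal sketch of the idea (the `onMeasure` callback is hypothetical — a real tool would ship these numbers to an aggregator instead of an array):

```javascript
// Wrap any async query function so each call is timed automatically.
// `onMeasure` receives { sql, durationMs } for every query.
function instrumentQuery(queryFn, onMeasure) {
  return async function instrumented(...args) {
    const start = process.hrtime.bigint();
    try {
      return await queryFn(...args);
    } finally {
      const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
      onMeasure({ sql: args[0], durationMs });
    }
  };
}
```

Usage would look like `pool.query = instrumentQuery(pool.query.bind(pool), (m) => metrics.push(m))` — one line at startup instead of timing code in every handler.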
Automatic DB Query Profiling: One Line
```bash
npm install auto-api-observe
```

```javascript
const { observe } = require('auto-api-observe');

app.use(observe({ apiKey: process.env.APILENS_KEY }));
```
The middleware automatically wraps your DB client library. You don't add timing code to individual routes. Every query across every route is measured in production and aggregated in the dashboard under the Database tab.
What you get per route:
- Total query count
- Total query duration
- Average query duration
- N+1 flag (if the same query pattern runs 3+ times in one request)
Debugging Slow Queries by Library
pg (node-postgres)
Common slow query patterns:
```javascript
// ❌ Slow: full table scan (no index on email)
const user = await pool.query('SELECT * FROM users WHERE email = $1', [email]);

// ✓ Fast: indexed lookup
// Run once: CREATE INDEX idx_users_email ON users(email);
const user = await pool.query('SELECT id, name, email FROM users WHERE email = $1', [email]);
```
How to get the execution plan:
```javascript
const plan = await pool.query('EXPLAIN ANALYZE SELECT * FROM users WHERE email = $1', [email]);
console.log(plan.rows.map((r) => r['QUERY PLAN']).join('\n'));
```
Look for:
- Seq Scan on large tables → add an index
- Hash Join on unindexed foreign keys → add index on the FK column
- Sort on large result sets → add index with ORDER BY direction
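The first of those checks is easy to automate. A small helper (illustrative, not part of any library) that takes the rows pg returns for an `EXPLAIN` query and flags sequential scans:

```javascript
// pg returns EXPLAIN output as rows shaped { 'QUERY PLAN': '...' }.
// Flag plans that fall back to a sequential scan — on a large table
// that usually means a missing index.
function hasSeqScan(planRows) {
  return planRows.some((r) => /Seq Scan/.test(r['QUERY PLAN']));
}
```

You could run this against the `plan.rows` from the snippet above and log a warning in development whenever it returns `true`.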
Postgres's slow query log (set in postgresql.conf):

```
log_min_duration_statement = 100   # log queries taking > 100ms
```
This is the most powerful debugging tool for persistent slow queries.
Mongoose (MongoDB)
Enable Mongoose debug mode in development:
```javascript
mongoose.set('debug', (collectionName, method, query, doc) => {
  console.log(`${collectionName}.${method}`, JSON.stringify(query));
});
```
Common slow patterns:
```javascript
// ❌ No index on a frequently-queried field
const posts = await Post.find({ authorId: userId }).sort({ createdAt: -1 }).limit(20);

// Add indexes in your schema:
const postSchema = new mongoose.Schema({
  authorId: { type: mongoose.Schema.Types.ObjectId, ref: 'User', index: true },
  createdAt: { type: Date, index: true },
});

// Or a compound index that covers the filter + sort together:
postSchema.index({ authorId: 1, createdAt: -1 });
```
Get MongoDB query explain:
```javascript
const explanation = await Post.find({ authorId: userId }).explain('executionStats');
console.log(explanation.executionStats);
// Look for: totalDocsExamined >> nReturned → missing index
```
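That examined-vs-returned check can be expressed as a small heuristic function (illustrative — the 10x cutoff is an arbitrary threshold, not a MongoDB rule):

```javascript
// If MongoDB examined far more documents than it returned, the query
// probably isn't using a suitable index.
function looksUnindexed(executionStats, maxRatio = 10) {
  const { totalDocsExamined, nReturned } = executionStats;
  if (nReturned === 0) return totalDocsExamined > 0;
  return totalDocsExamined / nReturned > maxRatio;
}
```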
N+1 with populate:
```javascript
// ❌ N+1 — 1 query for posts, N queries for authors
const posts = await Post.find({}).limit(20);
for (const post of posts) {
  post._author = await User.findById(post.authorId);
}

// ✓ Single $lookup equivalent
const posts = await Post.find({}).populate('authorId', 'name avatar').limit(20);
```
Prisma
Prisma has excellent built-in query performance tooling.
Enable query logging:
```javascript
const prisma = new PrismaClient({
  log: [
    { level: 'query', emit: 'event' },
    { level: 'warn', emit: 'stdout' },
  ],
});

prisma.$on('query', (e) => {
  if (e.duration > 100) { // log queries > 100ms
    console.warn(`Slow query (${e.duration}ms): ${e.query}`);
  }
});
```
Common Prisma performance mistakes:
```javascript
// ❌ Over-selecting — fetches all relations
const users = await prisma.user.findMany({
  include: {
    posts: { include: { comments: { include: { author: true } } } },
  },
});

// ✓ Select only what you need
const users = await prisma.user.findMany({
  select: {
    id: true,
    name: true,
    posts: {
      select: { id: true, title: true, _count: { select: { comments: true } } },
      take: 5,
      orderBy: { createdAt: 'desc' },
    },
  },
});
```
Prisma + N+1 (the select in a loop problem):
```javascript
// ❌ findFirst inside a loop
const enriched = await Promise.all(
  orders.map(async (order) => ({
    ...order,
    user: await prisma.user.findFirst({ where: { id: order.userId } }),
  }))
);

// ✓ Batch with findMany + a lookup map
const userIds = orders.map((o) => o.userId);
const users = await prisma.user.findMany({ where: { id: { in: userIds } } });
const userMap = Object.fromEntries(users.map((u) => [u.id, u]));
const enriched = orders.map((o) => ({ ...o, user: userMap[o.userId] }));
```
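The same batching pattern works for one-to-many relations (say, all orders per user) — instead of a one-to-one map, group the batched rows by foreign key. A small generic helper (illustrative):

```javascript
// Group batched rows by a foreign-key column, so each parent record
// can pick up its children in O(1) after a single findMany.
function groupBy(rows, key) {
  const groups = {};
  for (const row of rows) {
    const k = row[key];
    (groups[k] ||= []).push(row);
  }
  return groups;
}
```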
Sequelize
Enable query logging:
```javascript
const sequelize = new Sequelize(DATABASE_URL, {
  logging: (sql, timing) => {
    if (timing > 100) console.warn(`Slow query (${timing}ms): ${sql}`);
  },
  benchmark: true, // pass timings to the logging callback
});
```
Fix N+1 with eager loading:
```javascript
// ❌ N+1
const orders = await Order.findAll({ limit: 20 });
for (const order of orders) {
  order.user = await User.findByPk(order.userId);
}

// ✓ Eager loading (single JOIN query)
const orders = await Order.findAll({
  limit: 20,
  include: [{ model: User, attributes: ['name', 'email'] }],
});
```
Reading the Database Tab in APILens
After a few hours of production traffic, the Database tab shows you:
Top routes by query count: Which routes fire the most queries. A route with 30+ queries/request almost certainly has an N+1 issue.
Average query duration per route: Routes where the average DB time is high even with few queries — this points to missing indexes or large result sets.
N+1 flag: Automatically set when the same parameterized query pattern fires 3+ times in a single request. You can go directly to the route and fix the loop.
This gives you a prioritized list: fix the routes with the most queries first, then fix the ones with the slowest individual queries.
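The N+1 flag described above can be approximated with a simple heuristic. The dashboard's exact implementation isn't shown here, but the idea is: normalize each query into a pattern by stripping literal values, then count how often each pattern repeats within a single request:

```javascript
// Normalize queries (replace $n placeholders, string literals, and
// bare numbers with '?') and flag any pattern that repeats
// `threshold`+ times in one request — the classic N+1 signature.
function findNPlusOne(queries, threshold = 3) {
  const counts = {};
  for (const sql of queries) {
    const pattern = sql.replace(/\$\d+|'[^']*'|\b\d+\b/g, '?');
    counts[pattern] = (counts[pattern] || 0) + 1;
  }
  return Object.entries(counts)
    .filter(([, n]) => n >= threshold)
    .map(([pattern, count]) => ({ pattern, count }));
}
```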
A Complete Debug Workflow
When you find a slow route in production, here's the step-by-step:
1. Dashboard → Routes tab → sort by p95 latency
2. Click the slow route → check the DB query count:
   - More than 10 queries: likely N+1 → go to the route code and find the loop
   - 1–3 queries but slow: likely a missing index → run EXPLAIN ANALYZE
3. Run EXPLAIN ANALYZE on the specific query in a dev console
4. Look for a Seq Scan or a high examined-to-returned row ratio
5. Add the appropriate index
6. Re-deploy and watch the dashboard for the latency improvement
The dashboard confirms your fix worked in real production traffic — not just in your dev environment with 10 rows.
Quick Reference: DB Performance Checklist
| Issue | Symptom | Fix |
|---|---|---|
| N+1 | High query count per request | Replace loop with JOIN/populate/include |
| Missing index | Seq Scan, slow single query | CREATE INDEX on filter/sort columns |
| Over-fetching | High data volume, slow serialization | SELECT specific columns, add LIMIT |
| No connection pool | 20–100ms added to every request | Use pg.Pool, mongoose connection pooling |
| Cartesian product | JOIN without limit returns huge result | Add WHERE/LIMIT, check JOIN conditions |
| Unbounded queries | Query returns all rows in large table | Always add LIMIT on list queries |
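For the connection-pool row above, a minimal pg setup (values are illustrative): create one shared Pool at process startup rather than opening a connection per request.

```javascript
const { Pool } = require('pg');

// One shared pool for the whole process — reusing connections avoids
// paying the TCP + auth handshake (often 20-100ms) on every request.
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                  // illustrative: cap on open connections
  idleTimeoutMillis: 30000, // close connections idle for 30s
});

module.exports = pool;
```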
Summary
Slow database queries cause the majority of API performance problems in Node.js. Finding them doesn't require adding console.time() to every route — one line of middleware gives you query profiling across your entire API in production.
```bash
npm install auto-api-observe
```

```javascript
app.use(observe({ apiKey: process.env.APILENS_KEY }));
```
Then use the Database tab to find routes with high query counts (N+1) and slow individual queries (missing indexes). Fix the worst offenders and watch p95 latency drop.
- Dashboard: apilens.rest/dashboard
- GitHub: github.com/rahhuul/auto-api-observe
- npm: auto-api-observe
Free during beta. Zero dependencies. Works with Express and Fastify.