How to Monitor a Fastify API in Production (Zero Config)
Fastify is fast — but fast and invisible is worse than slow and observable. Here's how to add full API monitoring to Fastify in under 2 minutes with zero config: latency, errors, DB queries, and a real-time dashboard.
Why Fastify Developers Skip Monitoring (And Pay for It Later)
Fastify is excellent. It's 2–3x faster than Express on raw throughput, has first-class TypeScript support, and its plugin system is genuinely well-designed. But there's a trap: because Fastify is fast, developers assume things are fine in production without actually measuring.
"Fast" and "observable" are different things. A Fastify API can be fast on average while silently returning 500 errors on specific routes, running N+1 queries in a tight loop, or having one slow endpoint that tanks p99 for your entire service.
You need to measure to know.
What Fastify's Built-in Logging Gives You (And What It Doesn't)
Fastify ships with pino as its default logger, which is great. Pino is fast and structured. But here's what pino alone does and doesn't give you:
- ✓ Individual request logs (method, url, status, responseTime)
- ✗ Aggregated latency per route (p50/p95/p99)
- ✗ Error rate per endpoint over time
- ✗ Database query profiling
- ✗ N+1 detection
- ✗ A dashboard to visualize any of it
You'd need to ship logs to a log aggregation service, write your own queries, build charts, and set up alerts. That's most of a day's work for something you can have in 2 minutes.
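To make the gap concrete, here's roughly the kind of per-route aggregation you'd have to build yourself on top of raw pino logs. This is a minimal sketch of nearest-rank percentiles over collected response times, not code from any library:

```javascript
// Nearest-rank percentile over an already-sorted array of durations
function percentile(sorted, p) {
  if (sorted.length === 0) return null;
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, Math.min(sorted.length - 1, idx))];
}

// Turn a route's raw responseTime values into the p50/p95/p99 summary
// that individual log lines can't give you
function summarizeRoute(durationsMs) {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  return {
    p50: percentile(sorted, 50),
    p95: percentile(sorted, 95),
    p99: percentile(sorted, 99),
  };
}
```

And that's just percentiles — error rates, time windows, and charting are still on you.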
Adding Full Monitoring to Fastify in 2 Minutes
Step 1: Install
```bash
npm install auto-api-observe
```
Step 2: Register the plugin
auto-api-observe ships a native Fastify plugin — no adapter needed.
```javascript
import Fastify from 'fastify';
import { observeFastify } from 'auto-api-observe';

const app = Fastify({ logger: true });

// Register monitoring plugin — one line
await app.register(observeFastify, { apiKey: process.env.APILENS_KEY });

app.get('/users', async (req, reply) => {
  // assumes a `db` client is configured elsewhere in your app
  const users = await db.query('SELECT * FROM users LIMIT 20');
  return users;
});

await app.listen({ port: 3000 });
```
That's it. Every route is now monitored. No decorators, no hooks to write manually, no schema changes.
TypeScript setup
```typescript
import Fastify from 'fastify';
import { observeFastify } from 'auto-api-observe';

const app = Fastify({ logger: true });

await app.register(observeFastify, {
  apiKey: process.env.APILENS_KEY!,
  slowThreshold: 300, // flag routes slower than 300ms (default: 500ms)
  sampleRate: 1.0,    // capture 100% of requests (default)
});
```
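The sampleRate option is easiest to think of as a per-request coin flip. This sketch is my illustration of the idea, not the library's internals:

```javascript
// Keep a request's metrics with probability sampleRate (0.0–1.0)
function makeSampler(sampleRate) {
  return () => Math.random() < sampleRate;
}

const sampleAll = makeSampler(1.0);  // capture everything (the default)
const sampleNone = makeSampler(0.0); // capture nothing
```

At 1.0 you get exact counts; lowering it trades precision for overhead on very high-traffic services.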
What Gets Monitored Automatically
Once the plugin is registered, the following is instrumented without any further code changes:
Request metrics (per route)
- HTTP method + route pattern (e.g., GET /users/:id — not the actual ID value)
- Response status code
- Response time in milliseconds
- Request timestamp
Database queries (zero additional config)
The plugin automatically wraps these DB clients if they're present in your project:
- pg (node-postgres) — wraps Client.prototype.query
- mysql2 — wraps Connection.prototype.execute
- mongoose — wraps Model.prototype methods
- prisma — wraps $queryRaw, findMany, findFirst, etc.
- knex — wraps knex.raw and builder methods
- sequelize — wraps Model.findAll, findOne, create, etc.
- ioredis — wraps Redis.prototype.call
- better-sqlite3 — wraps Statement.prototype.run/get/all
For each DB call you get: query count, total duration, and which route triggered it.
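The wrapping technique itself is plain monkey-patching. Here's a sketch of the pattern on a stand-in client object (the real plugin patches driver prototypes like pg's Client.prototype.query; the names `dbStats`, `wrapQuery`, and `fakeClient` are mine, for illustration):

```javascript
const dbStats = { count: 0, totalMs: 0 };

// Replace a client's query method with a timed wrapper that
// delegates to the original and records count + duration
function wrapQuery(client) {
  const original = client.query;
  client.query = async function (...args) {
    const start = process.hrtime.bigint();
    try {
      return await original.apply(this, args);
    } finally {
      dbStats.count += 1;
      dbStats.totalMs += Number(process.hrtime.bigint() - start) / 1e6;
    }
  };
  return client;
}

// Stand-in for a real DB driver, so the sketch is self-contained
const fakeClient = {
  async query(sql) { return { rows: [], sql }; },
};
wrapQuery(fakeClient);
```

Because the wrapper delegates with the same arguments and return value, application code never notices it's there.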
Distributed tracing
The plugin reads x-trace-id from incoming request headers. If absent, it generates a UUID. The trace ID is available in Fastify's request context and propagated to outgoing HTTP calls if you use the built-in http/https modules.
Monitoring Fastify With Hooks vs Middleware
Fastify doesn't have Express-style middleware — it has lifecycle hooks (onRequest, preHandler, onSend, onResponse). The observeFastify plugin registers hooks on the right lifecycle events:
- onRequest → capture start timestamp, read/generate traceId
- onResponse → calculate duration, ship metric to buffer
- onError → capture 5xx errors with route context
This means monitoring runs after routing (so you get the parameterized route pattern like /users/:id rather than /users/123) and has zero impact on your route handlers.
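The timing half of that hook pairing reduces to two small functions. This is an illustrative sketch only; in a real Fastify plugin they'd be registered with app.addHook('onRequest', ...) and app.addHook('onResponse', ...):

```javascript
// onRequest: stash a high-resolution start timestamp on the request
function onRequest(req) {
  req._observeStart = process.hrtime.bigint();
}

// onResponse: compute the request duration in milliseconds
function onResponse(req) {
  return Number(process.hrtime.bigint() - req._observeStart) / 1e6;
}
```

Using process.hrtime.bigint() rather than Date.now() keeps sub-millisecond precision, which matters when most routes finish in single-digit milliseconds.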
Reading Your Fastify Metrics
After deploying, go to apilens.rest/dashboard:
Overview: Total requests in the last 24h, error rate, average latency timeline.
Routes tab: Every Fastify route ranked by p95 latency. You can immediately see which routes are your bottlenecks. Click any route for its latency histogram, status code breakdown, and DB query count.
Slow tab: Routes that exceeded your slowThreshold. At 300ms, a route showing up here is a genuine user experience problem.
Errors tab: 4xx and 5xx grouped by route. If POST /payments has a 12% error rate, you'll see it here before a user tweets about it.
Fastify Monitoring Options Compared
| Approach | Setup Time | DB Profiling | Dashboard | Cost |
|---|---|---|---|---|
| Pino logs only | 0 min | ✗ | ✗ | Free |
| Pino + ELK stack | 2–4 hours | ✗ | Manual | Infrastructure cost |
| Datadog + dd-trace | 30–60 min | ✓ | ✓ | $23/host/mo |
| Prometheus + Grafana | 2–4 hours | Manual | Self-hosted | Free + ops time |
| APILens | < 2 min | ✓ Auto | ✓ Hosted | Free (beta) |
A Realistic Example: What You'll Find in Week One
Here's what developers typically discover within the first week of adding Fastify monitoring:
Day 1: One route (GET /feed) has p95 of 2.1s. Everything else is under 100ms. The feed was always slow but no one knew how slow.
Day 3: The GET /feed route fires 47 DB queries per request. Classic N+1 — the route fetches a list of posts, then queries for each post's author individually.
Day 5: After fixing the N+1 with a JOIN, GET /feed p95 drops to 120ms. The dashboard confirms the fix immediately.
This is the value of APM: not alerting you when things are on fire, but showing you the quiet fires you didn't know were burning.
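The N+1 shape from that feed example is worth seeing in miniature. This sketch uses an in-memory stand-in for the database so the query counts are visible; everything here (the data, `getAuthor`, `getAuthors`) is hypothetical, for illustration:

```javascript
const authors = new Map([[1, 'ada'], [2, 'bob']]);
const posts = [
  { id: 10, authorId: 1 },
  { id: 11, authorId: 2 },
  { id: 12, authorId: 1 },
];
let queryCount = 0;

async function getAuthor(id) { queryCount += 1; return authors.get(id); }
async function getAuthors(ids) { queryCount += 1; return ids.map((id) => authors.get(id)); }

// N+1: one author query per post (3 posts → 3 queries, plus the post fetch)
async function feedNPlusOne() {
  return Promise.all(
    posts.map(async (p) => ({ ...p, author: await getAuthor(p.authorId) }))
  );
}

// Fixed: one batched author lookup — the in-memory equivalent of a JOIN
async function feedBatched() {
  const names = await getAuthors(posts.map((p) => p.authorId));
  return posts.map((p, i) => ({ ...p, author: names[i] }));
}
```

With 20 posts per page the same shape fires 20+ queries per request, which is exactly the pattern a per-route DB query count surfaces.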
Summary
Fastify's performance advantage means nothing if you don't know which routes are actually slow or which ones are silently erroring. Full Fastify monitoring takes under 2 minutes:
```bash
npm install auto-api-observe
```

```javascript
await app.register(observeFastify, { apiKey: process.env.APILENS_KEY });
```
- Dashboard: apilens.rest
- GitHub: github.com/rahhuul/auto-api-observe
- Price: Free during beta
*Are you running Fastify in production? I'd love to hear what monitoring setup you use.*