Node.js API monitoring.
Without the setup overhead.
Datadog needs a host agent and a $23/host/month bill. New Relic wants pipelines and configuration. By the time you've set either up, the slow endpoint that cost you users last week is still undiagnosed.
auto-api-observe is one npm install. Paste one line before your first route and your entire API is instrumented: requests, database queries, outbound HTTP calls, and process metrics — automatically traced, masked, and shipped to your dashboard in 60 seconds.
How It Works
Install the package
One `npm install auto-api-observe`. No agents, no sidecars, no configuration files.
Add one line of middleware
Call `observe({ apiKey: process.env.APILENS_KEY })` before your first route. That's it.
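In an Express app, that wiring might look like the sketch below. The `observe` named export, the route, and the port are illustrative here; whether your framework needs anything beyond the single `observe()` call is covered in the framework-specific docs.

```javascript
const express = require('express');
const { observe } = require('auto-api-observe');

// One call before any routes is all the wiring needed.
// APILENS_KEY is the API key from your dashboard.
observe({ apiKey: process.env.APILENS_KEY });

const app = express();

// Route handlers stay untouched; requests are captured automatically.
app.get('/users/:id', (req, res) => {
  res.json({ id: req.params.id });
});

app.listen(3000);
```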
Watch your dashboard fill up
Every request, query, outbound call, and memory snapshot appears in your cloud dashboard — in real time.
Request Tracking
Every HTTP request your server handles is automatically captured — no code changes to your route handlers. You get structured JSON with everything you need to debug latency spikes, track error rates, and trace calls across services.
Structured Request Logs
Every request emits a structured JSON entry: method, route pattern, status, latency, trace ID, IP, User-Agent — automatically. Route patterns (`/users/:id`) are normalised so `/users/1` and `/users/999` both roll up to the same metric.
{
  "method": "GET",
  "route": "/users/:id",
  "path": "/users/42",
  "status": 200,
  "latency": 85,
  "latencyMs": "85ms",
  "traceId": "a1b2c3d4-..."
}
Slow Request Detection
Requests over your configurable threshold are flagged `slow: true` and surfaced separately on your dashboard. P95 latency tracked per route pattern — not per raw URL — so you spot the slow endpoint, not just a slow request.
observe({
  slowThreshold: 800, // ms — default 1000
})
Distributed Trace IDs
A UUID trace ID is generated per request and injected into the `x-trace-id` response header. Any downstream service that receives that header can forward it, letting you correlate logs across a full call chain without any extra infrastructure.
const { getContext } = require('auto-api-observe');
// In any handler — same request, any depth:
const traceId = getContext()?.traceId;
console.log(traceId); // "a1b2c3d4-..."
Route-Level Analytics
Metrics are aggregated by route pattern on the local `getMetrics()` store and on your cloud dashboard. Expose your own internal metrics endpoint or just watch the dashboard — both update in real time.
const { getMetrics } = require('auto-api-observe');
app.get('/internal/metrics', (req, res) => {
  res.json(getMetrics());
});
Database Profiling
auto-api-observe patches 9 database libraries at startup so every query your app runs is timed and attached to the request that triggered it. You see exactly how many queries each request fires, how long each one took, and which library ran it — without touching a single query in your codebase.
Auto DB Instrumentation
Monkey-patches 9 libraries at startup with no code changes: `pg`, `mysql2`, `mongoose`, `@prisma/client`, `knex`, `sequelize`, `ioredis`, `better-sqlite3`, and `node-redis`. Per-query timing, masked SQL, and source library name — all captured automatically.
// Patched at startup automatically:
// pg · mysql2 · mongoose · @prisma/client
// knex · sequelize · ioredis · better-sqlite3 · node-redis
observe({ autoInstrument: false }) // opt-out if needed
N+1 Detection
See exactly how many DB queries each request fires. A list page that triggers 47 individual SELECT queries is immediately visible — the total call count, total time, and slowest individual query are all in the same log entry as the originating request.
"dbCalls": {
  "calls": 47,
  "totalTime": 380,
  "slowestQuery": 45,
  "queries": [
    { "query": "SELECT ...", "source": "pg", "queryTime": 8 },
    ...
  ]
}
Manual DB Tracking
For databases not in the auto-instrumented list — ClickHouse, CockroachDB, custom data stores — wrap any query manually with `recordDbQuery()`. Works the same way: per-request timing, attached to the current trace.
const { recordDbQuery } = require('auto-api-observe');
const t = Date.now();
const rows = await customDb.exec(sql);
recordDbQuery({
  query: sql,
  source: 'clickhouse',
  queryTime: Date.now() - t,
});
Outbound HTTP Tracking
Most API latency problems aren't in your code — they're in the third-party APIs your server calls. auto-api-observe automatically captures every outbound HTTP call your server makes and attaches it to the triggering request, so you can see exactly which upstream service is adding latency.
Auto Outbound Tracking
Every `fetch`, `axios`, and `undici` call your server makes is captured per-request. The outbound URL, method, HTTP status, and latency are attached to the same log entry as the inbound request that triggered them — so you trace the full call chain in one log line.
observe({ autoInstrumentOutbound: true })
// Attached to the inbound request entry:
"outboundCalls": [
  {
    "method": "POST",
    "url": "https://api.stripe.com/v1/charges",
    "status": 200,
    "latency": 340
  }
]
Outbound URL Masking
Sensitive query params — `token`, `api_key`, `password`, `secret`, and 15 more — are stripped from outbound URLs before they are logged or shipped to the dashboard. Your third-party API keys never leave your server in plaintext.
// Your code sends:
fetch('https://api.example.com/data?api_key=sk_live_secret')
// Logged as:
"url": "https://api.example.com/data"
// api_key stripped automatically, case-insensitive
Security & Privacy
Observability tools that ship raw request context to the cloud are a security liability. auto-api-observe redacts sensitive fields automatically before any data leaves your server — authorization headers, passwords, tokens, and 15+ other patterns are replaced with `[REDACTED]` at collection time.
Sensitive Field Masking
Any field attached via `addField()` with a key matching a sensitive pattern is replaced with `[REDACTED]` before shipping. The redaction list covers `authorization`, `password`, `token`, `api_key`, `cookie`, `secret`, `credit_card`, `ssn`, `private_key`, and 15 more — all case-insensitive.
addField('userId', 'u_123'); // shipped as-is
addField('authorization', 'Bearer x'); // → "[REDACTED]"
addField('password', 'hunter2'); // → "[REDACTED]"
addField('api_key', 'sk_live_'); // → "[REDACTED]"
// Also masked: token · cookie · secret · credit_card
// ssn · private_key · client_secret · access_token
Custom Fields via AsyncLocalStorage
Attach any business context to the current request with `addField()`. Fields are stored in AsyncLocalStorage — completely isolated between concurrent requests, no risk of data leaking between users. Works the same on every framework.
const { addField } = require('auto-api-observe');
app.get('/orders', async (req, res) => {
  addField('userId', req.user.id); // scoped to this request
  addField('plan', req.user.plan);
  addField('country', req.geo?.country);
  const orders = await Order.findAll();
  res.json(orders);
});
Process Monitoring
Request tracing shows you what happened per-request. Process monitoring shows you what your Node.js process is doing over time — memory trends, CPU load, and startup metadata that let you correlate deploys with performance changes.
Memory & CPU Metrics
`rss`, `heapUsed`, `heapTotal`, `external`, CPU `user`/`system` times, load average, and free memory — sampled on a configurable interval and shipped alongside your request data. Useful for catching memory leaks before they cause OOM restarts.
observe({
  processMetrics: 30_000, // every 30s — set false to disable
})
// Shipped automatically:
{
  "method": "_process",
  "route": "metrics",
  "rss": 45678592,
  "heapUsed": 23456789
}
Startup Event
Node version, platform, architecture, hostname, PID, and total memory are shipped once on process start. When you see a latency spike in your dashboard, you can correlate it with a deploy that changed the Node version or moved to a different host.
// Shipped once on process start:
{
  "method": "_process",
  "route": "startup",
  "nodeVersion": "v20.11.0",
  "platform": "linux",
  "hostname": "api-pod-3",
  "pid": 1234
}
Unhandled Error Capture
`uncaughtException` and `unhandledRejection` are caught and shipped before the process potentially crashes. You get the error name, message, and stack in your dashboard — so you know what went wrong even if the process didn't survive to log it.
observe({ captureUnhandledErrors: true })
// Shipped on crash:
{
  "method": "_process",
  "route": "uncaughtException",
  "status": 500,
  "errorName": "TypeError",
  "errorMessage": "Cannot read properties of null"
}
FAQ
Does auto-api-observe work with TypeScript?
Yes. The package ships full TypeScript types. Import it with `import observe from 'auto-api-observe'` in a `.ts` file and all options, return values, and helper functions are fully typed.
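A sketch of the TypeScript usage, assuming the default export and the option names shown elsewhere on this page; the exact exported type names are not documented here, so treat the shapes as illustrative.

```typescript
// Assumes a default export plus the named helpers used in the
// JavaScript examples on this page.
import observe, { getMetrics, addField } from 'auto-api-observe';

observe({
  apiKey: process.env.APILENS_KEY,
  slowThreshold: 800,     // number (ms)
  processMetrics: 30_000, // number | false per the docs above
});

// Helper return values are typed too:
const metrics = getMetrics();
addField('userId', 'u_123');
```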
What is the performance overhead?
Negligible. Telemetry is collected synchronously within the existing request lifecycle (no extra async work on the hot path) and shipped asynchronously via a buffered queue. Median overhead on benchmarked Express apps is under 0.3ms per request.
What happens if the APILens cloud server is unreachable?
Requests are buffered in memory and retried with exponential backoff. If the buffer fills, the oldest entries are dropped. Your app continues to run — there is no connection requirement on the request path. The `getMetrics()` local store always works regardless of cloud connectivity.
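The drop-oldest buffer and backoff behaviour can be illustrated with a small self-contained sketch. This is not the library's actual internals, just the pattern the answer describes: a bounded in-memory queue plus exponentially growing retry delays.

```javascript
// Illustrative sketch of a bounded telemetry buffer with
// drop-oldest overflow and exponential-backoff retry delays.
class TelemetryBuffer {
  constructor(maxSize = 1000) {
    this.maxSize = maxSize;
    this.entries = [];
  }

  push(entry) {
    if (this.entries.length >= this.maxSize) {
      this.entries.shift(); // buffer full: drop the oldest entry
    }
    this.entries.push(entry);
  }

  // Delay before the nth retry: 1s, 2s, 4s, ... capped at 30s.
  static backoffDelay(attempt) {
    return Math.min(1000 * 2 ** attempt, 30_000);
  }
}

const buf = new TelemetryBuffer(2);
buf.push({ id: 1 });
buf.push({ id: 2 });
buf.push({ id: 3 }); // { id: 1 } is dropped
console.log(buf.entries.map((e) => e.id)); // [ 2, 3 ]
console.log(TelemetryBuffer.backoffDelay(3)); // 8000
```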
Can I use this without sending data to the cloud?
Yes — omit the `apiKey` option and all data stays local. Use `getMetrics()` to read it programmatically, or expose it via your own internal route. The DB profiling, masking, and trace ID features all work without an API key.
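Local-only mode might look like this in Express; it combines the `observe()` call and the `getMetrics()` route shown earlier on this page, with the `apiKey` option simply left out.

```javascript
const express = require('express');
const { observe, getMetrics } = require('auto-api-observe');

// No apiKey: nothing is shipped to the cloud, all data stays local.
observe({});

const app = express();

// Read the local aggregated store via your own internal route.
app.get('/internal/metrics', (req, res) => {
  res.json(getMetrics());
});

app.listen(3000);
```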
Does database instrumentation slow down my queries?
No. The monkey-patch wraps each query in a `Date.now()` call before and after — that's it. There is no query interception, no connection pool changes, and no proxy layer. Overhead per query is under 0.05ms.
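The wrapping approach the answer describes can be sketched in a few lines. This is not the library's actual code, just the general shape: the patched function records a `Date.now()` delta around each call and stores it alongside the query text and source library.

```javascript
// Illustrative sketch of timing a query function by wrapping it.
const recorded = [];

function wrapQuery(queryFn, source) {
  return async function (...args) {
    const start = Date.now();
    try {
      return await queryFn(...args); // original behaviour is unchanged
    } finally {
      recorded.push({
        query: String(args[0]),
        source,
        queryTime: Date.now() - start, // elapsed ms for this query
      });
    }
  };
}

// Usage with a stand-in async query function:
const fakeQuery = async (sql) => [{ ok: true }];
const timedQuery = wrapQuery(fakeQuery, 'pg');

timedQuery('SELECT 1').then((rows) => {
  console.log(rows); // [ { ok: true } ]
  console.log(recorded[0].source); // "pg"
});
```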
Is this a replacement for Datadog or New Relic?
For Node.js API monitoring specifically, yes — it covers request tracing, error tracking, latency analysis, DB profiling, and process metrics without agents, configuration, or a $23/host/month bill. It doesn't cover infrastructure monitoring, APM agents for other runtimes, or log aggregation pipelines.
Ready to see what your API is actually doing?
Free during beta. No credit card. No agent install. Pick your framework and be observing in under 60 seconds.