Core Concepts

Performance

evlog adds ~3µs per request — faster than pino, consola, and winston in most scenarios, while emitting richer, more useful events.

evlog adds ~3µs of overhead per request — that's 0.003ms, orders of magnitude below any HTTP framework or database call. Performance is tracked on every pull request via CodSpeed.

evlog vs alternatives

All benchmarks run with JSON output to no-op destinations. pino writes to /dev/null (sync), winston writes to a no-op stream, consola uses a no-op reporter, evlog uses silent mode.

Results

| Scenario | evlog | pino | consola | winston |
| --- | --- | --- | --- | --- |
| Simple string log | 1.96M ops/s | 1.06M | 2.67M | 977.6K |
| Structured (5 fields) | 1.74M ops/s | 705.6K | 1.75M | 440.6K |
| Deep nested log | 1.75M ops/s | 507.8K | 1.04M | 202.5K |
| Child / scoped logger | 1.85M ops/s | 871.0K | 272.2K | 568.5K |
| Wide event lifecycle | 1.68M ops/s | 209.0K | — | 114.6K |
| Burst (100 logs) | 19.1K ops/s | 10.0K | 40.8K | 7.6K |
| Logger creation | 20.52M ops/s | 7.36M | 299.3K | 5.43M |

evlog wins 4 out of 7 head-to-head comparisons — and the wins that matter most are decisive: 8x faster than pino in the wide event lifecycle, 2.8x faster logger creation, and 3.5x faster deep nested logging. consola edges ahead on simple strings and burst (it uses a no-op reporter with no serialization), but evlog produces a single correlated event per request where traditional loggers emit N separate lines.

Why this matters: in the wide event lifecycle (the real-world pattern), evlog is 8x faster than pino and 14.7x faster than winston — while sending 75% less data to your log drain and giving you one queryable event instead of 4 disconnected lines.

What is the "wide event lifecycle"?

This benchmark simulates a real API request:

```ts
const log = createLogger({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
log.set({ user: { id: 'usr_123', plan: 'pro' } })
log.set({ cart: { items: 3, total: 9999 } })
log.set({ payment: { method: 'card', last4: '4242' } })
log.emit({ status: 200 })
```

Same CPU cost, but evlog gives you everything in one place.

Why is evlog faster?

The numbers above aren't magic — they come from deliberate architectural choices:

In-place mutations, not copies. log.set() writes directly into the context object via a recursive mergeInto function. Other loggers clone objects on every call (object spread, Object.assign). evlog never allocates intermediate objects during context accumulation.
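The section above names a recursive `mergeInto`. Here is a minimal sketch of the in-place merge idea — illustrative only, not evlog's actual source:

```ts
// Illustrative sketch of an in-place recursive merge (not evlog's actual
// implementation). New fields are written directly into the existing
// context object, so no intermediate copies are allocated per set() call.
type Ctx = Record<string, unknown>

function mergeInto(target: Ctx, source: Ctx): void {
  for (const key of Object.keys(source)) {
    const value = source[key]
    const existing = target[key]
    if (
      value !== null && typeof value === 'object' && !Array.isArray(value) &&
      existing !== null && typeof existing === 'object' && !Array.isArray(existing)
    ) {
      mergeInto(existing as Ctx, value as Ctx) // recurse into nested objects
    } else {
      target[key] = value // scalars, arrays, and new keys overwrite in place
    }
  }
}

const requestCtx: Ctx = { user: { id: 'usr_123' } }
mergeInto(requestCtx, { user: { plan: 'pro' }, cart: { items: 3 } })
// requestCtx: { user: { id: 'usr_123', plan: 'pro' }, cart: { items: 3 } }
```

Because `target` is mutated directly, the hot path allocates nothing — unlike an object-spread merge, which builds a new object on every call.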

No serialization until drain. Context stays as plain JavaScript objects throughout the request lifecycle. JSON.stringify runs exactly once, at emit time. Traditional loggers serialize on every .info() call — that's 4x serialization for 4 log lines.
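The serialize-once pattern can be sketched like this (illustrative names, not evlog's API):

```ts
// Illustrative: context accumulates as plain objects, and JSON.stringify
// runs exactly once, when the event is emitted.
const context: Record<string, unknown> = {}

function set(fields: Record<string, unknown>): void {
  Object.assign(context, fields) // plain object writes, no serialization
}

function emit(fields: Record<string, unknown>): string {
  // The single stringify for the whole request lifecycle happens here.
  return JSON.stringify({ ...context, ...fields })
}

set({ user: 'usr_123' })
set({ cart: 3 })
const line = emit({ status: 200 })
// line === '{"user":"usr_123","cart":3,"status":200}'
```

A per-call logger would instead run `JSON.stringify` inside every `set`-equivalent, paying the serialization cost N times for N log lines.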

Lazy allocation. Timestamps, sampling context, and override objects are only created when actually needed. If tail sampling is disabled (the common case), its context object is never allocated. The Date instance used for ISO timestamps is reused across calls.
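The reused-Date trick looks roughly like this (a sketch of the idea, not evlog's code):

```ts
// Illustrative: one shared Date instance is mutated on each call instead of
// allocating a fresh Date for every timestamp.
const sharedDate = new Date()

function isoNow(): string {
  sharedDate.setTime(Date.now()) // update in place, no new allocation
  return sharedDate.toISOString()
}

isoNow() // e.g. '2025-01-01T12:00:00.000Z'
```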

One event, not N lines. For a typical request, pino emits 4+ JSON lines that all need serializing, transporting, and indexing. evlog emits one. That's 75% less work for your log drain, fewer bytes on the wire, and one row to query instead of four.

RegExp caching. Glob patterns (used in sampling and route matching) are compiled once and cached. Repeated evaluations hit the cache instead of recompiling.
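A compile-once pattern cache can be sketched as follows (the glob translation here is deliberately minimal and illustrative):

```ts
// Illustrative compile-once cache: a glob pattern is converted to a RegExp
// the first time it is seen; later lookups reuse the compiled instance.
const regexCache = new Map<string, RegExp>()

function globToRegExp(glob: string): RegExp {
  const cached = regexCache.get(glob)
  if (cached) return cached
  // Minimal glob support: escape regex metacharacters, then turn * into .*
  const source = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&')
    .replace(/\*/g, '.*')
  const re = new RegExp(`^${source}$`)
  regexCache.set(glob, re)
  return re
}

globToRegExp('/api/*').test('/api/checkout') // compiled once, reused after
```

Repeated evaluations of the same pattern cost one `Map` lookup instead of a RegExp compilation.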

Real-world overhead

For a typical API request:

| Component | Cost |
| --- | --- |
| Logger creation | 49ns |
| 3x set() calls | 63ns |
| emit() | 570ns |
| Sampling | 23ns |
| Enricher pipeline | 2.05µs |
| Total | ~2.8µs |

For context, a database query takes 1-50ms and an HTTP call 10-500ms. evlog's overhead is invisible by comparison.

Bundle size

Every entry point is tree-shakeable. You only pay for what you import.

| Entry | Gzip |
| --- | --- |
| logger | 3.78 kB |
| utils | 1.41 kB |
| error | 1.21 kB |
| enrichers | 1.92 kB |
| pipeline | 1.35 kB |
| browser | 1.21 kB |

A typical Nuxt setup loads logger + utils — about 5.2 kB gzip. Bundle size is tracked on every PR and compared against the main baseline.

Detailed benchmarks

Logger creation

| Operation | ops/sec | Mean |
| --- | --- | --- |
| createLogger() (no context) | 19.35M | 52ns |
| createLogger() (shallow context) | 20.38M | 49ns |
| createLogger() (nested context) | 19.10M | 52ns |
| createRequestLogger() | 19.27M | 52ns |

Context accumulation (log.set())

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Shallow merge (3 fields) | 9.54M | 105ns |
| Shallow merge (10 fields) | 4.78M | 209ns |
| Deep nested merge | 8.40M | 119ns |
| 4 sequential calls | 7.53M | 133ns |

Event emission (log.emit())

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Emit minimal event | 1.75M | 570ns |
| Emit with context | 1.76M | 569ns |
| Full lifecycle (create + 3 sets + emit) | 1.69M | 592ns |
| Emit with error | 66.1K | 15.13µs |
Emit with error is slower because Error.captureStackTrace() is an expensive V8 operation (~15µs). This cost is incurred only when an event actually carries an error.
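For intuition, the expensive part is capturing the stack, not serializing the rest of the event — a hedged sketch (the helper name is hypothetical):

```ts
// Illustrative: V8's Error.captureStackTrace (available in Node.js) records
// the call stack at the point of the call. Materializing the stack costs
// microseconds, while a plain field write costs nanoseconds.
function makeError(message: string): Error {
  const err = new Error(message)
  // The hot spot: cast to any because captureStackTrace is a V8/Node
  // extension, not part of the standard ErrorConstructor type.
  ;(Error as any).captureStackTrace?.(err, makeError)
  return err
}
```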

Payload scaling

| Payload | ops/sec | Mean |
| --- | --- | --- |
| Small (2 fields) | 1.76M | 567ns |
| Medium (50 fields) | 555.5K | 1.80µs |
| Large (200 nested fields) | 115.7K | 8.65µs |

Sampling

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Tail sampling (shouldKeep) | 43.76M | 23ns |
| Full emit with head + tail | 7.57M | 132ns |

Enrichers

| Enricher | ops/sec | Mean |
| --- | --- | --- |
| User Agent (Chrome) | 2.57M | 389ns |
| Geo (Vercel) | 5.32M | 188ns |
| Request Size | 24.16M | 41ns |
| Trace Context | 4.86M | 206ns |
| All combined | 487.2K | 2.05µs |

Error handling

| Operation | ops/sec | Mean |
| --- | --- | --- |
| createError() | 226.9K | 4.41µs |
| parseError() | 43.92M | 23ns |
| Round-trip (create + parse) | 227.6K | 4.39µs |

Methodology & trust

Can you trust these numbers?

Every benchmark on this page is open source and reproducible. The benchmark files live in packages/evlog/bench/ — you can read the exact code, run it on your machine, and verify the results.

All libraries are tested under the same conditions:

  • Same output mode: JSON to a no-op destination (no disk or network I/O measured)
  • Same warmup: each benchmark runs for 500ms after JIT stabilization
  • Same tooling: Vitest bench powered by tinybench
  • Same machine: when comparing libraries, all benchmarks run in the same process on the same hardware

CI regression tracking

Performance regressions are tracked on every pull request via two systems:

  • CodSpeed runs all benchmarks using CPU instruction counting (not wall-clock timing). This eliminates noise from shared CI runners and produces deterministic, reproducible results. Regressions are flagged directly on the PR.
  • Bundle size comparison measures all entry points against the main baseline and posts a size delta report as a PR comment.

Run it yourself

```sh
cd packages/evlog

bun run bench                          # all benchmarks
bunx vitest bench bench/comparison/    # vs alternatives only
bun bench/scripts/size.ts              # bundle size
```