
Benchmarks

Comparative performance analysis of @oofp/core against fp-ts, Effect, neverthrow, purify-ts, OOP patterns, and imperative code.


  • Runtime: Node.js 20, Apple Silicon (ARM64)
  • Tool: Vitest bench (built on tinybench)
  • Warm-up: Each benchmark runs a warm-up phase before measurement
  • Iterations: Vitest automatically determines iteration count for statistical significance
  • Metric: Operations per second (ops/sec), higher is better

All implementations perform identical work in each benchmark. The only difference is the abstraction used.

| Library | Version | Style | Type |
| --- | --- | --- | --- |
| @oofp/core | workspace | Pipe-based (data-last) | Either<E, A> tagged union |
| fp-ts | 2.x | Pipe-based (data-last) | Either<E, A> tagged union |
| Effect | 3.x | Pipe-based (dual API) | Either<A, E> (reversed params) |
| neverthrow | 8.x | Method chaining | Result<T, E> class |
| purify-ts | 2.x | Method chaining | Either<L, R> class |
| OOP Result | N/A | Method chaining | Hand-rolled Result<T, E> class (theoretical baseline) |
| Imperative | N/A | try/catch, null checks | Plain objects + exceptions |

Note on OOP Result: The hand-rolled Result class is included as a theoretical performance ceiling — the fastest possible FP-style error handling with minimal abstraction. It is not a published library and lacks everything that makes a real FP library useful: no npm package, no ecosystem, no advanced type inference, no composable combinators beyond basic map/chain, and no native support for concurrency, fire-and-forget, error accumulation, middleware, or Reader-based dependency injection. Every advanced pattern must be manually reimplemented per project. It represents the answer to “how fast could this be if I hand-rolled everything?” — a useful baseline, but not a practical choice for production applications.

```sh
git clone https://github.com/thexpert507/oofp.git
cd oofp
pnpm install
pnpm --filter @oofp/benchmarks bench
```

Measures the cost of creating success and failure values.

Success values (Right):

| Library | ops/sec | Relative |
| --- | --- | --- |
| purify-ts Right(42) | 26,517K | 1.00x |
| effect Either.right(42) | 25,886K | 1.02x |
| imperative { ok: true } | 25,829K | 1.03x |
| OOP Result.ok(42) | 25,771K | 1.03x |
| @oofp/core E.right(42) | 20,686K | 1.28x |
| fp-ts E.right(42) | 12,340K | 2.15x |
| neverthrow ok(42) | 6,793K | 3.90x |

Failure values (Left):

| Library | ops/sec | Relative |
| --- | --- | --- |
| purify-ts Left(err) | 26,175K | 1.00x |
| effect Either.left(err) | 25,963K | 1.01x |
| imperative { ok: false } | 25,708K | 1.02x |
| OOP Result.err(err) | 25,282K | 1.04x |
| @oofp/core E.left(err) | 20,970K | 1.25x |
| fp-ts E.left(err) | 12,033K | 2.18x |
| neverthrow err(err) | 6,838K | 3.83x |

All libraries create values in the tens of millions of ops/sec. The differences are negligible in practice — even at the “slowest” (neverthrow at 6.8M), you can create 6.8 million Result values per second.

The cost comes down to object shape:

  • purify-ts, effect, OOP, imperative: Simple object or class instantiation
  • @oofp/core, fp-ts: Tagged unions ({ tag: 'Right', value: 42 } in @oofp/core, { _tag: 'Right', right: 42 } in fp-ts) — slightly more allocation than a bare { ok: true } object, but still very fast
  • neverthrow: Class-based new Ok(42) with prototype chain — more overhead from constructor + prototype setup
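The three object layouts can be sketched directly (illustrative shapes inferred from the table above; the libraries' actual internals may differ):

```ts
// Tagged union (@oofp/core / fp-ts style): plain object with a discriminant field
type Either<E, A> = { tag: "Left"; value: E } | { tag: "Right"; value: A };
const right = <A>(value: A): Either<never, A> => ({ tag: "Right", value });

// Class-based (neverthrow / purify-ts style): constructor + prototype chain
class Ok<T> {
  constructor(readonly value: T) {}
  isOk(): boolean { return true; }
}

// Imperative: the bare object a try/catch-based function would return
const bare = <T>(value: T) => ({ ok: true as const, value });
```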

A realistic 5-step pipeline: parse string to number -> validate range -> double -> validate even -> format to string.

Each step uses the library’s idiomatic map/chain/flatMap pattern.
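The pipeline can be sketched with a minimal tagged-union Either (hypothetical helpers for illustration, not @oofp/core's actual implementation):

```ts
// Minimal Either with chain/map, enough to express the benchmarked pipeline:
// parse -> validate range -> double -> validate even -> format.
type Either<E, A> = { tag: "Left"; error: E } | { tag: "Right"; value: A };
const right = <A>(value: A): Either<never, A> => ({ tag: "Right", value });
const left = <E>(error: E): Either<E, never> => ({ tag: "Left", error });

const chain = <E, A, B>(f: (a: A) => Either<E, B>) =>
  (e: Either<E, A>): Either<E, B> => (e.tag === "Right" ? f(e.value) : e);
const map = <E, A, B>(f: (a: A) => B) =>
  (e: Either<E, A>): Either<E, B> => (e.tag === "Right" ? right(f(e.value)) : e);

const pipe = (a: unknown, ...fns: Array<(x: any) => any>) =>
  fns.reduce((acc, f) => f(acc), a);

const parse = (s: string): Either<string, number> => {
  const n = Number(s);
  return Number.isNaN(n) ? left("not a number") : right(n);
};

const result = pipe(
  parse("21"),
  chain((n: number) => (n > 0 && n < 100 ? right(n) : left("out of range"))),
  map((n: number) => n * 2),
  chain((n: number) => (n % 2 === 0 ? right(n) : left("odd"))),
  map((n: number) => `value: ${n}`),
) as Either<string, string>;
// result: Right("value: 42")
```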

| Library | ops/sec | Relative |
| --- | --- | --- |
| imperative | 31,935K | 1.00x |
| OOP Result | 29,618K | 1.08x |
| purify-ts | 27,866K | 1.15x |
| effect | 10,316K | 3.10x |
| @oofp/core | 6,159K | 5.19x |
| neverthrow | 2,815K | 11.35x |
| fp-ts | 963K | 33.16x |

| Library | ops/sec | Relative | vs own success |
| --- | --- | --- | --- |
| OOP Result | 31,235K | 1.00x | 1.05x faster |
| purify-ts | 29,824K | 1.05x | 1.07x faster |
| effect | 11,258K | 2.77x | 1.09x faster |
| @oofp/core | 9,263K | 3.37x | 1.50x faster |
| neverthrow | 6,627K | 4.71x | 2.35x faster |
| fp-ts | 1,074K | 29.09x | 1.11x faster |
| imperative | 432K | 72.23x | 73.86x slower |

| Library | ops/sec | Relative | vs own success |
| --- | --- | --- | --- |
| OOP Result | 28,607K | 1.00x | 0.97x |
| purify-ts | 27,058K | 1.06x | 0.97x |
| effect | 10,348K | 2.76x | 1.00x |
| @oofp/core | 7,690K | 3.72x | 1.25x faster |
| neverthrow | 3,665K | 7.81x | 1.30x faster |
| fp-ts | 1,009K | 28.36x | 1.05x faster |
| imperative | 433K | 66.14x | 73.75x slower |

The central finding of these benchmarks: imperative try/catch collapses from 31.9M to 432K ops/sec when errors occur — a 74x slowdown. The cost comes from exception machinery: JavaScript engines capture a full stack trace each time an Error is constructed and thrown.

All FP libraries maintain consistent throughput regardless of success or failure. @oofp/core’s error path is actually faster than its success path because chain and map short-circuit on Left values, skipping computation entirely.
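The short-circuit behaviour can be sketched with hypothetical minimal helpers: once a Left enters the chain, subsequent step functions are never invoked, and no stack trace is ever captured.

```ts
type Either<E, A> = { tag: "Left"; error: E } | { tag: "Right"; value: A };
const left = <E>(error: E): Either<E, never> => ({ tag: "Left", error });

const chain = <E, A, B>(f: (a: A) => Either<E, B>) =>
  (e: Either<E, A>): Either<E, B> => (e.tag === "Right" ? f(e.value) : e);

let calls = 0;
const step = chain((n: number) => {
  calls++; // would run only on the success path
  return left("unused");
});

step(left("boom")); // the Left flows straight through
// calls is still 0: the step body never ran, and unlike `throw`,
// no stack trace was captured anywhere.
```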

The hand-rolled OOP Result tops the charts here because it’s a minimal class with direct method dispatch — no pipe overhead, no tagged union matching. But this speed comes from what it lacks: no pipe/flow composition, no type-class instances, no ecosystem. In practice, building a real application on a hand-rolled Result means reimplementing every combinator @oofp/core provides out of the box.

When errors are expected (validation, parsing, user input), FP error handling isn’t just safer — it’s dramatically faster.


Tests three scenarios: folding a success value, folding a failure value, and recovering from an error with a fallback.
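The three operations can be sketched with a minimal tagged union (illustrative helpers, not any library's actual API):

```ts
type Either<E, A> = { tag: "Left"; error: E } | { tag: "Right"; value: A };
const right = <A>(value: A): Either<never, A> => ({ tag: "Right", value });
const left = <E>(error: E): Either<E, never> => ({ tag: "Left", error });

// fold: collapse both cases into a single value
const match = <E, A, B>(onLeft: (e: E) => B, onRight: (a: A) => B) =>
  (e: Either<E, A>): B => (e.tag === "Right" ? onRight(e.value) : onLeft(e.error));

// recover: replace a Left with a fallback, pass Right through untouched
const orElse = <E, A>(fallback: (e: E) => Either<E, A>) =>
  (e: Either<E, A>): Either<E, A> => (e.tag === "Left" ? fallback(e.error) : e);

const fold = match((e: string) => `err: ${e}`, (n: number) => `ok: ${n}`);
const foldSuccess = fold(right(1));   // "ok: 1"
const foldFailure = fold(left("x"));  // "err: x"
const recovered = orElse((_: string) => right(0))(left("boom")); // Right(0)
```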

Folding a success value:

| Library | ops/sec | Relative |
| --- | --- | --- |
| OOP Result | 23,777K | 1.00x |
| imperative | 23,586K | 1.01x |
| purify-ts | 22,816K | 1.04x |
| effect | 11,586K | 2.05x |
| @oofp/core | 7,439K | 3.20x |
| neverthrow | 3,880K | 6.13x |
| fp-ts | 1,538K | 15.46x |

Folding a failure value:

| Library | ops/sec | Relative |
| --- | --- | --- |
| OOP Result | 24,614K | 1.00x |
| purify-ts | 22,903K | 1.07x |
| effect | 11,548K | 2.13x |
| @oofp/core | 7,759K | 3.17x |
| neverthrow | 3,793K | 6.49x |
| fp-ts | 1,589K | 15.49x |
| imperative | 432K | 57.01x |

Recovering from an error with a fallback:

| Library | ops/sec | Relative |
| --- | --- | --- |
| OOP Result | 25,496K | 1.00x |
| purify-ts | 25,109K | 1.02x |
| effect | 14,480K | 1.76x |
| @oofp/core | 8,419K | 3.03x |
| neverthrow | 2,661K | 9.58x |
| fp-ts | 1,623K | 15.71x |
| imperative | 433K | 58.85x |

The pattern is consistent: imperative code is competitive on the success path but collapses 57-59x when errors occur. This makes try/catch a poor choice for any code path where errors are expected (validation, parsing, network calls, user input).

@oofp/core at ~8M ops/sec on error recovery is 19x faster than imperative try/catch (433K) for the same operation. The OOP Result is faster in raw numbers, but provides only match/orElse — @oofp/core adds chainLeft, tapLeft, bimap, sequence, traverse, and dozens of composable combinators that make complex error-handling pipelines practical without manual wiring.


Tests async operations using each library’s async type: TaskEither, ResultAsync, Effect, EitherAsync, and plain async/await.

Success path:

| Library | ops/sec | Relative |
| --- | --- | --- |
| imperative async/await | 6,927K | 1.00x |
| OOP ResultAsync | 2,528K | 2.74x |
| @oofp/core TaskEither | 1,390K | 4.98x |
| purify-ts EitherAsync | 987K | 7.02x |
| effect Effect | 978K | 7.08x |
| neverthrow ResultAsync | 621K | 11.15x |
| fp-ts TaskEither | 440K | 15.73x |

Error path:

| Library | ops/sec | Relative |
| --- | --- | --- |
| OOP ResultAsync | 3,372K | 1.00x |
| @oofp/core TaskEither | 1,505K | 2.24x |
| purify-ts EitherAsync | 1,049K | 3.22x |
| neverthrow ResultAsync | 863K | 3.91x |
| fp-ts TaskEither | 483K | 6.99x |
| imperative async/await | 330K | 10.23x |
| effect Effect | 249K | 13.54x |

In async code, the gap between libraries narrows because Promise resolution dominates the cost. However, the pattern holds:

  • Imperative async/await is fastest on success (6.9M) because there’s no wrapper overhead
  • Imperative collapses on error (330K) — 21x slower than its own success path
  • @oofp/core TaskEither is 3.2x faster than fp-ts TaskEither across both paths
  • @oofp/core is 4.5x faster than imperative on error paths
  • Effect is surprisingly slow on error paths (249K) due to its fiber/runtime machinery

The OOP ResultAsync is fast because it’s minimal: just a Promise<Result<T,E>> wrapper with no runtime overhead. However, this simplicity is also its limitation — it provides only flatMap, map, and match. It has no sequence, traverse, concurrency, tapTEAsync, chainLeft, or any combinator for composing async operations beyond linear chaining. In a real application, you would need to reimplement all of these — at which point you’re building @oofp/core from scratch, one bug at a time.


Real-world async orchestration patterns extracted from production applications. These benchmarks simulate the kind of composition patterns you find in backend services: sequential pipelines, parallel fan-out, rate-limited processing, error recovery chains, middleware wrappers, and fire-and-forget side effects.

All async operations use Promise.resolve() (microtask boundary) to measure orchestration overhead — the cost of composing async operations — not I/O time. In production, I/O dominates and these differences become negligible, but understanding the overhead reveals the efficiency of each library’s composition model.

5a. Sequential Chain — 7-step pipeline

Simulates a 7-step processing pipeline where each step depends on the previous: parse -> validate -> enrich -> transform -> save -> notify -> format. This is the most common pattern in service handlers.

| Library | API | ops/sec | Relative |
| --- | --- | --- | --- |
| OOP | .flatMap x7 | 7,770K | 1.00x |
| neverthrow | .andThen x7 | 5,498K | 1.41x |
| imperative | await x7 | 1,614K | 4.81x |
| @oofp/core | TE.chain x7 | 631K | 12.32x |
| purify-ts | .chain x7 | 311K | 24.96x |
| fp-ts | TE.chain x7 | 270K | 28.76x |
| effect | Effect.flatMap x7 | 220K | 35.25x |

Analysis: OOP and neverthrow are extremely fast because they use lazy class-based chaining — no Promise is created until the chain is executed. @oofp/core’s TE.chain wraps each step in a thunk (() => Promise), which adds closure allocation overhead but enables lazy evaluation and referential transparency. fp-ts and Effect pay additional costs from their more complex type machinery. Note that OOP’s speed advantage only holds for simple linear chains; the moment you need branching, parallel composition, or error recovery, you must hand-write the logic that @oofp/core provides as composable one-liners.
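The thunk-based model described above can be sketched as follows (hypothetical minimal helpers; @oofp/core's real TaskEither has many more combinators):

```ts
type Either<E, A> = { tag: "Left"; error: E } | { tag: "Right"; value: A };
type TaskEither<E, A> = () => Promise<Either<E, A>>;

const right = <A>(value: A): Either<never, A> => ({ tag: "Right", value });
const of = <A>(value: A): TaskEither<never, A> =>
  () => Promise.resolve(right(value));

// Each chain call allocates one closure: the price of laziness.
const chain = <E, A, B>(f: (a: A) => TaskEither<E, B>) =>
  (te: TaskEither<E, A>): TaskEither<E, B> =>
    async () => {
      const e = await te();
      return e.tag === "Right" ? f(e.value)() : e;
    };

// Building the pipeline performs no work...
const pipeline = chain((n: number) => of(n * 2))(of(21));
// ...nothing runs until the thunk is finally invoked:
const run = pipeline(); // resolves to Right(42)
```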

5b. Parallel Execution — 5 independent fetches


Simulates fetching 5 independent resources simultaneously (user, permissions, settings, notifications, activity). Tests each library’s applicative/parallel combinator.

| Library | API | ops/sec | Relative |
| --- | --- | --- | --- |
| OOP | ResultAsync.all | 7,922K | 1.00x |
| neverthrow | ResultAsync.combine | 5,650K | 1.40x |
| imperative | Promise.all | 1,814K | 4.37x |
| purify-ts | EitherAsync.all | 465K | 17.04x |
| @oofp/core | TE.concurrency | 359K | 22.07x |
| fp-ts | sequenceT(ApplicativePar) | 300K | 26.41x |
| effect | Effect.all (unbounded) | 22K | 360.09x |

Analysis: Effect’s parallel execution is notably slow (22K ops/sec) because Effect.all with concurrency: "unbounded" must instantiate fibers and schedule them through the runtime. @oofp/core uses TE.concurrency (not TE.sequence, which is sequential) — internally it leverages apply, which runs both sides via Promise.all, giving it true parallel semantics. purify-ts edges ahead here because its EitherAsync.all defers to a simpler Promise.all wrapper with less type machinery overhead. The OOP ResultAsync.all is fast because it’s just Promise.all wrapped in a class — but it provides no type-safe heterogeneous tuple support. @oofp/core’s TE.concurrency correctly infers the result tuple type from inputs of different types, while OOP’s version requires manual as unknown[] casts.

5c. Controlled Concurrency — 20 items, max 3 concurrent


Simulates processing a batch of 20 items with a concurrency limit of 3 — a common pattern for rate-limited API calls, database connections, or external service throttling. Only @oofp/core and Effect have native concurrency control; all others require manual batching.

| Library | API | ops/sec | Relative | Native? |
| --- | --- | --- | --- | --- |
| imperative | manual batching | 353K | 1.00x | No |
| OOP | manual batching | 175K | 2.02x | No |
| purify-ts | manual batching | 72K | 4.89x | No |
| @oofp/core | TE.concurrency({concurrency:3}) | 71K | 4.97x | Yes |
| neverthrow | manual batching | 52K | 6.83x | No |
| fp-ts | manual batching | 31K | 11.25x | No |
| effect | Effect.forEach({concurrency:3}) | 17K | 21.08x | Yes |

Analysis: This is where @oofp/core’s native TE.concurrency shines for ergonomics. While imperative manual batching is fastest (no abstraction overhead), @oofp/core provides a one-liner API that matches manually-batched purify-ts performance and is 4.2x faster than Effect’s native Effect.forEach. The manual batching implementations require 10-15 lines of boilerplate that @oofp/core eliminates.
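For reference, the manual batching that the non-native libraries require looks roughly like this (illustrative sketch; wave-based batching, not a sliding-window rate limiter):

```ts
// Process `items` in waves of at most `limit`, awaiting each wave before
// starting the next. The slicing and error propagation here are exactly the
// boilerplate a native concurrency combinator replaces.
async function mapBatched<A, B>(
  items: A[],
  limit: number,
  f: (a: A) => Promise<B>,
): Promise<B[]> {
  const out: B[] = [];
  for (let i = 0; i < items.length; i += limit) {
    const wave = items.slice(i, i + limit);         // off-by-one territory
    out.push(...(await Promise.all(wave.map(f))));  // a rejection here must surface
  }
  return out;
}

// 20 items, at most 3 in flight per wave
const batched = mapBatched(
  Array.from({ length: 20 }, (_, i) => i),
  3,
  async (n) => n * 2,
);
```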

5d. Error Recovery — double failure + fallback + continue


Simulates a scenario where a primary service fails, a secondary service also fails, but a fallback succeeds and the pipeline continues. Tests double recovery (chainLeft/orElse x2) followed by continuation.

| Library | API | ops/sec | Relative |
| --- | --- | --- | --- |
| OOP | .orElse x2 + .flatMap | 7,691K | 1.00x |
| neverthrow | .orElse x2 + .andThen | 5,899K | 1.30x |
| imperative | nested try/catch | 934K | 8.23x |
| @oofp/core | TE.chainLeft x2 + chain | 682K | 11.28x |
| fp-ts | TE.orElse x2 + chain | 416K | 18.48x |
| purify-ts | .chainLeft x2 + .chain | 401K | 19.19x |
| effect | Effect.catchAll x2 + flatMap | 299K | 25.71x |

Analysis: @oofp/core’s TE.chainLeft provides declarative error recovery that’s 1.6x faster than fp-ts’s TE.orElse and 1.7x faster than purify-ts’s .chainLeft. The imperative approach (nested try/catch) is surprisingly competitive here because the simulated failures use Promise.resolve (no actual exception throwing). In production with real exceptions, the FP advantage would be much larger (as shown in the sync error-handling benchmarks).

5e. Middleware Wrapper — credits check/deduct/rollback


Simulates a middleware pattern from production: check if the user has enough credits, execute the main operation, then deduct credits on success or rollback on failure. This is the _consumeCredits pattern from real applications.

| Library | API | ops/sec | Relative |
| --- | --- | --- | --- |
| OOP | method chaining | 7,648K | 1.00x |
| neverthrow | method chaining | 5,830K | 1.31x |
| imperative | try/finally | 2,877K | 2.66x |
| @oofp/core | pipe composition | 722K | 10.59x |
| purify-ts | method chaining | 451K | 16.96x |
| effect | Effect composition | 319K | 24.00x |
| fp-ts | pipe composition | 287K | 26.69x |

Analysis: @oofp/core’s withCredits higher-order function composes cleanly with pipe — the middleware is a reusable function that wraps any TaskEither pipeline. It’s 2.5x faster than fp-ts and 2.3x faster than Effect for the same pattern. The OOP and neverthrow versions are faster in raw throughput but require defining the middleware as a class method or closure manually for every use case. @oofp/core’s approach is composable: withCredits is a generic HOF that can wrap any TaskEither pipeline without modification, while the OOP version is tightly coupled to the specific ResultAsync chain it wraps.

5f. Fire-and-Forget — pipeline + 2 detached side effects


Simulates a pipeline that completes its main work and then fires off non-critical side effects (analytics tracking, audit logging) without waiting for them. Only @oofp/core (TE.tapTEAsync) and Effect (Effect.fork) have native fire-and-forget; all others require manual promise.catch(() => {}) patterns.

| Library | API | ops/sec | Relative | Native? |
| --- | --- | --- | --- | --- |
| OOP | .tap + manual fire | 8,318K | 1.00x | No |
| neverthrow | .andThen + manual fire | 6,122K | 1.36x | No |
| imperative | promise.catch(() => {}) | 1,602K | 5.19x | No |
| @oofp/core | TE.tapTEAsync (native) | 564K | 14.75x | Yes |
| purify-ts | .ifRight + manual fire | 329K | 25.27x | No |
| fp-ts | chainFirst + manual fire | 318K | 26.18x | No |
| effect | Effect.tap + fork (native) | 106K | 78.65x | Yes |

Analysis: @oofp/core’s TE.tapTEAsync provides a clean declarative API for fire-and-forget that’s 5.3x faster than Effect’s fork. Without native support, developers must write error-swallowing wrappers manually — code that’s easy to get wrong (forgotten .catch leads to unhandled rejections). @oofp/core eliminates this class of bugs while maintaining good performance.
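The manual pattern that native fire-and-forget replaces can be sketched like this (illustrative helper, not a library API):

```ts
// Detach a side effect and swallow its errors so a failure can never
// become an unhandled rejection.
function fireAndForget(effect: () => Promise<unknown>): void {
  // The .catch below is the part that is easy to forget; without it a
  // rejected side effect crashes the process with an unhandled rejection.
  void effect().catch(() => {});
}

let tracked = false;
async function handler(): Promise<string> {
  const result = "main work done";
  fireAndForget(async () => { tracked = true; });                 // analytics
  fireAndForget(async () => { throw new Error("audit down"); }); // swallowed
  return result; // returns without waiting for either side effect
}
```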

Comparing @oofp/core against real, published FP libraries (the OOP baseline is a hand-rolled class, not a usable library — see below):

| Scenario | @oofp/core rank (among published libs) | vs fp-ts | vs Effect | vs neverthrow | Notable |
| --- | --- | --- | --- | --- | --- |
| Sequential chain | 1st | 2.3x faster | 2.9x faster | Neverthrow faster (lazy) but no native patterns | @oofp/core fastest pipe-based lib |
| Parallel execution | 3rd | 1.2x faster | 16x faster | Neverthrow faster (lazy); purify-ts faster (simpler wrapper) | purify-ts edges out @oofp/core due to less type machinery overhead |
| Controlled concurrency | 1st | 2.3x faster | 4.2x faster | Neverthrow slower (manual) | @oofp/core has native API; neverthrow/purify-ts don’t |
| Error recovery | 1st | 1.6x faster | 2.3x faster | Neverthrow faster (lazy) | Declarative chainLeft x2 composition |
| Middleware wrapper | 1st | 2.5x faster | 2.3x faster | Neverthrow faster but tightly coupled | @oofp/core composable HOF via pipe |
| Fire-and-forget | 1st | 1.8x faster | 5.3x faster | Neverthrow faster but manual | @oofp/core has native API; neverthrow/purify-ts don’t |

Key takeaways:

  1. @oofp/core is the fastest published pipe-based FP library across 5 of 6 orchestration patterns — 1.2-2.5x faster than fp-ts and 2.3-16x faster than Effect. In parallel execution, purify-ts edges ahead (465K vs 359K) due to its simpler Promise.all wrapper with less type machinery overhead.
  2. @oofp/core is the fastest published FP library with native orchestration support. TE.concurrency and TE.tapTEAsync are one-liner APIs that neverthrow, purify-ts, and fp-ts simply don’t have. Effect has native equivalents but runs 4-16x slower.
  3. Native APIs matter for correctness, not just ergonomics. A forgotten .catch() in manual fire-and-forget causes unhandled rejections in production. Manual concurrency batching is 10-15 lines of error-prone code per call site. @oofp/core eliminates both classes of bugs.
  4. The OOP/neverthrow raw speed advantage is synthetic. Their lazy class-based chaining avoids Promise allocation until execution, producing impressive benchmark numbers. But neverthrow lacks native concurrency and fire-and-forget, and the OOP Result is a ~50-line class with no ecosystem — neither is a practical replacement for @oofp/core’s full combinator set.
  5. For real production backends, the orchestration overhead is negligible compared to I/O (database queries take 1-50ms; the difference between 631K and 7,770K ops/sec is 0.001ms vs 0.0001ms per operation). Choose based on API ergonomics, type safety, and native pattern support — areas where @oofp/core excels.

Ranking real, published npm packages across all 16 benchmark scenarios (lower is better):

| Rank | Library | Strengths | Weaknesses |
| --- | --- | --- | --- |
| 1 | @oofp/core | Fastest pipe-based FP lib in 5/6 orchestration scenarios, zero deps, native concurrency + fire-and-forget + Reader DI, 1.2-2.5x faster than fp-ts, 2.3-16x faster than Effect | Slower than method-chaining libs on sync hot paths; purify-ts edges ahead on parallel execution |
| 2 | neverthrow | Fast lazy chaining, simple API, good TS integration | No native concurrency/fire-and-forget, slower creation, no pipe composition |
| 3 | purify-ts | Fast sync speed, method chaining | Smaller ecosystem, no native orchestration patterns, limited async combinators |
| 4 | effect | Rich ecosystem, native concurrency, scheduler | 16-360x slower than others on parallel/fire-and-forget, large bundle, steep learning curve |
| 5 | fp-ts | Mature, extensive type classes | 15-33x slower than OOP, no native orchestration, effectively EOL |

| Baseline | Role in benchmarks | Why it’s not a practical choice |
| --- | --- | --- |
| OOP Result (hand-rolled) | Performance ceiling — shows maximum possible speed | Not a library. No npm package, no docs, no ecosystem, no type-safe combinators. Every pattern beyond map/chain must be reimplemented manually. See Why not hand-roll your own? |
| Imperative (try/catch) | Native JS baseline | Catastrophic on error paths (57-74x slower). No composable error handling, no type safety for errors |
  • CPU-bound hot loops with no errors: Use imperative code. The overhead of any FP wrapper is measurable.
  • Application logic with expected errors: Use any FP library. Even the “slowest” (fp-ts at ~1M ops/sec for pipelines) is fast enough for virtually all applications. The consistent error-path performance is worth far more than the happy-path overhead.
  • Backend services with async orchestration: @oofp/core offers the best balance — native TE.concurrency, TE.tapTEAsync, and TE.chainLeft cover the most common patterns without manual boilerplate, at 2.3-16x faster than Effect.
  • New projects choosing an FP library: @oofp/core offers a good balance of performance, API simplicity, and zero dependencies. purify-ts is faster on sync paths if you prefer method chaining. Effect is the most feature-rich but comes with a large bundle and complex runtime.
  • Migrating from fp-ts: @oofp/core has a similar pipe-based API and is 1.6-6x faster across all benchmarks, with native support for patterns fp-ts lacks entirely.
  • “I’ll just hand-roll my own Result class”: You’ll get great benchmark numbers on simple chains, but as your application grows you’ll need concurrency control, fire-and-forget, error recovery, middleware, Reader-based DI, traversals, and dozens of combinators. You’ll end up building a library — untested, undocumented, and maintained by your team alone. See the next section.

Why not hand-roll your own?

The hand-rolled OOP Result tops nearly every benchmark. So why not just copy the ~50-line class and use it?

Because benchmarks measure the simplest case, and real applications don’t stay simple.

The hand-rolled class provides:

  • map, flatMap, match (fold)
  • Result.ok(value), Result.err(error)
  • ResultAsync with all (Promise.all wrapper)

That’s it. ~50 lines of code, ~3 methods per class.
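Compressed to its essentials, such a class might look like this (assumed shape, for illustration):

```ts
// Fast and simple, but every capability beyond map/flatMap/match is missing.
class Result<T, E> {
  private constructor(
    private readonly val: T | undefined,
    private readonly err: E | undefined,
    readonly isOk: boolean,
  ) {}

  static ok<T, E = never>(value: T): Result<T, E> {
    return new Result<T, E>(value, undefined, true);
  }
  static err<T, E>(error: E): Result<T, E> {
    return new Result<T, E>(undefined, error, false);
  }

  map<U>(f: (t: T) => U): Result<U, E> {
    return this.isOk ? Result.ok<U, E>(f(this.val as T)) : (this as unknown as Result<U, E>);
  }
  flatMap<U>(f: (t: T) => Result<U, E>): Result<U, E> {
    return this.isOk ? f(this.val as T) : (this as unknown as Result<U, E>);
  }
  match<B>(onErr: (e: E) => B, onOk: (t: T) => B): B {
    return this.isOk ? onOk(this.val as T) : onErr(this.err as E);
  }
}

const n = Result.ok(21).map((x) => x * 2).match(() => -1, (x) => x); // 42
```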

What @oofp/core provides that a hand-rolled Result doesn’t

| Capability | @oofp/core | Hand-rolled OOP |
| --- | --- | --- |
| chain, map, flatMap | Yes | Yes |
| chainLeft / error recovery | Yes | Must implement |
| tapTEAsync / fire-and-forget | Yes (native) | Must implement + handle .catch() |
| concurrency({concurrency: N}) | Yes (native) | Must implement batching logic |
| sequence / parallel with type-safe tuples | Yes | Must implement + cast to unknown[] |
| traverse / mapping + sequencing | Yes | Must implement |
| tryCatch / safe Promise wrapping | Yes | Must implement |
| fromNullable, fromPredicate | Yes | Must implement |
| bimap, mapLeft | Yes | Must implement |
| Reader monad (dependency injection) | RTE<R, E, A> | Not available |
| Reader + concurrency (RTE.concurrency) | Yes | Not available |
| Reader + fire-and-forget (RTE.tapRTEAsync) | Yes | Not available |
| pipe, flow, compose | Yes | Not available |
| Maybe, State, IO, Task monads | Yes | Not available |
| Sub-path exports / tree-shaking | Yes | N/A |
| Published npm package with semver | Yes | No |
| Documentation | Yes | No |
| Test suite | Yes (296+ tests) | No |
| TypeScript type inference for combinators | Fully tested | Whatever you write |

A hand-rolled Result class works great on day 1. By month 3, your team has:

  1. Reimplemented 15+ combinators: chainLeft, tapTE, sequence, traverse, tryCatch, fromNullable, bimap, mapLeft, etc. Each one is 5-20 lines, each one needs tests, each one has edge cases.
  2. Written manual concurrency batching in 8 different files with subtle bugs (off-by-one in batch slicing, missing error propagation, no backpressure).
  3. Forgotten .catch() on fire-and-forget in 2 places, causing intermittent unhandled rejection crashes in production.
  4. No dependency injection — passed config and logger as function arguments through 6 layers of calls instead of using Reader.
  5. No documentation — new team members spend 2 days understanding the custom Result class and its undocumented combinators.

@oofp/core is 0 dependencies, tree-shakeable, and tested. The performance gap between 722K ops/sec (@oofp/core middleware) and 7,648K ops/sec (hand-rolled OOP) is 0.001ms vs 0.0001ms per operation — invisible in any application where a single database query takes 1-50ms.

The question isn’t “can I hand-roll something faster?” — it’s “do I want to maintain my own FP library?”


All numbers are from a single benchmark run. Vitest reports ±RME (relative margin of error) for each measurement. For the most accurate comparison, run the benchmarks yourself:

```sh
pnpm --filter @oofp/benchmarks bench
```

Source code: packages/benchmarks/comparison/


@oofp/focal Benchmarks

Performance and maintainability comparison of @oofp/focal against plain imperative code, across four scenarios extracted from a real normalized-store API response (LinkedIn Voyager Dash format).

  • Runtime: Node.js 20, Apple Silicon (ARM64)
  • Tool: Vitest bench
  • Fixture: A NormalizedStoreResponse with ~22 entities across 8 types (Profile, Position, PositionGroup, Skill, Certification, Language, Education, plus unknown entities as noise)
  • Candidates: Three implementations per scenario — Focal API (high-level), optics (low-level Lens/Prism/Traversal), and imperative

The fixture models the exact shape of a normalized API response: a heterogeneous included[] array where each entry is discriminated by a $type string — the same pattern as Redux normalized state or JSON:API.

```sh
pnpm --filter @oofp/benchmarks exec vitest bench --run comparison/focal
```

Source code: packages/benchmarks/comparison/focal/


Task: extract firstName from a ProfileEntity.

| Implementation | Relative |
| --- | --- |
| imperative profile.firstName | 1.0x (reference) |
| optics Lens.prop('firstName') | ~1.7x slower |
| Focal API from().prop().get() | ~6x slower |

Direct property access will always be the fastest for simple reads — that is expected. What this measures is the constant overhead of the abstraction itself, not a scaling cost. Optic chains do not get slower as nesting depth increases; they compose once.
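The constant overhead comes from going through getter functions instead of a direct property access. A minimal prop lens (illustrative; not @oofp/focal's actual implementation) shows the indirection:

```ts
// A lens pairs a getter with an immutable setter, and composes.
interface Lens<S, A> {
  get: (s: S) => A;
  set: (a: A, s: S) => S;
}

const prop = <S, K extends keyof S>(k: K): Lens<S, S[K]> => ({
  get: (s) => s[k],
  set: (a, s) => ({ ...s, [k]: a } as S), // immutable update via spread
});

// Composition happens once; reading through the result is two function
// calls regardless of how deep the lens was built.
const compose = <S, A, B>(outer: Lens<S, A>, inner: Lens<A, B>): Lens<S, B> => ({
  get: (s) => inner.get(outer.get(s)),
  set: (b, s) => outer.set(inner.set(b, outer.get(s)), s),
});

type Profile = { firstName: string; lastName: string };
const firstName = prop<Profile, "firstName">("firstName");

const read = firstName.get({ firstName: "Ada", lastName: "L" }); // "Ada"
const updated = firstName.set("Grace", { firstName: "Ada", lastName: "L" });
```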


Task: produce a new NormalizedStoreResponse where every ProfileEntity’s firstName is replaced — three levels of nesting deep.

| Implementation | Relative |
| --- | --- |
| imperative map + spread chain | 1.0x (reference) |
| optics Traversal.compose + modify | ~4.7x slower |
| Focal API from().elements().match().prop().modify() | ~8x slower |

Both do the same structural traversal. The imperative approach uses two levels of spread ({ ...response, included: response.included.map(...) }). The Focal API expresses the entire path as a single declarative pipe with no manual spread — the optic machinery handles the immutable reconstruction.


Task: collect only SkillEntity items from a heterogeneous IncludedEntity[].

| Implementation | Relative |
| --- | --- |
| imperative array.filter(isSkill) | 1.0x (reference) |
| optics Traversal.each + Prism.match + collect | ~5.6x slower |
| Focal API from().elements().match().collect() | ~8.2x slower |

This is the scenario where the maintainability argument is strongest. The imperative version requires a manually-written type guard (function isSkill(e): e is SkillEntity) per variant — 7 guards for 7 types, each a potential source of bugs (wrong $type string, missing field check). Focal.match derives discrimination from the TypeScript type directly: adding a new variant means one new match(...) call, not a new guard function.
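For comparison, the kind of hand-written guard the imperative version needs, one per variant (hypothetical $type strings and entity shapes for illustration):

```ts
type SkillEntity = { $type: "skill"; name: string };
type ProfileEntity = { $type: "profile"; firstName: string };
type IncludedEntity = SkillEntity | ProfileEntity;

// Every guard repeats a discriminant check like this; the real fixture
// needs 7 of them, one per entity type.
function isSkill(e: IncludedEntity): e is SkillEntity {
  return e.$type === "skill";
}

const included: IncludedEntity[] = [
  { $type: "profile", firstName: "Ada" },
  { $type: "skill", name: "TypeScript" },
];

const skills = included.filter(isSkill); // narrowed to SkillEntity[]
```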


Task: transform NormalizedStoreResponse → typed CandidateProfile domain object, discriminating and collecting all 7 entity types, extracting specific fields, and handling optional values.

| Implementation | Relative |
| --- | --- |
| imperative filter/map/find + optional chaining | 1.0x (reference) |
| optics (Lens + Prism + Traversal compositions) | ~7.3x slower |
| Focal API (pipe chains from root) | ~13x slower |

This is the most realistic scenario. The imperative version has 7 type guards, 9 .filter() calls, and optional chaining scattered across multiple locations. When the schema changes, every one of those call sites is a potential update target.

The Focal API is slowest here because each pipe(Focal.from<T>(), ...) is constructed at call time — this is the idiomatic pattern (maximum composability, no shared module-level state). The low-level optics version pre-builds its compositions as module-level constants and reuses them, which accounts for roughly half the gap.


The raw numbers tell one story. The time per call tells a different one:

| Implementation | ops/sec (approx.) | Time per call |
| --- | --- | --- |
| imperative | ~1,170,000 | ~0.86 µs |
| optics (pure) | ~160,000 | ~6.2 µs |
| Focal API | ~89,000 | ~11.2 µs |

A typical HTTP response from a backend service has 50,000–200,000 µs of network latency. The difference between the imperative and Focal API implementations of domainMapping — a transformation that runs once per API response — is ~10 µs. That is < 0.02% of the total response time budget.

This is exactly the scenario Donald Knuth had in mind: “premature optimization is the root of all evil”. The full quote adds: “yet we should not pass up our opportunities in that critical 3%”. A one-per-response transformation is not the critical 3%.

Beyond runtime performance, the three implementations can be compared on objective code-quality signals derived from static analysis of their source files:

| Metric | imperative | optics (pure) | Focal API |
| --- | --- | --- | --- |
| Type guards (function isXxx(): e is T) | 7 | 0 | 0 |
| Lines with spread operators (...) | 2 | 1 | 0 |
| Schema coupling points (hard-coded $type strings + field names) | 7 | 21 | 22 |
| .filter() call sites | 9 | 1 | 1 |
| Reusable optic/focal constants at module level | 0 | 10 | 7 |

Reading the table:

  • Type guards (lower is better): The imperative approach requires one manually-written guard per union variant — 7 for 7 types. Both optic approaches derive discrimination from the TypeScript type: zero guards. Each guard in the imperative version is a potential bug (wrong string literal, wrong field check) that the compiler cannot catch.
  • Spread operators (lower is better): The Focal API produces zero — modify + run handles the full immutable reconstruction. The imperative version’s 2 spreads represent the manual nesting that grows with schema depth.
  • Schema coupling points: Counterintuitively, both optic approaches score higher than imperative here because field names appear as string arguments to prop("firstName"), prop("title"), etc. The key difference is centralization: those strings appear in one definition that all consumers share, rather than being duplicated across every call site.
  • Filter calls (lower is better): The imperative version calls .filter() 9 times across the four scenarios. Both optic approaches reduce this to 1 (a single .filter() to eliminate undefined values from an optional field).
  • Reusable compositions (higher is better): The low-level optics version defines 10 module-level constants (prisms and traversals) that can be imported and composed anywhere in the codebase. The Focal API has 7 (the T_* string constants). The imperative version has none — every function rebuilds its logic from scratch.

Does the Focal API overhead grow with collection size, or is it a constant multiplier?

Fixtures were generated by repeating the base included[] array at four scales (×1 = 22 entities, ×10 = 220, ×50 = 1,100, ×250 = 5,500). The table shows the imperative/Focal API speed ratio at each scale — if the abstraction had super-linear cost, the ratio would grow. If the overhead is a constant multiplier, it would stay flat. What actually happens is neither: the ratio shrinks.

filterByType — imperative / Focal API ratio by scale:

| Scale | Entities | Ratio |
| --- | --- | --- |
| ×1 | 22 | 7.6x |
| ×10 | 220 | 6.1x |
| ×50 | 1,100 | 5.8x |
| ×250 | 5,500 | 2.6x |

deepUpdate:

| Scale | Entities | Ratio |
| --- | --- | --- |
| ×1 | 22 | 8.5x |
| ×10 | 220 | 6.8x |
| ×50 | 1,100 | 6.2x |
| ×250 | 5,500 | 3.9x |

domainMapping:

| Scale | Entities | Ratio |
| --- | --- | --- |
| ×1 | 22 | 12.8x |
| ×10 | 220 | 8.2x |
| ×50 | 1,100 | 7.5x |
| ×250 | 5,500 | 7.4x |

Analysis: the gap closes as collection size grows. The reason is that the Focal API has a fixed cost at call time — constructing the pipe — on top of the O(n) traversal cost that both approaches share. As n increases, the traversal dominates and the fixed construction cost becomes a smaller fraction of the total. There is no asymptotic penalty: the Focal API scales at least as well as imperative code for large collections. The overhead is a constant that matters less and less as the data grows.


If the fixed cost is pipe construction, pre-building the route as a module-level constant and only applying the terminator at call time should recover most of that cost:

```ts
// Idiomatic — full pipe constructed on every call
pipe(Focal.from<ProfileEntity>(), Focal.prop("firstName"), Focal.get(profile));

// Pre-built — route is a module-level constant, only the terminator varies
const firstNameFocal = pipe(Focal.from<ProfileEntity>(), Focal.prop("firstName"));
pipe(firstNameFocal, Focal.get(profile));
```

Four candidates were measured: Focal idiomatic, Focal pre-built, optics pre-built (low-level Lens/Prism/Traversal), and imperative.

Read simple (get firstName):

| Candidate | Relative to idiomatic |
| --- | --- |
| Focal API — idiomatic | 1.0x (reference) |
| Focal API — pre-built | 2.7x faster |
| Optics — pre-built | 3.8x faster |
| Imperative | 5.5x faster |

Collect (filterByType):

| Candidate | Relative to idiomatic |
| --- | --- |
| Focal API — idiomatic | 1.0x (reference) |
| Focal API — pre-built | 1.28x faster |
| Optics — pre-built | 1.37x faster |
| Imperative | 7.3x faster |

Modify + run (deepUpdate):

| Candidate | Relative to idiomatic |
| --- | --- |
| Focal API — idiomatic | 1.0x (reference) |
| Focal API — pre-built | 1.46x faster |
| Optics — pre-built | 1.64x faster |
| Imperative | 8.5x faster |

Analysis: pre-building a route always helps, but how much depends on what dominates. For simple reads, construction is the main cost — pre-building gives a 2.7x improvement. For traversals over larger collections, the iteration cost dominates and pre-building gives only 1.3–1.5x. In every case, Focal pre-built practically matches optics pre-built — the residual gap is just the Focal wrapper over the underlying optic, which is negligible.

The practical implication: if a Focal route appears in a measured hot path, extract it to a module-level constant. For all other application code, the idiomatic per-call pattern is perfectly sufficient.


The ~8-13x gap is irrelevant for:

  • Any transformation that runs once per API response or user action
  • Any code path where network I/O, database queries, or rendering dominate
  • Any throughput below ~1M calls/sec in a hot loop

The gap does matter for:

  • Hot paths with measured CPU bottlenecks (a profiler shows this function in the top 3%)
  • Streaming parsers processing millions of records per second
  • WebGL / game loops running at 60 Hz with tight per-frame budgets

For everything else — which is virtually all application code — the Fowler principle applies: code is read far more often than it is written, and the cost of change is the metric that matters in production.