Benchmarks
Comparative performance analysis of @oofp/core against fp-ts, Effect, neverthrow, purify-ts, OOP patterns, and imperative code.
Comparative performance benchmarks of @oofp/core against other TypeScript FP libraries and imperative patterns.
Methodology
Section titled “Methodology”- Runtime: Node.js 20, Apple Silicon (ARM64)
- Tool: Vitest bench (built on tinybench)
- Warm-up: Each benchmark runs a warm-up phase before measurement
- Iterations: Vitest automatically determines iteration count for statistical significance
- Metric: Operations per second (ops/sec), higher is better
All implementations perform identical work in each benchmark. The only difference is the abstraction used.
Libraries tested
Section titled “Libraries tested”| Library | Version | Style | Type |
|---|---|---|---|
| @oofp/core | workspace | Pipe-based (data-last) | Either<E, A> tagged union |
| fp-ts | 2.x | Pipe-based (data-last) | Either<E, A> tagged union |
| Effect | 3.x | Pipe-based (dual API) | Either<A, E> (reversed params) |
| neverthrow | 8.x | Method chaining | Result<T, E> class |
| purify-ts | 2.x | Method chaining | Either<L, R> class |
| OOP Result | N/A | Method chaining | Hand-rolled Result<T, E> class (theoretical baseline) |
| Imperative | N/A | try/catch, null checks | Plain objects + exceptions |
Note on OOP Result: The hand-rolled Result class is included as a theoretical performance ceiling — the fastest possible FP-style error handling with minimal abstraction. It is not a published library and lacks everything that makes a real FP library useful: no npm package, no ecosystem, no advanced type inference, no composable combinators beyond basic
map/chain, and no native support for concurrency, fire-and-forget, error accumulation, middleware, or Reader-based dependency injection. Every advanced pattern must be manually reimplemented per project. It represents the answer to “how fast could this be if I hand-rolled everything?” — a useful baseline, but not a practical choice for production applications.
How to reproduce
Section titled “How to reproduce”git clone https://github.com/thexpert507/oofp.gitcd oofppnpm installpnpm --filter @oofp/benchmarks bench1. Creation
Section titled “1. Creation”Measures the cost of creating success and failure values.
Success value
Section titled “Success value”| Library | ops/sec | Relative |
|---|---|---|
purify-ts Right(42) | 26,517K | 1.00x |
effect Either.right(42) | 25,886K | 1.02x |
imperative { ok: true } | 25,829K | 1.03x |
OOP Result.ok(42) | 25,771K | 1.03x |
@oofp/core E.right(42) | 20,686K | 1.28x |
fp-ts E.right(42) | 12,340K | 2.15x |
neverthrow ok(42) | 6,793K | 3.90x |
Failure value
Section titled “Failure value”| Library | ops/sec | Relative |
|---|---|---|
purify-ts Left(err) | 26,175K | 1.00x |
effect Either.left(err) | 25,963K | 1.01x |
imperative { ok: false } | 25,708K | 1.02x |
OOP Result.err(err) | 25,282K | 1.04x |
@oofp/core E.left(err) | 20,970K | 1.25x |
fp-ts E.left(err) | 12,033K | 2.18x |
neverthrow err(err) | 6,838K | 3.83x |
Analysis
Section titled “Analysis”All libraries create values in the tens of millions of ops/sec. The differences are negligible in practice — even at the “slowest” (neverthrow at 6.8M), you can create 6.8 million Result values per second.
The cost comes down to object shape:
- purify-ts, effect, OOP, imperative: Simple object or class instantiation
- @oofp/core, fp-ts: Tagged union with
{ tag: 'Right', value: 42 }— slightly more allocation than{ _tag: 'Right', right: 42 }but still very fast - neverthrow: Class-based
new Ok(42)with prototype chain — more overhead from constructor + prototype setup
2. Pipeline (5 Steps)
Section titled “2. Pipeline (5 Steps)”A realistic 5-step pipeline: parse string to number -> validate range -> double -> validate even -> format to string.
Each step uses the library’s idiomatic map/chain/flatMap pattern.
Success path (input "42")
Section titled “Success path (input "42")”| Library | ops/sec | Relative |
|---|---|---|
| imperative | 31,935K | 1.00x |
| OOP Result | 29,618K | 1.08x |
| purify-ts | 27,866K | 1.15x |
| effect | 10,316K | 3.10x |
| @oofp/core | 6,159K | 5.19x |
| neverthrow | 2,815K | 11.35x |
| fp-ts | 963K | 33.16x |
Failure at parse (input "abc")
Section titled “Failure at parse (input "abc")”| Library | ops/sec | Relative | vs own success |
|---|---|---|---|
| OOP Result | 31,235K | 1.00x | 1.05x faster |
| purify-ts | 29,824K | 1.05x | 1.07x faster |
| effect | 11,258K | 2.77x | 1.09x faster |
| @oofp/core | 9,263K | 3.37x | 1.50x faster |
| neverthrow | 6,627K | 4.71x | 2.35x faster |
| fp-ts | 1,074K | 29.09x | 1.11x faster |
| imperative | 432K | 72.23x | 73.86x slower |
Failure at validation (input "5000")
Section titled “Failure at validation (input "5000")”| Library | ops/sec | Relative | vs own success |
|---|---|---|---|
| OOP Result | 28,607K | 1.00x | 0.97x |
| purify-ts | 27,058K | 1.06x | 0.97x |
| effect | 10,348K | 2.76x | 1.00x |
| @oofp/core | 7,690K | 3.72x | 1.25x faster |
| neverthrow | 3,665K | 7.81x | 1.30x faster |
| fp-ts | 1,009K | 28.36x | 1.05x faster |
| imperative | 433K | 66.14x | 73.75x slower |
Key insight
Section titled “Key insight”The central finding of these benchmarks: imperative try/catch collapses from 31.9M to 432K ops/sec when errors occur — a 74x slowdown. This is because JavaScript engines must construct a full stack trace for every throw.
All FP libraries maintain consistent throughput regardless of success or failure. @oofp/core’s error path is actually faster than its success path because chain and map short-circuit on Left values, skipping computation entirely.
The hand-rolled OOP Result tops the charts here because it’s a minimal class with direct method dispatch — no pipe overhead, no tagged union matching. But this speed comes from what it lacks: no pipe/flow composition, no type-class instances, no ecosystem. In practice, building a real application on a hand-rolled Result means reimplementing every combinator @oofp/core provides out of the box.
When errors are expected (validation, parsing, user input), FP error handling isn’t just safer — it’s dramatically faster.
3. Error Handling
Section titled “3. Error Handling”Tests three scenarios: folding a success value, folding a failure value, and recovering from an error with a fallback.
Success path (fold/match)
Section titled “Success path (fold/match)”| Library | ops/sec | Relative |
|---|---|---|
| OOP Result | 23,777K | 1.00x |
| imperative | 23,586K | 1.01x |
| purify-ts | 22,816K | 1.04x |
| effect | 11,586K | 2.05x |
| @oofp/core | 7,439K | 3.20x |
| neverthrow | 3,880K | 6.13x |
| fp-ts | 1,538K | 15.46x |
Failure path (fold/match)
Section titled “Failure path (fold/match)”| Library | ops/sec | Relative |
|---|---|---|
| OOP Result | 24,614K | 1.00x |
| purify-ts | 22,903K | 1.07x |
| effect | 11,548K | 2.13x |
| @oofp/core | 7,759K | 3.17x |
| neverthrow | 3,793K | 6.49x |
| fp-ts | 1,589K | 15.49x |
| imperative | 432K | 57.01x |
Error recovery (orElse/chainLeft)
Section titled “Error recovery (orElse/chainLeft)”| Library | ops/sec | Relative |
|---|---|---|
| OOP Result | 25,496K | 1.00x |
| purify-ts | 25,109K | 1.02x |
| effect | 14,480K | 1.76x |
| @oofp/core | 8,419K | 3.03x |
| neverthrow | 2,661K | 9.58x |
| fp-ts | 1,623K | 15.71x |
| imperative | 433K | 58.85x |
Analysis
Section titled “Analysis”The pattern is consistent: imperative code is competitive on the success path but collapses 57-59x when errors occur. This makes try/catch a poor choice for any code path where errors are expected (validation, parsing, network calls, user input).
@oofp/core at ~8M ops/sec on error recovery is 19x faster than imperative try/catch (433K) for the same operation. The OOP Result is faster in raw numbers, but provides only match/orElse — @oofp/core adds chainLeft, tapLeft, bimap, sequence, traverse, and dozens of composable combinators that make complex error-handling pipelines practical without manual wiring.
4. Async Pipeline
Section titled “4. Async Pipeline”Tests async operations using each library’s async type: TaskEither, ResultAsync, Effect, EitherAsync, and plain async/await.
Success path (input "42")
Section titled “Success path (input "42")”| Library | ops/sec | Relative |
|---|---|---|
| imperative async/await | 6,927K | 1.00x |
| OOP ResultAsync | 2,528K | 2.74x |
| @oofp/core TaskEither | 1,390K | 4.98x |
| purify-ts EitherAsync | 987K | 7.02x |
| effect Effect | 978K | 7.08x |
| neverthrow ResultAsync | 621K | 11.15x |
| fp-ts TaskEither | 440K | 15.73x |
Failure path (input "-5")
Section titled “Failure path (input "-5")”| Library | ops/sec | Relative |
|---|---|---|
| OOP ResultAsync | 3,372K | 1.00x |
| @oofp/core TaskEither | 1,505K | 2.24x |
| purify-ts EitherAsync | 1,049K | 3.22x |
| neverthrow ResultAsync | 863K | 3.91x |
| fp-ts TaskEither | 483K | 6.99x |
| imperative async/await | 330K | 10.23x |
| effect Effect | 249K | 13.54x |
Analysis
Section titled “Analysis”In async code, the gap between libraries narrows because Promise resolution dominates the cost. However, the pattern holds:
- Imperative async/await is fastest on success (6.9M) because there’s no wrapper overhead
- Imperative collapses on error (330K) — 21x slower than its own success path
- @oofp/core TaskEither is 3.2x faster than fp-ts TaskEither across both paths
- @oofp/core is 4.5x faster than imperative on error paths
- Effect is surprisingly slow on error paths (249K) due to its fiber/runtime machinery
The OOP ResultAsync is fast because it’s minimal: just a Promise<Result<T,E>> wrapper with no runtime overhead. However, this simplicity is also its limitation — it provides only flatMap, map, and match. It has no sequence, traverse, concurrency, tapTEAsync, chainLeft, or any combinator for composing async operations beyond linear chaining. In a real application, you would need to reimplement all of these — at which point you’re building @oofp/core from scratch, one bug at a time.
5. Orchestration Scenarios
Section titled “5. Orchestration Scenarios”Real-world async orchestration patterns extracted from production applications. These benchmarks simulate the kind of composition patterns you find in backend services: sequential pipelines, parallel fan-out, rate-limited processing, error recovery chains, middleware wrappers, and fire-and-forget side effects.
All async operations use Promise.resolve() (microtask boundary) to measure orchestration overhead — the cost of composing async operations — not I/O time. In production, I/O dominates and these differences become negligible, but understanding the overhead reveals the efficiency of each library’s composition model.
5a. Sequential Pipeline — 7 async steps
Section titled “5a. Sequential Pipeline — 7 async steps”Simulates a 7-step processing pipeline where each step depends on the previous: parse -> validate -> enrich -> transform -> save -> notify -> format. This is the most common pattern in service handlers.
| Library | API | ops/sec | Relative |
|---|---|---|---|
| OOP | .flatMap x7 | 7,770K | 1.00x |
| neverthrow | .andThen x7 | 5,498K | 1.41x |
| imperative | await x7 | 1,614K | 4.81x |
| @oofp/core | TE.chain x7 | 631K | 12.32x |
| purify-ts | .chain x7 | 311K | 24.96x |
| fp-ts | TE.chain x7 | 270K | 28.76x |
| effect | Effect.flatMap x7 | 220K | 35.25x |
Analysis: OOP and neverthrow are extremely fast because they use lazy class-based chaining — no Promise is created until the chain is executed. @oofp/core’s TE.chain wraps each step in a thunk (() => Promise), which adds closure allocation overhead but enables lazy evaluation and referential transparency. fp-ts and Effect pay additional costs from their more complex type machinery. Note that OOP’s speed advantage only holds for simple linear chains; the moment you need branching, parallel composition, or error recovery, you must hand-write the logic that @oofp/core provides as composable one-liners.
5b. Parallel Execution — 5 independent fetches
Section titled “5b. Parallel Execution — 5 independent fetches”Simulates fetching 5 independent resources simultaneously (user, permissions, settings, notifications, activity). Tests each library’s applicative/parallel combinator.
| Library | API | ops/sec | Relative |
|---|---|---|---|
| OOP | ResultAsync.all | 7,922K | 1.00x |
| neverthrow | ResultAsync.combine | 5,650K | 1.40x |
| imperative | Promise.all | 1,814K | 4.37x |
| purify-ts | EitherAsync.all | 465K | 17.04x |
| @oofp/core | TE.concurrency | 359K | 22.07x |
| fp-ts | sequenceT(ApplicativePar) | 300K | 26.41x |
| effect | Effect.all (unbounded) | 22K | 360.09x |
Analysis: Effect’s parallel execution is notably slow (22K ops/sec) because Effect.all with concurrency: "unbounded" must instantiate fibers and schedule them through the runtime. @oofp/core uses TE.concurrency (not TE.sequence, which is sequential) — internally it leverages apply, which runs both sides via Promise.all, giving it true parallel semantics. purify-ts edges ahead here because its EitherAsync.all defers to a simpler Promise.all wrapper with less type machinery overhead. The OOP ResultAsync.all is fast because it’s just Promise.all wrapped in a class — but it provides no type-safe heterogeneous tuple support. @oofp/core’s TE.concurrency correctly infers the result tuple type from inputs of different types, while OOP’s version requires manual as unknown[] casts.
5c. Controlled Concurrency — 20 items, max 3 concurrent
Section titled “5c. Controlled Concurrency — 20 items, max 3 concurrent”Simulates processing a batch of 20 items with a concurrency limit of 3 — a common pattern for rate-limited API calls, database connections, or external service throttling. Only @oofp/core and Effect have native concurrency control; all others require manual batching.
| Library | API | ops/sec | Relative | Native? |
|---|---|---|---|---|
| imperative | manual batching | 353K | 1.00x | No |
| OOP | manual batching | 175K | 2.02x | No |
| purify-ts | manual batching | 72K | 4.89x | No |
| @oofp/core | TE.concurrency({concurrency:3}) | 71K | 4.97x | Yes |
| neverthrow | manual batching | 52K | 6.83x | No |
| fp-ts | manual batching | 31K | 11.25x | No |
| effect | Effect.forEach({concurrency:3}) | 17K | 21.08x | Yes |
Analysis: This is where @oofp/core’s native TE.concurrency shines for ergonomics. While imperative manual batching is fastest (no abstraction overhead), @oofp/core provides a one-liner API that matches manually-batched purify-ts performance and is 4.2x faster than Effect’s native Effect.forEach. The manual batching implementations require 10-15 lines of boilerplate that @oofp/core eliminates.
5d. Error Recovery — double failure + fallback + continue
Section titled “5d. Error Recovery — double failure + fallback + continue”Simulates a scenario where a primary service fails, a secondary service also fails, but a fallback succeeds and the pipeline continues. Tests double recovery (chainLeft/orElse x2) followed by continuation.
| Library | API | ops/sec | Relative |
|---|---|---|---|
| OOP | .orElse x2 + .flatMap | 7,691K | 1.00x |
| neverthrow | .orElse x2 + .andThen | 5,899K | 1.30x |
| imperative | nested try/catch | 934K | 8.23x |
| @oofp/core | TE.chainLeft x2 + chain | 682K | 11.28x |
| fp-ts | TE.orElse x2 + chain | 416K | 18.48x |
| purify-ts | .chainLeft x2 + .chain | 401K | 19.19x |
| effect | Effect.catchAll x2 + flatMap | 299K | 25.71x |
Analysis: @oofp/core’s TE.chainLeft provides declarative error recovery that’s 1.6x faster than fp-ts’s TE.orElse and 1.7x faster than purify-ts’s .chainLeft. The imperative approach (nested try/catch) is surprisingly competitive here because the simulated failures use Promise.resolve (no actual exception throwing). In production with real exceptions, the FP advantage would be much larger (as shown in the sync error-handling benchmarks).
5e. Middleware Wrapper — credits check/deduct/rollback
Section titled “5e. Middleware Wrapper — credits check/deduct/rollback”Simulates a middleware pattern from production: check if the user has enough credits, execute the main operation, then deduct credits on success or rollback on failure. This is the _consumeCredits pattern from real applications.
| Library | API | ops/sec | Relative |
|---|---|---|---|
| OOP | method chaining | 7,648K | 1.00x |
| neverthrow | method chaining | 5,830K | 1.31x |
| imperative | try/finally | 2,877K | 2.66x |
| @oofp/core | pipe composition | 722K | 10.59x |
| purify-ts | method chaining | 451K | 16.96x |
| effect | Effect composition | 319K | 24.00x |
| fp-ts | pipe composition | 287K | 26.69x |
Analysis: @oofp/core’s withCredits higher-order function composes cleanly with pipe — the middleware is a reusable function that wraps any TaskEither pipeline. It’s 2.5x faster than fp-ts and 2.3x faster than Effect for the same pattern. The OOP and neverthrow versions are faster in raw throughput but require defining the middleware as a class method or closure manually for every use case. @oofp/core’s approach is composable: withCredits is a generic HOF that can wrap any TaskEither pipeline without modification, while the OOP version is tightly coupled to the specific ResultAsync chain it wraps.
5f. Fire-and-Forget — pipeline + 2 detached side effects
Section titled “5f. Fire-and-Forget — pipeline + 2 detached side effects”Simulates a pipeline that completes its main work and then fires off non-critical side effects (analytics tracking, audit logging) without waiting for them. Only @oofp/core (TE.tapTEAsync) and Effect (Effect.fork) have native fire-and-forget; all others require manual promise.catch(() => {}) patterns.
| Library | API | ops/sec | Relative | Native? |
|---|---|---|---|---|
| OOP | .tap + manual fire | 8,318K | 1.00x | No |
| neverthrow | .andThen + manual fire | 6,122K | 1.36x | No |
| imperative | promise.catch(() => {}) | 1,602K | 5.19x | No |
| @oofp/core | TE.tapTEAsync (native) | 564K | 14.75x | Yes |
| purify-ts | .ifRight + manual fire | 329K | 25.27x | No |
| fp-ts | chainFirst + manual fire | 318K | 26.18x | No |
| effect | Effect.tap + fork (native) | 106K | 78.65x | Yes |
Analysis: @oofp/core’s TE.tapTEAsync provides a clean declarative API for fire-and-forget that’s 5.3x faster than Effect’s fork. Without native support, developers must write error-swallowing wrappers manually — code that’s easy to get wrong (forgotten .catch leads to unhandled rejections). @oofp/core eliminates this class of bugs while maintaining good performance.
Orchestration summary
Section titled “Orchestration summary”Comparing @oofp/core against real, published FP libraries (the OOP baseline is a hand-rolled class, not a usable library — see below):
| Scenario | @oofp/core rank (among published libs) | vs fp-ts | vs Effect | vs neverthrow | Notable |
|---|---|---|---|---|---|
| Sequential chain | 1st | 2.3x faster | 2.9x faster | Neverthrow faster (lazy) but no native patterns | @oofp/core fastest pipe-based lib |
| Parallel execution | 3rd | 1.2x faster | 16x faster | Neverthrow faster (lazy); purify-ts faster (simpler wrapper) | purify-ts edges out @oofp/core due to less type machinery overhead |
| Controlled concurrency | 1st | 2.3x faster | 4.2x faster | Neverthrow slower (manual) | @oofp/core has native API; neverthrow/purify-ts don’t |
| Error recovery | 1st | 1.6x faster | 2.3x faster | Neverthrow faster (lazy) | Declarative chainLeft x2 composition |
| Middleware wrapper | 1st | 2.5x faster | 2.3x faster | Neverthrow faster but tightly coupled | @oofp/core composable HOF via pipe |
| Fire-and-forget | 1st | 1.8x faster | 5.3x faster | Neverthrow faster but manual | @oofp/core has native API; neverthrow/purify-ts don’t |
Key takeaways:
- @oofp/core is the fastest published pipe-based FP library across 5 of 6 orchestration patterns — 1.2-2.5x faster than fp-ts and 2.3-16x faster than Effect. In parallel execution, purify-ts edges ahead (465K vs 359K) due to its simpler
Promise.allwrapper with less type machinery overhead. - @oofp/core is the fastest published FP library with native orchestration support.
TE.concurrencyandTE.tapTEAsyncare one-liner APIs that neverthrow, purify-ts, and fp-ts simply don’t have. Effect has native equivalents but runs 4-16x slower. - Native APIs matter for correctness, not just ergonomics. A forgotten
.catch()in manual fire-and-forget causes unhandled rejections in production. Manual concurrency batching is 10-15 lines of error-prone code per call site. @oofp/core eliminates both classes of bugs. - The OOP/neverthrow raw speed advantage is synthetic. Their lazy class-based chaining avoids Promise allocation until execution, producing impressive benchmark numbers. But neverthrow lacks native concurrency and fire-and-forget, and the OOP Result is a ~50-line class with no ecosystem — neither is a practical replacement for @oofp/core’s full combinator set.
- For real production backends, the orchestration overhead is negligible compared to I/O (database queries take 1-50ms; the difference between 631K and 7,770K ops/sec is 0.001ms vs 0.0001ms per operation). Choose based on API ergonomics, type safety, and native pattern support — areas where @oofp/core excels.
Overall Rankings
Section titled “Overall Rankings”Among published FP libraries
Section titled “Among published FP libraries”Ranking real, published npm packages across all 16 benchmark scenarios (lower is better):
| Rank | Library | Strengths | Weaknesses |
|---|---|---|---|
| 1 | @oofp/core | Fastest pipe-based FP lib in 5/6 orchestration scenarios, zero deps, native concurrency + fire-and-forget + Reader DI, 1.2-2.5x faster than fp-ts, 2.3-16x faster than Effect | Slower than method-chaining libs on sync hot paths; purify-ts edges ahead on parallel execution |
| 2 | neverthrow | Fast lazy chaining, simple API, good TS integration | No native concurrency/fire-and-forget, slower creation, no pipe composition |
| 3 | purify-ts | Fast sync speed, method chaining | Smaller ecosystem, no native orchestration patterns, limited async combinators |
| 4 | effect | Rich ecosystem, native concurrency, scheduler | 16-360x slower than others on parallel/fire-and-forget, large bundle, steep learning curve |
| 5 | fp-ts | Mature, extensive type classes | 15-33x slower than OOP, no native orchestration, effectively EOL |
Including theoretical baselines
Section titled “Including theoretical baselines”| Baseline | Role in benchmarks | Why it’s not a practical choice |
|---|---|---|
| OOP Result (hand-rolled) | Performance ceiling — shows maximum possible speed | Not a library. No npm package, no docs, no ecosystem, no type-safe combinators. Every pattern beyond map/chain must be reimplemented manually. See Why not hand-roll your own? |
| Imperative (try/catch) | Native JS baseline | Catastrophic on error paths (57-74x slower). No composable error handling, no type safety for errors |
When to choose what
Section titled “When to choose what”- CPU-bound hot loops with no errors: Use imperative code. The overhead of any FP wrapper is measurable.
- Application logic with expected errors: Use any FP library. Even the “slowest” (fp-ts at ~1M ops/sec for pipelines) is fast enough for virtually all applications. The consistent error-path performance is worth far more than the happy-path overhead.
- Backend services with async orchestration: @oofp/core offers the best balance — native
TE.concurrency,TE.tapTEAsync, andTE.chainLeftcover the most common patterns without manual boilerplate, at 2.3-16x faster than Effect. - New projects choosing an FP library: @oofp/core offers a good balance of performance, API simplicity, and zero dependencies. purify-ts is faster on sync paths if you prefer method chaining. Effect is the most feature-rich but comes with a large bundle and complex runtime.
- Migrating from fp-ts: @oofp/core has a similar pipe-based API and is 1.6-6x faster across all benchmarks, with native support for patterns fp-ts lacks entirely.
- “I’ll just hand-roll my own Result class”: You’ll get great benchmark numbers on simple chains, but as your application grows you’ll need concurrency control, fire-and-forget, error recovery, middleware, Reader-based DI, traversals, and dozens of combinators. You’ll end up building a library — untested, undocumented, and maintained by your team alone. See the next section.
Why not hand-roll your own?
Section titled “Why not hand-roll your own?”The hand-rolled OOP Result tops nearly every benchmark. So why not just copy the ~50-line class and use it?
Because benchmarks measure the simplest case, and real applications don’t stay simple.
What the OOP Result provides
Section titled “What the OOP Result provides”map,flatMap,match(fold)Result.ok(value),Result.err(error)ResultAsyncwithall(Promise.all wrapper)
That’s it. ~50 lines of code, ~3 methods per class.
What @oofp/core provides that a hand-rolled Result doesn’t
Section titled “What @oofp/core provides that a hand-rolled Result doesn’t”| Capability | @oofp/core | Hand-rolled OOP |
|---|---|---|
chain, map, flatMap | Yes | Yes |
chainLeft / error recovery | Yes | Must implement |
tapTEAsync / fire-and-forget | Yes (native) | Must implement + handle .catch() |
concurrency({concurrency: N}) | Yes (native) | Must implement batching logic |
sequence / parallel with type-safe tuples | Yes | Must implement + cast to unknown[] |
traverse / mapping + sequencing | Yes | Must implement |
tryCatch / safe Promise wrapping | Yes | Must implement |
fromNullable, fromPredicate | Yes | Must implement |
bimap, mapLeft | Yes | Must implement |
| Reader monad (dependency injection) | RTE<R, E, A> | Not available |
Reader + concurrency (RTE.concurrency) | Yes | Not available |
Reader + fire-and-forget (RTE.tapRTEAsync) | Yes | Not available |
pipe, flow, compose | Yes | Not available |
| Maybe, State, IO, Task monads | Yes | Not available |
| Sub-path exports / tree-shaking | Yes | N/A |
| Published npm package with semver | Yes | No |
| Documentation | Yes | No |
| Test suite | Yes (296+ tests) | No |
| TypeScript type inference for combinators | Fully tested | Whatever you write |
The real cost of hand-rolling
Section titled “The real cost of hand-rolling”A hand-rolled Result class works great on day 1. By month 3, your team has:
- Reimplemented 15+ combinators —
chainLeft,tapTE,sequence,traverse,tryCatch,fromNullable,bimap,mapLeft, etc. Each one is 5-20 lines, each one needs tests, each one has edge cases. - Written manual concurrency batching in 8 different files with subtle bugs (off-by-one in batch slicing, missing error propagation, no backpressure).
- Forgotten
.catch()on fire-and-forget in 2 places, causing intermittent unhandled rejection crashes in production. - No dependency injection — passed
configandloggeras function arguments through 6 layers of calls instead of using Reader. - No documentation — new team members spend 2 days understanding the custom Result class and its undocumented combinators.
@oofp/core is 0 dependencies, tree-shakeable, and tested. The performance gap between 722K ops/sec (@oofp/core middleware) and 7,648K ops/sec (hand-rolled OOP) is 0.001ms vs 0.0001ms per operation — invisible in any application where a single database query takes 1-50ms.
The question isn’t “can I hand-roll something faster?” — it’s “do I want to maintain my own FP library?”
Raw data
Section titled “Raw data”All numbers are from a single benchmark run. Vitest reports ±RME (relative margin of error) for each measurement. For the most accurate comparison, run the benchmarks yourself:
pnpm --filter @oofp/benchmarks benchSource code: packages/benchmarks/comparison/
@oofp/focal — Optics vs Imperative
Section titled “@oofp/focal — Optics vs Imperative”Performance and maintainability comparison of @oofp/focal against plain imperative code, across four scenarios extracted from a real normalized-store API response (LinkedIn Voyager Dash format).
Methodology
Section titled “Methodology”- Runtime: Node.js 20, Apple Silicon (ARM64)
- Tool: Vitest bench
- Fixture: A
NormalizedStoreResponsewith ~22 entities across 8 types (Profile, Position, PositionGroup, Skill, Certification, Language, Education, plus unknown entities as noise) - Candidates: Three implementations per scenario — Focal API (high-level), optics (low-level Lens/Prism/Traversal), and imperative
The fixture models the exact shape of a normalized API response: a heterogeneous included[] array where each entry is discriminated by a $type string — the same pattern as Redux normalized state or JSON:API.
How to reproduce
Section titled “How to reproduce”pnpm --filter @oofp/benchmarks exec vitest bench --run comparison/focalSource code: packages/benchmarks/comparison/focal/
Scenario 1: Read access
Section titled “Scenario 1: Read access”Task: extract firstName from a ProfileEntity.
| Implementation | Relative |
|---|---|
imperative profile.firstName | 1.0x (reference) |
optics Lens.prop('firstName') | ~1.7x slower |
Focal API from().prop().get() | ~6x slower |
Direct property access will always be the fastest for simple reads — that is expected. What this measures is the constant overhead of the abstraction itself, not a scaling cost. Optic chains do not get slower as nesting depth increases; they compose once.
Scenario 2: Deep immutable update
Section titled “Scenario 2: Deep immutable update”Task: produce a new NormalizedStoreResponse where every ProfileEntity’s firstName is replaced — three levels of nesting deep.
| Implementation | Relative |
|---|---|
imperative map + spread chain | 1.0x (reference) |
optics Traversal.compose + modify | ~4.7x slower |
Focal API from().elements().match().prop().modify() | ~8x slower |
Both do the same structural traversal. The imperative approach uses two levels of spread ({ ...response, included: response.included.map(...) }). The Focal API expresses the entire path as a single declarative pipe with no manual spread — the optic machinery handles the immutable reconstruction.
Scenario 3: Discriminated union filter
Section titled “Scenario 3: Discriminated union filter”Task: collect only SkillEntity items from a heterogeneous IncludedEntity[].
| Implementation | Relative |
|---|---|
imperative array.filter(isSkill) | 1.0x (reference) |
optics Traversal.each + Prism.match + collect | ~5.6x slower |
Focal API from().elements().match().collect() | ~8.2x slower |
This is the scenario where the maintainability argument is strongest. The imperative version requires a manually-written type guard (function isSkill(e): e is SkillEntity) per variant — 7 guards for 7 types, each a potential source of bugs (wrong $type string, missing field check). Focal.match derives discrimination from the TypeScript type directly: adding a new variant means one new match(...) call, not a new guard function.
Scenario 4: Full domain mapping
Section titled “Scenario 4: Full domain mapping”Task: transform NormalizedStoreResponse → typed CandidateProfile domain object, discriminating and collecting all 7 entity types, extracting specific fields, and handling optional values.
| Implementation | Relative |
|---|---|
imperative filter/map/find + optional chaining | 1.0x (reference) |
| optics (Lens + Prism + Traversal compositions) | ~7.3x slower |
| Focal API (pipe chains from root) | ~13x slower |
This is the most realistic scenario. The imperative version has 7 type guards, 9 .filter() calls, and optional chaining scattered across multiple locations. When the schema changes, every one of those call sites is a potential update target.
The Focal API is slowest here because each pipe(Focal.from<T>(), ...) is constructed at call time — this is the idiomatic pattern (maximum composability, no shared module-level state). The low-level optics version pre-builds its compositions as module-level constants and reuses them, which accounts for roughly half the gap.
Performance vs maintainability
Section titled “Performance vs maintainability”The raw numbers tell one story. The time per call tells a different one:
| Implementation | ops/sec (approx.) | Time per call |
|---|---|---|
| imperative | ~1,170,000 | ~0.86 µs |
| optics (pure) | ~160,000 | ~6.2 µs |
| Focal API | ~89,000 | ~11.2 µs |
A typical HTTP response from a backend service has 50,000–200,000 µs of network latency. The difference between the imperative and Focal API implementations of domainMapping — a transformation that runs once per API response — is ~10 µs. That is < 0.02% of the total response time budget.
This is exactly the scenario Donald Knuth had in mind: “premature optimization is the root of all evil”. The full quote adds: “yet we should not pass up our opportunities in that critical 3%”. A one-per-response transformation is not the critical 3%.
Static maintainability metrics
Section titled “Static maintainability metrics”Beyond runtime performance, the three implementations can be compared on objective code-quality signals derived from static analysis of their source files:
| Metric | imperative | optics (pure) | Focal API |
|---|---|---|---|
Type guards (function isXxx(): e is T) | 7 | 0 | 0 |
Lines with spread operators (...) | 2 | 1 | 0 |
Schema coupling points (hard-coded $type strings + field names) | 7 | 21 | 22 |
.filter() call sites | 9 | 1 | 1 |
| Reusable optic/focal constants at module level | 0 | 10 | 7 |
Reading the table:
- Type guards (lower is better): The imperative approach requires one manually-written guard per union variant — 7 for 7 types. Both optic approaches derive discrimination from the TypeScript type: zero guards. Each guard in the imperative version is a potential bug (wrong string literal, wrong field check) that the compiler cannot catch.
- Spread operators (lower is better): The Focal API produces zero —
modify + runhandles the full immutable reconstruction. The imperative version’s 2 spreads represent the manual nesting that grows with schema depth. - Schema coupling points: Counterintuitively, both optic approaches score higher than imperative here because field names appear as string arguments to
prop("firstName"),prop("title"), etc. The key difference is centralization: those strings appear in one definition that all consumers share, rather than being duplicated across every call site. - Filter calls (lower is better): The imperative version calls
.filter()9 times across the four scenarios. Both optic approaches reduce this to 1 (a single.filter()to eliminateundefinedvalues from an optional field). - Reusable compositions (higher is better): The low-level optics version defines 10 module-level constants (prisms and traversals) that can be imported and composed anywhere in the codebase. The Focal API has 7 (the
T_*string constants). The imperative version has none — every function rebuilds its logic from scratch.
Scenario 5: Scaling
Section titled “Scenario 5: Scaling”Does the Focal API overhead grow with collection size, or is it a constant multiplier?
Fixtures were generated by repeating the base included[] array at four scales (×1 = 22 entities, ×10 = 220, ×50 = 1,100, ×250 = 5,500). The table shows the imperative/Focal API speed ratio at each scale — if the abstraction had super-linear cost, the ratio would grow. If the overhead is a constant multiplier, it would stay flat. What actually happens is neither: the ratio shrinks.
filterByType — imperative / Focal API ratio by scale:
| Scale | Entities | Ratio |
|---|---|---|
| ×1 | 22 | 7.6x |
| ×10 | 220 | 6.1x |
| ×50 | 1,100 | 5.8x |
| ×250 | 5,500 | 2.6x |
deepUpdate:
| Scale | Entities | Ratio |
|---|---|---|
| ×1 | 22 | 8.5x |
| ×10 | 220 | 6.8x |
| ×50 | 1,100 | 6.2x |
| ×250 | 5,500 | 3.9x |
domainMapping:
| Scale | Entities | Ratio |
|---|---|---|
| ×1 | 22 | 12.8x |
| ×10 | 220 | 8.2x |
| ×50 | 1,100 | 7.5x |
| ×250 | 5,500 | 7.4x |
Analysis: the gap closes as collection size grows. The reason is that the Focal API has a fixed cost at call time — constructing the pipe — on top of the O(n) traversal cost that both approaches share. As n increases, the traversal dominates and the fixed construction cost becomes a smaller fraction of the total. There is no asymptotic penalty: the Focal API scales at least as well as imperative code for large collections. The overhead is a constant that matters less and less as the data grows.
Scenario 6: Route reuse
Section titled “Scenario 6: Route reuse”If the fixed cost is pipe construction, pre-building the route as a module-level constant and only applying the terminator at call time should recover most of that cost:
// Idiomatic — full pipe constructed on every callpipe(Focal.from<ProfileEntity>(), Focal.prop("firstName"), Focal.get(profile))
// Pre-built — route is a module-level constant, only terminator variesconst firstNameFocal = pipe(Focal.from<ProfileEntity>(), Focal.prop("firstName"));pipe(firstNameFocal, Focal.get(profile))Four candidates were measured: Focal idiomatic, Focal pre-built, optics pre-built (low-level Lens/Prism/Traversal), and imperative.
Read simple (get firstName):
| Candidate | Relative to idiomatic |
|---|---|
| Focal API — idiomatic | 1.0x (reference) |
| Focal API — pre-built | 2.7x faster |
| Optics — pre-built | 3.8x faster |
| Imperative | 5.5x faster |
Collect (filterByType):
| Candidate | Relative to idiomatic |
|---|---|
| Focal API — idiomatic | 1.0x (reference) |
| Focal API — pre-built | 1.28x faster |
| Optics — pre-built | 1.37x faster |
| Imperative | 7.3x faster |
Modify + run (deepUpdate):
| Candidate | Relative to idiomatic |
|---|---|
| Focal API — idiomatic | 1.0x (reference) |
| Focal API — pre-built | 1.46x faster |
| Optics — pre-built | 1.64x faster |
| Imperative | 8.5x faster |
Analysis: pre-building a route always helps, but how much depends on what dominates. For simple reads, construction is the main cost — pre-building gives a 2.7x improvement. For traversals over larger collections, the iteration cost dominates and pre-building gives only 1.3–1.5x. In every case, Focal pre-built practically matches optics pre-built — the residual gap is just the Focal wrapper over the underlying optic, which is negligible.
The practical implication: if a Focal route appears in a measured hot path, extract it to a module-level constant. For all other application code, the idiomatic per-call pattern is perfectly sufficient.
When the performance penalty matters
Section titled “When the performance penalty matters”The ~8-13x gap is irrelevant for:
- Any transformation that runs once per API response or user action
- Any code path where network I/O, database queries, or rendering dominate
- Any throughput below ~1M calls/sec in a hot loop
The gap does matter for:
- Hot paths with measured CPU bottlenecks (a profiler shows this function in the top 3%)
- Streaming parsers processing millions of records per second
- WebGL / game loops running at 60hz with tight per-frame budgets
For everything else — which is virtually all application code — the Fowler principle applies: code is read far more often than it is written, and the cost of change is the metric that matters in production.