High Performance Validation Benchmarks for JavaScript
High Performance Benchmarks for JavaScript Validation Libraries
This project benchmarks a variety of JavaScript runtime type-checking libraries to assess their validation performance across a wide range of data structures. It also compares JIT and AOT approaches by measuring statically generated JavaScript validation routines against those dynamically evaluated at runtime.
These benchmarks measure validation throughput for a number of common JavaScript data structures. The project provides two datasets, one correct and the other incorrect, where the incorrect dataset is used to trip the error-handling paths within each validator and test early-return performance. Data for each test is hardcoded so as not to introduce unnecessary variance in the results, and to keep the data being benchmarked clearly visible. Additionally, each benchmark is run within an isolated Node process to avoid earlier benchmarks breaking optimizations for subsequent ones. All benchmarks can be inspected under the benchmark/validators directory.
The following JavaScript validation packages are benchmarked:
Package | Identifier | Compilation | Assertion | Description |
---|---|---|---|---|
ts-runtime-checks | tsrc | AOT | Structural | A TypeScript transformer which automatically generates validation code from your types. |
typescript-is | tsis | AOT | Structural | TypeScript transformer that generates run-time type-checks. |
typia | typia | AOT | Structural | Super-fast runtime validator (type checker) with only one line. |
typebox | typebox | JIT + AOT | JSON Schema | JSON Schema Type Builder with Static Type Resolution for TypeScript. |
ajv | ajv | JIT + AOT | JSON Schema | The fastest JSON Schema validator. |
zod | zod | Dynamic | Structural | TypeScript-first schema validation with static type inference. |
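The compilation strategies in the table above can be contrasted with a hypothetical sketch (not code from any of the listed libraries). A dynamic validator interprets its schema on every call, whereas a JIT validator compiles the schema once into a specialized function; AOT tools emit equivalent specialized code at build time instead of at runtime.

```javascript
// Hypothetical sketch contrasting evaluation strategies.
// Assumes a simple schema mapping property names to typeof strings.
const schema = { x: 'number', y: 'number' }

// Dynamic strategy: the schema is walked on every call.
function validateDynamic(schema, value) {
  if (typeof value !== 'object' || value === null) return false
  for (const key of Object.keys(schema)) {
    if (typeof value[key] !== schema[key]) return false
  }
  return true
}

// JIT strategy: the schema is compiled once into straight-line code.
function compile(schema) {
  const checks = Object.keys(schema)
    .map(key => `typeof value.${key} === '${schema[key]}'`)
    .join(' && ')
  return new Function('value',
    `return typeof value === 'object' && value !== null && ${checks}`)
}

const validateCompiled = compile(schema)
console.log(validateDynamic(schema, { x: 1, y: 2 })) // true
console.log(validateCompiled({ x: 1, y: 2 }))        // true
console.log(validateCompiled({ x: 1, y: '2' }))      // false
```

The compiled function avoids the per-call schema walk, which is one reason JIT and AOT validators tend to dominate these benchmarks.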
The following commands are available after running `npm install`:
```bash
$ hammer task <lib-identifier> <iteration-count>  # Runs a specific benchmark with the given iteration
                                                  # count. Can be used for testing benchmarks without
                                                  # running the complete suite.

$ npm run benchmark <iteration-count>             # Runs all benchmarks with an optional iteration
                                                  # count. If not specified, the default is 10 million
                                                  # iterations. Running the benchmark will write
                                                  # results to reporting/results/<lib>/<test>.json.

$ npm run reporting                               # Builds and minifies the reporting website and serves
                                                  # it on port 5000. This task will also capture the
                                                  # website's current benchmark results, which are written
                                                  # to the project root (screenshot.png).

$ npm run format                                  # Runs a prettier pass over the project.

$ npm run clean                                   # Removes the target build directory.
```
The following shows the comparative performance results for the correct dataset. Results are shown as estimated operations per second. The image below was generated on the following hardware profile.
Component | Description |
---|---|
Processor | AMD Ryzen 7 3700X 8-Core Processor, 3600 MHz |
Memory | 16GB |
Operating System | Windows 10 |
Node | v16.17.1 |
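Operations per second can be estimated with a simple timing loop. The following is a minimal sketch assuming a Node environment; it is not this project's actual harness, and the `opsPerSecond` helper is hypothetical.

```javascript
// Hypothetical sketch of estimating operations per second in Node.
// Runs fn for a fixed iteration count and divides by elapsed time.
function opsPerSecond(fn, iterations = 1_000_000) {
  const start = process.hrtime.bigint()
  for (let i = 0; i < iterations; i++) fn()
  const elapsedNs = Number(process.hrtime.bigint() - start)
  return Math.round(iterations / (elapsedNs / 1e9))
}

const value = { x: 1, y: 2 }
const validate = v => typeof v === 'object' && v !== null &&
  typeof v.x === 'number' && typeof v.y === 'number'

console.log(`${opsPerSecond(() => validate(value))} ops/sec`)
```

In practice, results vary with warm-up, garbage collection, and optimizer behavior, which is why the project isolates each benchmark in its own Node process.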
Full results can be located here.
This project is open to community contribution.