Handlebars - the next generation

Benchmarks

The following benchmarks compare the performance of the original Handlebars.js implementation with the implementation in this repo.

These benchmarks are generated as part of the CI pipeline.
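
Conceptually, each benchmark renders the same template with both implementations and measures how long every run takes. The snippet below is only a minimal sketch of such a comparison, using the original handlebars package and the tinybench library; the template, the input data, and the way the NG implementation would be hooked in are assumptions for illustration, not the actual CI setup.

```ts
import Handlebars from "handlebars";
import { Bench } from "tinybench";

// Hypothetical template and input data; the real benchmark templates live in
// the subfolders of this directory.
const source = "Hello {{name}}! {{#each items}}<li>{{this}}</li>{{/each}}";
const data = { name: "world", items: ["a", "b", "c"] };

const bench = new Bench({ time: 500 });

// Original Handlebars.js: compile once, then measure only the rendering.
const renderOriginal = Handlebars.compile(source);
bench.add("handlebars (original)", () => {
  renderOriginal(data);
});

// The NG implementation would be registered as a second task in the same way,
// e.g. bench.add("handlebars-ng", () => renderNg(data)) -- its import path and
// API are deliberately not shown here, since they are not part of this sketch.

await bench.run();
console.table(bench.table());
```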

Disclaimer

The benchmarks make it look like Handlebars NG is much faster, but keep in mind that only very basic features have been implemented so far. I expect it to become slower as more features are added.

Please also be aware that micro-benchmarks like these may not be meaningful. I am not sure how valuable the results really are, since the standard deviation is very high in some tests.

How to read the data

  • The numbers in the table are x̄ (the mean) and p99 (the 99th percentile) of the runtime of each task.
  • The charts show the range [ mean, p99 ].
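
To illustrate what these two numbers mean, here is a small sketch that computes them from a list of measured runtimes. The function names and sample values are made up for this example and are not part of the benchmark code.

```ts
// x̄: arithmetic mean of the measured runtimes.
function mean(samples: number[]): number {
  return samples.reduce((sum, x) => sum + x, 0) / samples.length;
}

// p99: the value below which 99% of the measured runtimes fall
// (nearest-rank percentile on the sorted samples).
function p99(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil(sorted.length * 0.99) - 1);
  return sorted[index];
}

const runtimesMs = [1.2, 1.3, 1.1, 5.4, 1.2]; // hypothetical task runtimes in ms
console.log(`x̄ = ${mean(runtimesMs).toFixed(2)} ms, p99 = ${p99(runtimesMs).toFixed(2)} ms`);
```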

Parser

Runner

Contributing tests

Although this project is still at a very early stage, it would be nice to have some real-life templates to run the benchmarks against. If you want to contribute tests, feel free to create an MR: just add a subfolder to this folder and put your files there.

The template of a test must be released under an Open Source license, so do not use templates from your private projects. Apart from that, have a look at generate-benchmarks.ts to see how the benchmarks are generated.
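
As a purely illustrative example, a contributed test could be a subfolder like the one below. The folder and file names are hypothetical, so check generate-benchmarks.ts for what is actually expected.

```
(this folder)/
  my-real-life-template/    <- hypothetical subfolder name
    template.hbs            <- the Handlebars template (Open Source licensed)
    data.json               <- input data used to render the template
```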

You can use the same tools to run benchmarks yourself (once the project has advanced a little and has been published to npm…).