@bcoe @novemberborn The self-coverage stuff is actually a pretty interesting profiling tool when used in conjunction with `npm link`. Since we automatically use `index.covered.js` if it exists, we can get a better idea of how our code behaves IRL. All these screenshots were generated by running the AVA test suite:
First run, cold cache:
We can see that out of 50 forked processes, only 3 ever needed to create an instrumenter (meaning the other 47 simply pulled from the cache, even on the first run). This matches the speedups I am seeing: you get the majority of the caching benefit even on your first run. The second run may be faster, but only imperceptibly so.
Also notice that the instrumenter is only accessed 14 times, and AVA has 14 instrumented files — so each file was instrumented exactly once, and we never hit a race condition on the cache (two processes seeing a cache miss, instrumenting, and then racing to write the result back). Since `tap` executes tests synchronously and serially, this makes sense. However, it means we aren't exercising a potential failure mode here. We will definitely need to find a way to simulate race conditions as part of our test suite.
Second run, no cache misses:
We can see that the instrumenter is never created — just a validation that two subsequent runs over identical content never produce a cache miss.
Also, using self-coverage on a real-world test suite produces a heat map that helps identify the best places to focus optimization efforts.
Interesting approach using Istanbul to find hot code paths!