In the original test, the eval generator used a random value for the job ID, leaving the code path for duplicate blocked evals unexercised.
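
A minimal sketch of the idea, using hypothetical `blockedTracker` and `eval` types rather than the real tracker: pinning the job ID to a constant makes the second eval collide with the first, so the duplicate-handling branch actually runs.

```go
package scheduler_test

import "testing"

// eval and blockedTracker are hypothetical stand-ins for the real eval
// struct and blocked-evals tracker exercised by the test.
type eval struct{ ID, JobID string }

type blockedTracker struct {
	byJob map[string]*eval // one blocked eval tracked per job ID
	dups  []*eval          // duplicates seen for an already-blocked job
}

func (b *blockedTracker) Block(e *eval) {
	if _, ok := b.byJob[e.JobID]; ok {
		// Duplicate path: this job already has a blocked eval.
		b.dups = append(b.dups, e)
		return
	}
	b.byJob[e.JobID] = e
}

func TestBlocked_DuplicateJob(t *testing.T) {
	b := &blockedTracker{byJob: map[string]*eval{}}

	// Pin the job ID instead of generating a random one so the second
	// eval hits the duplicate blocked-eval path.
	const jobID = "example-job"
	b.Block(&eval{ID: "eval-1", JobID: jobID})
	b.Block(&eval{ID: "eval-2", JobID: jobID})

	if len(b.dups) != 1 {
		t.Fatalf("expected 1 duplicate blocked eval, got %d", len(b.dups))
	}
}
```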
* add metrics for blocked eval resources (see the sketch below)
* docs: add new blocked_evals metrics
* fix to call `pruneStats` instead of `stats.prune` directly
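
A rough sketch of what emitting such gauges could look like with the go-metrics library; the `blockedResources` type, the `emitBlockedStats` helper, and the metric key names are assumptions for illustration, not the PR's actual names.

```go
package main

import (
	metrics "github.com/armon/go-metrics"
)

// blockedResources is a hypothetical aggregate of the resources requested
// by currently blocked evaluations.
type blockedResources struct {
	CPU      int // total CPU (MHz) requested by blocked evals
	MemoryMB int // total memory (MB) requested by blocked evals
}

// emitBlockedStats publishes gauges describing blocked eval resources.
func emitBlockedStats(b blockedResources) {
	metrics.SetGauge([]string{"nomad", "blocked_evals", "cpu"}, float32(b.CPU))
	metrics.SetGauge([]string{"nomad", "blocked_evals", "memory"}, float32(b.MemoryMB))
}
```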