Only tangentially related to the post, but I don't see it mentioned there: what do people use to run benchmarks on CI? If I understand correctly, standard OSS GH Actions/Azure Pipelines runners aren't going to be uniform enough to provide useful benchmark results. What does the rust project use? What do other projects use?
> what do people use to run benchmarks on CI?
Typically, you purchase/rent a server that does nothing but sequentially run queued benchmarks (the size/performance of this server doesn't really matter, as long as the performance is consistent), then sends each report somewhere for hosting and processing. This could be triggered by something running in CI, and the CI job could wait for the results if benchmarking is an important part of your workflow. Or, if your CI setup allows it, you tag one of the nodes as a "benchmarking" node that only runs jobs tagged "benchmark", but I don't think many of the hosted setups allow this; I've mostly seen it in self-hosted CI setups.
But CI and benchmarks really shouldn't be run on the same host.
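A minimal sketch of that queue-based setup (all names hypothetical): a single worker drains a queue of commits one at a time, runs the suite for each, and ships the report. The sequential loop is the whole point, since nothing else competes for the machine while a benchmark runs.

```python
import queue

def run_benchmarks(commit: str) -> dict:
    """Stub for 'check out this commit and run the suite'.

    A real version would shell out to git and your benchmark harness
    (e.g. via subprocess) and parse the harness output; here it just
    returns a fake number so the sketch is self-contained.
    """
    return {"commit": commit, "wall_time_s": 1.23}

def publish(report: dict) -> None:
    # Real version: POST the report to wherever results are hosted/processed.
    print(report)

def worker(jobs: "queue.Queue") -> None:
    # Sequential on purpose: one benchmark at a time, nothing else
    # running on the box, so results stay comparable run to run.
    while True:
        commit = jobs.get()
        if commit is None:  # sentinel: shut down the worker
            break
        publish(run_benchmarks(commit))

jobs: "queue.Queue" = queue.Queue()
for c in ["abc123", "def456", None]:
    jobs.put(c)
worker(jobs)
```

The CI job itself would just enqueue the commit (or post to the runner's API) and poll for the published report.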
> What does the rust project use?
It's not clear exactly where the Rust benchmark "perf-runner" is hosted, but here are the specifications of the machine at least: https://github.com/rust-lang/rustc-perf/blob/414230abc695bd7...
> What do other projects use?
Essentially what I described above, a dedicated machine that runs benchmarks. The Rust project seems to do it via GitHub comments (as I understand https://github.com/rust-lang/rustc-perf/tree/master/collecto...), others have API servers that respond to HTTP requests from CI/chat, others have remote GUIs that trigger the runs. I don't think there is a single solution that everyone/most are using.
Do I really need dedicated hardware? How bad is a VPS? I mean, it makes sense, but has anyone measured how big the variance is on a VPS?
Dedicated hardware doesn't need to be expensive! Hetzner has dedicated servers for around 40 EUR/month, and Vultr for 30 EUR/month.
VPSes kind of don't make sense because of noisy neighbors, and since the noise fluctuates as neighbors come and go, I don't think there's a single measurement you can take that applies everywhere.

For example, you could rent a VPS at AWS and start measuring variance, and it looks fine for two months until suddenly it doesn't, because that day you got a noisy neighbor. Then you try a VPS at Google Cloud and it's noisy from day one.

You really don't know until you allocate the VPS and leave it running, but that day could always come, and benchmark results are something you really need to be able to trust.
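If you do want to quantify the noise on a given box, one common approach (a sketch, not a rigorous methodology) is to time a fixed CPU-bound workload many times and track the coefficient of variation; a CoV that jumps from one day to the next is the noisy-neighbor signal:

```python
import statistics
import time

def fixed_workload() -> float:
    """Time a deterministic CPU-bound task with a monotonic clock."""
    start = time.perf_counter()
    total = 0
    for i in range(200_000):
        total += i * i
    _ = total  # keep the loop from being trivially dead code
    return time.perf_counter() - start

def coefficient_of_variation(samples: list) -> float:
    # stdev / mean: a unitless noise measure you can compare day to day.
    return statistics.stdev(samples) / statistics.mean(samples)

samples = [fixed_workload() for _ in range(30)]
print(f"CoV: {coefficient_of_variation(samples):.2%}")
```

Run that on a schedule and log the CoV over weeks; a dedicated box should hold steady where a VPS may not.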
Rust uses a dedicated consistent server that runs exclusively benchmark loads, so that nothing else is interfering with the benchmark results.
A solution is mentioned in the article, but perhaps obliquely:
> while I also wanted to measure hardware counters
As I understand it, hardware counters would remain consistent in the face of the normal noisy CI runner.
The article talks about using Cachegrind (via the iai crate) and Linux perf events.
I use iai in one of my projects to run performance diffs for each commit.
> As I understand it, hardware counters would remain consistent in the face of the normal noisy CI runner.
With cloud CI runners you'd still have issues with hardware differences, e.g. different CPUs counting slightly differently. memcpy behavior is hardware-dependent! And if you're measuring multi-threaded programs then concurrent algorithms may be sensitive to timing. And that's just instruction counts. Other metrics such as cycle counts, cache misses or wall-time are far more sensitive.
To make sure we're not slowly accumulating <1% regressions hidden in the noise and to be able to attribute regressions to a specific commit we need really low noise levels.
So for reliable, comparable benchmarks, dedicated hardware is needed.
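To illustrate why the noise floor matters (with made-up numbers): your regression threshold has to sit above the runner's noise, so any per-commit regression smaller than the noise slips through, and ten of them compound into something you only notice much later and can no longer attribute to a commit.

```python
def is_regression(baseline: float, measured: float, noise_floor: float) -> bool:
    """Flag a regression only if the relative change exceeds the noise
    floor; a lower threshold would drown the report in false positives."""
    return (measured - baseline) / baseline > noise_floor

# On a noisy runner (say 5% noise) a 1% per-commit regression is invisible...
assert not is_regression(100.0, 101.0, noise_floor=0.05)

# ...but ten of them compound to ~10.5%, visible only long after the fact.
assert is_regression(100.0, 100.0 * 1.01**10, noise_floor=0.05)

# A dedicated box with ~0.5% noise catches each one as it lands.
assert is_regression(100.0, 101.0, noise_floor=0.005)
```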
> With cloud CI runners you'd still have issues with hardware differences
For my project it really is the diff of each commit: I start from a parent commit that isn't part of the PR and re-measure it, then measure each new commit against that. This should cancel out changes in hardware as well as things like Rust versions (if those aren't locked in via rustup).
The rest of your points are valid of course, but this was a good compromise for my OSS project where I don’t wish to spend extra money.
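That scheme can be sketched like this (stubbed measurements, hypothetical numbers): the baseline is measured fresh on the same runner in the same job, rather than pulled from a historical record taken on different hardware, so each commit is reported as a relative delta.

```python
def measure(commit: str) -> int:
    """Stub for 'run the instruction-count benchmark at this commit'.

    Hypothetical counts keyed by commit name, standing in for what
    something like iai/Cachegrind would report.
    """
    counts = {"parent": 1_000_000, "c1": 1_002_000, "c2": 990_000}
    return counts[commit]

def relative_diffs(parent: str, commits: list) -> dict:
    # Key point: the baseline is measured *now*, on this runner,
    # so hardware and toolchain differences cancel out of the ratio.
    base = measure(parent)
    return {c: (measure(c) - base) / base for c in commits}

diffs = relative_diffs("parent", ["c1", "c2"])
for commit, delta in diffs.items():
    print(f"{commit}: {delta:+.2%}")
```

The trade-off is paying for one extra measurement (the parent) per run, which is cheap compared to keeping dedicated hardware around.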
The thing is that things like Cachegrind are supposed to be used as complements to time-based profilers, not to replace them.
If you're getting ±20% differences between time-based benchmark runs, it might just be noisy neighbors, but it could also be some other problem that actually manifests for users too.
> used as complements to time-based profilers, not to replace them
Sure. I also use hyperfine to run a bigger test, as a user would see the system, and cross-reference that with the instruction counts. I use the hardware metrics on a free CI runner, and hyperfine locally.
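For the cross-referencing step, hyperfine can emit machine-readable results with `--export-json`, which makes the comparison scriptable. A rough sketch (the inline JSON stands in for the exported file, which carries more fields per result than shown here):

```python
import json

# Stand-in for a file written by `hyperfine --export-json results.json ...`;
# command names and timings are made up for the example.
raw = """{"results": [
  {"command": "mytool-old", "mean": 1.50, "stddev": 0.03},
  {"command": "mytool-new", "mean": 1.41, "stddev": 0.02}
]}"""

results = {r["command"]: r for r in json.loads(raw)["results"]}
old, new = results["mytool-old"], results["mytool-new"]
speedup = old["mean"] / new["mean"]
# Cross-reference: does the wall-time change roughly agree with the
# instruction-count diff from the CI run?
print(f"{speedup:.2f}x faster ({old['mean']:.2f}s -> {new['mean']:.2f}s)")
```

If the wall-time speedup and the instruction-count diff point in opposite directions, that's usually the cue to look at cache behavior or measurement noise.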
At my workplace we use self-hosted GitLab and GitLab CI. The CI allows you to dedicate specific runner instances to specific CI tasks. We run an e2e test battery on CI, and it's quite resource-heavy compared to normal tests, so we have some dedicated instances for it. I'd imagine the same strategy would work for benchmarks, but I'm not sure whether cloud instances fit the bill. I think the CI also lets you bring your own hardware, although I don't have experience taking it that far.
> I'd imagine the same strategy would work for benchmarks, but I'm not sure whether cloud instances fit the bill. I think that the CI also allows you to bring your own hardware although I don't have experience taking it that far.
Typically you use the middle ground between a cloud-hosted VPS and your own hardware: dedicated servers :)
I know a lot of places tend to use things like AWS spot instances for their CI runners, but those obviously provide inconsistent performance. As others have noted, you can always measure performance through other metrics, like certain hardware counters, rather than absolute runtime.
I can recommend getting your company a dedicated instance from OVH or Hetzner, since they're dirt cheap compared to cloud offerings. Set up some simple runner containers with properly constrained CPU and memory, similar to your production environment, hook them up to your GitLab or GitHub, and you're good to go. You don't really need high availability for development infrastructure like CI runners.