In general I agree with you: let's first gather data
about the code-paths in our system that are typically
used (e.g. via telemetry), then write benchmarks to
measure the performance of those paths, and then use
these to optimize our code. Finally, complement this
with real-world benchmarks that implement compound
operations (more like system-level tests).

However, in the particular case of enabling the disk
cache on Android (which I'm working towards), I want to
see how the disk on a device performs under stress, and
I don't really need telemetry data for that - I just
want to access the disk. Synthetic benchmarks are IMO
fine for this, and I don't have to spend time
implementing telemetry and then waiting for its data.
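
To make concrete what kind of synthetic benchmark I have
in mind, here is a rough sketch - everything in it (the
path, entry count, entry size) is made up for
illustration, not taken from our actual cache code. It
just writes a pile of small files, which is roughly what
the cache does with entries, and times the whole thing:

  // Hypothetical synthetic disk benchmark: write many
  // small files and measure how long it takes.
  #include <chrono>
  #include <cstdio>
  #include <fstream>
  #include <string>
  #include <vector>

  int main() {
    const int kEntries = 1000;              // fake cache entries
    const std::string kDir = "/sdcard/bench/";  // assumed to exist
    std::vector<char> buf(16 * 1024, 'x');      // 16 KB per entry

    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kEntries; ++i) {
      std::ofstream out(kDir + "entry" + std::to_string(i),
                        std::ios::binary);
      out.write(buf.data(), buf.size());
    }
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<
        std::chrono::milliseconds>(end - start).count();
    std::printf("wrote %d entries in %lld ms\n", kEntries,
                static_cast<long long>(ms));
    return 0;
  }

Variations on this (entry sizes, read vs. write, fsync)
would give us the stress picture without any telemetry
plumbing.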

In fact, Geoff got some interesting results a few weeks
ago suggesting that the disk actually performs OK under
normal operations - it is creating the cache that really
blows the numbers up (which seems to happen when clearing
the cache). This is IMO worth verifying or falsifying,
and then we could perhaps use telemetry to get data on
how often a user clears/creates the cache, in order to
evaluate the impact of this in real life.
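
One way to check that would be a small standalone test
that times the clear (delete everything) and the
re-create separately from steady-state writes. Again,
just a hypothetical sketch - the path and helper names
here are mine, not anything from the cache backend:

  // Time "clear cache" vs. "create cache" separately.
  #include <chrono>
  #include <cstdio>
  #include <filesystem>
  #include <fstream>
  #include <string>

  namespace fs = std::filesystem;

  static const char* kDir = "/sdcard/bench";  // assumed writable

  static long long TimedMs(void (*op)()) {
    auto t0 = std::chrono::steady_clock::now();
    op();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<
        std::chrono::milliseconds>(t1 - t0).count();
  }

  static void ClearCache() {
    fs::remove_all(kDir);  // roughly what a cache clear does on disk
  }

  static void CreateCache() {
    fs::create_directories(kDir);
    std::string data(16 * 1024, 'x');
    for (int i = 0; i < 1000; ++i) {
      std::ofstream(std::string(kDir) + "/entry" + std::to_string(i),
                    std::ios::binary) << data;
    }
  }

  int main() {
    CreateCache();  // warm state, so the clear has work to do
    std::printf("clear:  %lld ms\n", TimedMs(ClearCache));
    std::printf("create: %lld ms\n", TimedMs(CreateCache));
    return 0;
  }

If the create phase dominates by a large factor on a real
device, that would line up with Geoff's numbers.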

Thoughts?