I've updated the code coverage statistics at
<http://www.tjhsst.edu/~jcranmer/m-ccov/> to mozilla-central revision
e0c5ab3acc4e, or about November 29. (Yeah, it's a bit old, but
developing these statistics takes me a few days, given that my scripts
have invariably broken in the meantime.)
Notable notes:
* Code coverage is as measured on Linux {32,64} {opt,debug}
* As always, not all tests are reflected. Linux32 debug tests in
particular have an annoying habit of timing out. Every test suite was
successfully run and recorded on at least one build configuration.
* In a change from prior attempts, I've started coalescing our chunked
tests. This is mostly due to my computer choking on the data if I don't
do this.
* Branch coverage is now recorded for mozilla-central! (Probably another
factor in causing my computer to choke.)
* Test coverage is now uploaded via mozharness (thanks, armenzg, for
adding the ability to test custom versions of mozharness). Every test
that gets run with desktop_unittest.py is now accounted for.
** This means Jetpack, Marionette, and Web platform tests are not
accounted for.
** Of course, neither is Talos.
* I've made improvements to the code coverage treemap to fix a few bugs
and radically alter the color scale (thanks, ColorBrewer!).
** People have requested that I add a depth-limiting feature. I have
one prototyped, but it's not present in this version because I don't
have UI for it and the animations were sloppy.
It's worth reflecting on how this process has changed... I've been doing
these periodic code coverage builds since 2010, originally (and still
for Thunderbird) via hand-rolled scripts on my personal computer, and
(since 2012) via pushing custom builds to try. My original try
experiments required cat'ing the contents of tarballs in base64 to
stdout and scraping the logs; thankfully, blobber has eliminated that
process. Now, the fact that we use mozharness for our test suites makes
it far easier to have a single point of reference to extract the .gcda
data, instead of having to modify each test harness individually.
The changes needed to run code coverage via a try push, as well as the
script to post-process the data into lcov form, are located here:
<https://github.com/jcranmer/m-c-tools-code-coverage>. It
relies on my replacement for lcov (since that was way too slow), found
here: <https://github.com/jcranmer/mozilla-coverage>. Be forewarned, if
you try to do this yourself, that post-processing takes about 3 minutes
per test... which doesn't seem long, until you realize that there are
over 100 tests being run (106 in this iteration, and there are 16 tests
not covered and 11 tests which didn't upload anything).
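For a back-of-the-envelope sense of that cost (the per-test and test-count numbers are from above; the total is my arithmetic, not a measured figure):

```python
minutes_per_test = 3     # approximate post-processing time per test
tests_this_run = 106     # tests recorded in this iteration

total_minutes = minutes_per_test * tests_this_run
total_hours = total_minutes / 60.0

print(total_minutes)           # 318
print(round(total_hours, 1))   # 5.3
```

So a full iteration spends on the order of five hours just in post-processing, before any of the treemap generation happens.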
There's always room for improvement, and if you have the ability and are
willing to do so, I would love to have help!
* Code coverage on other platforms:
** OS X opt fails due to some linking issue...
** OS X debug crashes in precompile packaging. Not owning a mac, I have
no hope of solving this myself...
** I seem to recall that I got Android builds running with code
coverage, but they failed to upload the data (I don't know how to get it
off the device).
** B2G should hopefully follow Android
** Windows is a completely separate beast, since it doesn't use gcc or
clang and hence doesn't use gcov-like processing for code coverage. But
that doesn't mean I don't want Windows code coverage!
* JS code coverage support would be WONDERFUL. It would also probably
require a complete retooling of my post-post-processing steps, since the
memory requirements would likely exceed the 16GB of RAM I have...
** I may have to rewrite the post-post-processing in C++ (or Rust!)
instead of Python simply to keep memory usage down.
* The code coverage treemap could use more work...
** Better color scheme (I'm not happy with 50% being bright and 0% and
100% being dark...)
** Depth-limiting of tree
** Better working tooltips (seriously, I haven't found a decent HTML
tooltip library).
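On the color scheme point: a monotone sequential ramp avoids the bright-at-50% problem, since the color then only gets darker as coverage rises. A minimal sketch (the RGB stops below approximate a ColorBrewer OrRd-style ramp and are my choice, not the treemap's actual values):

```python
def coverage_color(pct, stops=((255, 247, 236),    # 0%: very light
                               (252, 141, 89),     # 50%: mid orange
                               (127, 0, 0))):      # 100%: dark red
    """Map a coverage percentage (0-100) onto a sequential color ramp.

    Linear interpolation between adjacent stops gives a scale that is
    monotone in brightness, unlike a diverging scale where 0% and 100%
    are both dark and 50% is bright. Stop values are illustrative.
    """
    t = max(0.0, min(1.0, pct / 100.0)) * (len(stops) - 1)
    i = min(int(t), len(stops) - 2)   # index of the lower stop
    frac = t - i                      # position between the two stops
    lo, hi = stops[i], stops[i + 1]
    return tuple(round(l + (h - l) * frac) for l, h in zip(lo, hi))
```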
There is an effort underway to run periodic code coverage builds and
upload the results somewhere. It would be useful to have a web service
for viewing those results. The output of my work is mostly a static list
of files since I don't have access to dynamic webhosting servers.
Turning it into something that can be served dynamically shouldn't be
too hard, though, given that the static webpages are largely the same
HTML template with some JS data injected statically.
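That injection step is easy to sketch. The template and names below are illustrative, not the actual generated pages; the point is only that serving dynamically would mean substituting a JSON payload into a fixed HTML shell at request time instead of ahead of time:

```python
import json

# One HTML shell is reused for every page; only the JSON payload
# differs. (Hypothetical template -- the real pages are more involved.)
TEMPLATE = """<!DOCTYPE html>
<html><head><script>
var coverageData = {data};
</script></head>
<body><div id="treemap"></div></body></html>"""

def render_page(coverage):
    """Inject per-directory coverage numbers into the static template."""
    return TEMPLATE.format(data=json.dumps(coverage))

# Example: a page for one directory's line-coverage numbers.
page = render_page({"netwerk/": {"lines": 12000, "covered": 8400}})
```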
Another thing I'd like to see is historical code coverage stats being
available somewhere. I'm currently working on a project which would
benefit from having very frequent (like every day) details of code
coverage. I'll not spoil it for you beyond mentioning that my working
name for it is "Historonoi."
--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform