> Is there a way to include pre-packaged workload analysis? I realise
> we'd have to regenerate these somehow, possibly for each compiler update
> (not sure how the files look).

What a "workload" means to the compiler is the set of results of all the
conditional branches in the compiled code.  Which sites have data points,
and how those sites relate to any high-level notion of "workload" (i.e.
all the forms of input to the program), change not only with compiler
differences but with every source code change.

It's pretty hard to imagine what you could preserve across builds of
nontrivially different source trees that would continue to line up at the
basic-block level, where the data is meaningful to the compiler.  Perhaps
you could do something reduced to terms of source line locations, or the
number of basic blocks into a named function, or something.  But it sounds
very iffy.

> This would allow us, for something like firefox or openoffice, to run
> some stuff offline on a packager's desktop and then use those files in a
> koji run later.

Those are both examples of big multi-component things that probably have
their own build infrastructure for exercising components in various ways.
Internal ways to drive those with representative synthetic workloads are
probably the wisest thing in the long run.

Those are also both examples of GUI things.  For those things, a
representative workload could be recorded as something like a dogtail test
suite (I don't really know anything about such tools, but they exist).
That could perhaps be substantially automated by some magic in mock/koji:
do a full rpmbuild, then run the test suite and mine out its .gcda files,
and then do a final rpmbuild with those results poked into its gcc runs.
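As a rough sketch of that two-pass flow, outside any real mock/koji
integration: the spec name, the test driver script, and the idea that
%optflags can simply be overridden with --define are all assumptions
here, and installing the instrumented rpms before the workload run is
glossed over.

    import glob, os, subprocess

    SPEC = "firefox.spec"                       # hypothetical
    PROFDIR = os.path.abspath("pgo-profiles")

    def rpmbuild(extra_cflags):
        subprocess.check_call([
            "rpmbuild", "-bb", SPEC,
            "--define", "optflags -O2 -g %s" % extra_cflags,
        ])

    # Pass 1: instrumented build; every conditional branch gets a
    # counter that is flushed to a .gcda file when the program exits.
    rpmbuild("-fprofile-generate")

    # Run the recorded GUI workload (e.g. a dogtail suite) against the
    # instrumented build.  GCOV_PREFIX redirects the .gcda output under
    # PROFDIR instead of the compiled-in build-directory paths.
    env = dict(os.environ, GCOV_PREFIX=PROFDIR, GCOV_PREFIX_STRIP="0")
    subprocess.check_call(["python", "run-workload.py"], env=env)  # hypothetical driver

    # Mine out the .gcda files and put them where the second build's
    # compiles will look for them (i.e. back into the build tree).
    gcda_files = glob.glob(os.path.join(PROFDIR, "**", "*.gcda"),
                           recursive=True)
    # ... copy gcda_files into the rebuilt source tree here ...

    # Pass 2: rebuild using the recorded branch counts.
    rpmbuild("-fprofile-use")

The fragile part is the middle step: the instrumented binaries actually
have to be installed and driven through a representative run, which is
where the dogtail-style recording would come in.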


Thanks,
Roland
-- 
devel mailing list
devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/devel
