The tsvc tests just take too long on simulators, particularly when there is little or no vectorization of the test because of compiler limitations, target limitations, or the chosen options. Having 151 tests time out at a quarter of an hour each is not fun, and making the timeouts go away by upping the timeout might make for better-looking results, but not for better turn-around times.

So I thought I'd just change the iteration count (which is currently defined as 10000 in tsvc.h, resulting in billions of operations for a single test) to something small, like 10.
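
Concretely that is a one-line change; assuming the macro in tsvc.h is spelled `iterations` (the name used in the upstream TSVC source, an assumption here):

    -#define iterations 10000
    +#define iterations 10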

This requires new expected results, but those were pretty straightforward to auto-generate. The lack of a separate number for s3111 caused me some puzzlement, but it can indeed share a value with s31111.

But then, if I want to change the iteration count specifically for simulators, I have to change 151 individual test files to add another dg-additional-options stanza. I can leave the job to grep / bash / ed, but then I get 151 locally changed files, which is a pain to merge.
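
For illustration, the mass edit could look roughly like this; a sketch only, where the vect-tsvc-*.c file names, the `simulator` effective target, and tsvc.h honoring a -Diterations override are all assumptions on my part:

    # Prepend a per-test override for simulator targets (GNU sed).
    for f in gcc/testsuite/gcc.dg/vect/tsvc/vect-tsvc-*.c; do
      sed -i '1i /* { dg-additional-options "-Diterations=10" { target simulator } } */' "$f"
    done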
So I wonder if tsvc.h shouldn't really default to a low iteration count.
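
Something like the following guard in tsvc.h would keep the default low while still letting an individual test or board file restore the big count with -Diterations=10000; again just a sketch, with the macro name assumed:

    /* Default to a small count; override with -Diterations=N for
       runs that really want the full-size workload.  */
    #ifndef iterations
    #define iterations 10
    #endif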
Is there actually any reason to run the regression tests with an iteration count of 10000 on any host? I mean, if you wanted to get some regression check on performance, you'd really want something more exact than "wall clock time doesn't exceed whatever timeout is set". You could set a ulimit for cpu time and fine-tune that for a proper benchmark regression test - but for the purposes of an ordinary gcc regression test, you generally just want the optimizations performed (as in the dump file tests already present) and the computation performed correctly. And for these, it makes little difference how many iterations you use for the test, as long as you convince GCC that the code is 'hot'.
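
That is, the check that actually matters is the kind of dump scan the tests already carry, along these lines (exact pattern varies per test; this one is illustrative):

    /* { dg-final { scan-tree-dump "vectorized 1 loops" "vect" } } */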
