Robert Haas <robertmh...@gmail.com> writes:
> There is no need to collect years of data in order to tell whether or
> not the time to run the tests has increased by as much on developer
> machines as it has on prairiedog.  You showed the time going from 3:36
> to 8:09 between 2014 and the present.  That is a 2.26x increase.  It
> is obvious from the numbers I posted before that no such increase has
> taken place in the time it takes to run 'make check' on my relatively
> modern laptop.  Whatever difference exists is measured in
> milliseconds.
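As a quick sanity check of the quoted ratio (a sketch; the 3:36 and 8:09 figures are taken from the message above):

```shell
# Convert the quoted prairiedog timings to seconds and compute the ratio.
old=$((3 * 60 + 36))          # 3:36 -> 216 seconds
new=$((8 * 60 + 9))           # 8:09 -> 489 seconds
ratio=$((new * 100 / old))    # ratio scaled by 100, integer arithmetic
echo "$ratio"                 # prints 226, i.e. a 2.26x increase
```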
I may be wasting my breath here, but in one more attempt to convince you
that "time make check" on your laptop is not the only number that anyone
should be interested in, here are some timings off my development
workstation.  These are timings off current tip of each release branch,
all the same build options etc, so very comparable:

9.4
	[ don't remember the equivalent to top-level make temp-install here ]
	top-level make check		24.725s
	installcheck-parallel		15.383s
	installcheck			27.560s

9.5
	make temp-install		3.702s
	initdb				2.328s
	top-level make check		24.709s
	installcheck-parallel		16.632s
	installcheck			32.427s

9.6
	make temp-install		3.971s
	initdb				2.178s
	top-level make check		24.048s
	installcheck-parallel		15.889s
	installcheck			32.775s

10
	make temp-install		4.051s
	initdb				1.363s
	top-level make check		21.784s
	installcheck-parallel		15.209s
	installcheck			31.938s

HEAD
	make temp-install		4.048s
	initdb				1.361s
	top-level make check		24.027s
	installcheck-parallel		16.914s
	installcheck			35.745s

I copied-and-pasted the "real time" results of time(1) for each of these,
not bothering to round them off; but the numbers are only reproducible to
half a second or so, so there's no significance in the last couple digits.
Most numbers above are the minimum of 2 or more runs.

What I take away here is that there's been a pretty steep cost increase
for the regression tests since v10, and that is *not* in line with the
historical average.  In fact, in most years we've bought enough speedup
through performance improvements to pay for the test cases we added.
This is masked if you just eyeball "make check" compared to several years
ago.  But to do that, you have to ignore the fact that we made substantial
improvements in the runtime of initdb as well as the regression tests
proper circa v10, and we've now thrown that away and more.

So I remain dissatisfied with these results, particularly because in my
own work habits, the time for "make installcheck-parallel" is way more
interesting than "make check".
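The "minimum of 2 or more runs" convention can be scripted; a minimal sketch (the helper name min_time is mine, and in a real run the inputs would be successive "real time" numbers from time(1)):

```shell
# Pick the smallest of several recorded "real time" measurements, since
# individual runs are only reproducible to half a second or so.
min_time() {
  printf '%s\n' "$@" | sort -n | head -n 1
}

# In a real tree the inputs would be successive timings of, e.g.,
#   time make installcheck-parallel
min_time 24.725 25.103 24.981   # prints 24.725
```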
I avoid redoing installs and initdbs if I don't need them.

> ... Even if that meant that you had
> to wait 1 extra second every time you run 'make check', I would judge
> that worthwhile.

I think this is a bad way of looking at it.  Sure, in terms of one
developer doing one test run, a second or two means nothing.  But for
instance, if you want to do 100 test runs in hope of catching a
seldom-reproduced bug, it adds up.  It also adds up when you consider
the aggregate effort expended by the buildfarm, or the time you have to
wait to see buildfarm results.

> Another thing you could do is consider applying the patch Thomas
> already posted to reduce the size of the tables involved.

Yeah.  What I thought this argument was about was convincing *you* that
that would be a reasonable patch to apply.  It seems from my experiment
on gaur that that patch makes the results unstable, so if we can do it
at all it will need more work.  But I do think it's worth putting in
some more sweat here.  In the long run the time savings will add up.

			regards, tom lane
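PS: the "100 test runs" idea above can be expressed as a loop; a minimal sketch (repeat_until_failure is a hypothetical helper, and the make target in the comment is only an example):

```shell
# Rerun a command until it fails or a run limit is reached, to chase a
# seldom-reproduced bug.
repeat_until_failure() {
  cmd=$1
  max=$2
  run=0
  while [ "$run" -lt "$max" ]; do
    if ! sh -c "$cmd" >/dev/null 2>&1; then
      echo "failure on run $((run + 1))"
      return 1
    fi
    run=$((run + 1))
  done
  echo "no failure in $max runs"
}

# Example (in a PostgreSQL source tree):
#   repeat_until_failure "make installcheck-parallel" 100
```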