I have a serious, serious dislike for tests that seem to work until
they're run on a heavily loaded machine.

I'm not that sure the error message was because of that.

No, this particular failure (probably) wasn't.  But now that I've realized
that this test case is timing-sensitive, I'm worried about what will
happen when it's run on a sufficiently slow or loaded machine.

I would not necessarily object to doing something in the code that
would guarantee that, though.

Hmmm. Interesting point.

It could be as simple as putting the check-for-done at the bottom of the
loop rather than the top, perhaps.

I agree that, ideally, tests should work under all reasonable conditions, including on a somewhat overloaded host...

I'm going to think about it, but I'm not sure of the best approach. In the meantime, ISTM that the issue has not been encountered (yet), so this is not a pressing matter. Maybe under -T > --aggregate-interval pgbench could go on past the limit if the log file has not been written to at all, but that would be some kind of kludge for this specific test...

Note that having to assume, in order to get test coverage for -T, that a loaded host might not manage to generate even a one-line log every second during that time is kind of a hard assumption...

Maybe some tests could be "warnings", i.e. it could be acceptable for them to fail once in a while under specific conditions, if that is rare enough and documented. ISTM that there is already such a test for the random output.

--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers