The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: not tested
Spec compliant: not tested
Documentation: not tested
This causes the pgbench tests to fail (consistently) with
not ok 194 - p
Hello Tom,
# progress: 2.6 s, 6.9 tps, lat 0.000 ms stddev 0.000, lag 0.000 ms, 18 skipped
# progress: 3.0 s, 0.0 tps, lat -nan ms stddev -nan, lag -nan ms, 0 skipped
# progress: 4.0 s, 1.0 tps, lat 2682.730 ms stddev 0.000, lag 985.509 ms, 0 skipped
(BTW, the "-nan" bits suggest an actual p
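The "-nan" latency and stddev in the 0.0 tps row are consistent with a progress report being emitted for an interval in which zero transactions completed: dividing the latency sums by a zero count yields NaN. A minimal sketch of that failure mode and the obvious guard (the struct and field names are illustrative assumptions for this sketch, not pgbench's actual code):

```c
#include <math.h>

/* Illustrative per-interval accumulator; the names are
 * assumptions for this sketch, not pgbench's variables. */
typedef struct
{
	int		count;		/* transactions finished in the interval */
	double	lat_sum;	/* sum of latencies, in ms */
} Interval;

/* Unguarded mean: with count == 0 this is 0.0 / 0, i.e. NaN,
 * which printf can render as "-nan" depending on the sign bit. */
static double
mean_naive(const Interval *iv)
{
	return iv->lat_sum / iv->count;
}

/* Guarded mean: report 0.0 for an empty interval instead.
 * The stddev and lag computations need the same guard. */
static double
mean_safe(const Interval *iv)
{
	return (iv->count > 0) ? iv->lat_sum / iv->count : 0.0;
}
```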
Fabien COELHO writes:
>> [...] After another week of buildfarm runs, we have a few more cases of
>> 3 rows of output, and none of more than 3 or less than 1. So I went
>> ahead and pushed your patch. I'm still suspicious of these results, but
>> we might as well try to make the buildfarm green
[...] After another week of buildfarm runs, we have a few more cases of
3 rows of output, and none of more than 3 or less than 1. So I went
ahead and pushed your patch. I'm still suspicious of these results, but
we might as well try to make the buildfarm green pending investigation
of how t
Fabien COELHO writes:
>> It could be as simple as putting the check-for-done at the bottom of the
>> loop not the top, perhaps.
> I agree that it is best if tests work in all reasonable conditions,
> including a somewhat overloaded host...
> I'm going to think about it, but I'm not sure o
I have a serious, serious dislike for tests that seem to work until
they're run on a heavily loaded machine.
I'm not that sure the error message was because of that.
No, this particular failure (probably) wasn't. But now that I've realized
that this test case is timing-sensitive, I'm worri
Fabien COELHO writes:
>> I have a serious, serious dislike for tests that seem to work until
>> they're run on a heavily loaded machine.
> I'm not that sure the error message was because of that.
No, this particular failure (probably) wasn't. But now that I've realized
that this test case is timing-sensitive
I have a serious, serious dislike for tests that seem to work until
they're run on a heavily loaded machine.
I'm not that sure the error message was because of that. ISTM that it was
rather about finding 3 seconds of output in two because the run started just
at the right time, or maybe because of slowness induc
Fabien COELHO writes:
> By definition, parallelism induces non-determinism. When I put 2 seconds,
> the intention was that I would get a non-empty trace with an "every second"
> aggregation. I would rather have a longer test than allow an
> empty file: the point is to check that someth
Apparently, one of the threads ran 3 transactions where the test script
expects it to run at most 2. Is this a pgbench bug, or is the test
being overoptimistic about how exact the "-T 2" cutoff is?
Probably both? It seems that cutting off on time is not a precise science,
so I suggest to acc
Fabien COELHO writes:
>> Apparently, one of the threads ran 3 transactions where the test script
>> expects it to run at most 2. Is this a pgbench bug, or is the test
>> being overoptimistic about how exact the "-T 2" cutoff is?
> Probably both? It seems that cutting off on time is not a precise
francolin just showed a non-reproducing failure in the new pgbench tests:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2017-09-12%2014%3A00%3A02
not ok 211 - transaction count for 001_pgbench_log_1.31583 (3)
# Failed test 'transaction count for 001_pgbench_log_1.31583 (3)'