Hello Fujii-san,
ISTM that the patch changes pgbench so that it can skip counting
some skipped transactions here even for realistic rates under -T.
Of course, this would happen only very rarely. Is this understanding right?
Yes. The point is to get out of the scheduling loop when time has expired.
I would think we should leave it as it is for pg13 and before, so as
not to surprise users.
Ok. Thank you for your opinion. I also agree with not changing the
behavior of long-stable branches, and I think Fujii-san is of the same
opinion.
Attached is the patch to fix the measurement of disconnection delays…
Ok. That makes sense. The output reports "including connections
establishing" and "excluding connections establishing" regardless of
-C, so we should measure delays in the same way.
On second thought, isn't it more reasonable and less confusing not to
measure the disconnection delays at all? Si…
Hello David,
round-robin and random make sense. I am wondering how round-robin
would work with -C, though. Would you just reuse the same connection
string as the one chosen at the starting point?
Well, not necessarily, but this is debatable.
My expectation for such a behavior would be that…
Bonjour Michaël,
Good. I was thinking of adding such capability, possibly for handling
connection errors and reconnecting…
round-robin and random make sense. I am wondering how round-robin
would work with -C, though. Would you just reuse the same connection
string as the one chosen at the starting point?
[...] Thoughts?
For pgbench it is definitely ok to add the exit. For others the added
exits look reasonable, but I do not know them intimately enough to be sure
that it is the right thing to do in all cases.
All that does not seem to fall into the category of things worth
back-patching…
Hello,
I do not understand your disagreement. Do you disagree about the
expected semantics of fatal? Then why provide fatal if it should not
be used? What is the expected usage of fatal?
I disagree about the fact that pgbench uses pg_log_fatal() in ways
that other binaries don't.
Sure
Bonjour Michaël-san,
The semantics for fatal vs error is that an error is somehow handled while a
fatal is not. If the log message is followed by a cold exit, ISTM that
fatal is the right call, and I cannot help it if other commands do not do
that. ISTM more logical to align other commands to fatal…
Bonjour Michaël,
My 0.02€:
- pgbench has its own parsing routines for int64 and double, with
an option to skip errors. That's not surprising in itself, but, for
strtodouble(), errorOK is always true, meaning that the error strings
are dead.
Indeed. However, there are "atof" calls for options…
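Indeed, atof() performs no error detection at all; for contrast, a
checked parse in plain C looks something like this sketch (illustrative
helper, not pgbench's actual code):

#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* Sketch: parse a double with full error checking, unlike atof(). */
static bool
parse_double_checked(const char *str, double *result)
{
    char *end;

    errno = 0;
    *result = strtod(str, &end);
    /* reject out-of-range values, empty input and trailing garbage */
    return errno == 0 && end != str && *end == '\0';
}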
I tested the pager feature manually; it mostly works, although
"pspg --stream" does not seem to expect two tables, or maybe there is
a way to switch between these that I have not found.
pspg doesn't support this feature.
Sure. Note that it is not a feature yet:-)
ISTM that having some…
Ok. I noticed. The patch got significantly broken by the watch pager
commit. I also have to enhance the added tests (per Peter's request).
I wrote a test to check psql query cancel support. I checked that it fails
against the patch that was reverted. Maybe this is useful.
Here is the update
Hello,
At a minimum, for PSQL_WATCH_PAGER the pager should exit after some
time, but until then it has to keep reading data; otherwise psql will
hang.
Sure. The "pager.pl" script I sent exits after reading a few lines.
A possible solution could be a special mode for psql, where psql will
write to…
Hello Pavel,
The newly added PSQL_WATCH_PAGER feature, which broke the patch, does
not seem to be tested anywhere; this is tiring:-(
Do you have any idea how this can be tested?
The TAP patch sent by Peter on this thread is a very good start.
It requires some pager that doesn't use blocking…
The patch does not apply on HEAD anymore; could you rebase and post an
updated patch? I'm changing the status to "Waiting for Author".
Ok. I noticed. The patch got significantly broken by the watch pager
commit. I also have to enhance the added tests (per Peter's request).
I wrote a test to check psql
I attached the updated patch.
# About pgbench error handling v15
Patches apply cleanly. Compilation, global and local tests ok.
- v15.1: refactoring is a definite improvement.
Good, even if it is not very useful (see below).
While restructuring, maybe predefined variables could be made…
Hello Yugo-san,
[...] One way to avoid these errors is to send Parse messages before
pipeline mode starts. I attached a patch that prepares commands at the
start of a script instead of at the first execution of each command.
What do you think?
ISTM that moving prepare out of command execution…
The patch does not apply on HEAD anymore; could you rebase and post an
updated patch? I'm changing the status to "Waiting for Author".
Ok. I noticed. The patch got significantly broken by the watch pager
commit. I also have to enhance the added tests (per Peter's request).
--
Fabien.
Attached v4 simplifies the format and fixes this one.
I think this goes way way overboard in terms of invasiveness. There's no
need to identify individual call sites of PSQLexec. [...]
ISTM that having the information was useful for the user who actually
asked for psql to show hidden queries…
Hello Thomas,
I committed the code change without the new TAP tests, because I
didn't want to leave the open item hanging any longer.
Ok. Good.
As for the test, ... [...]
Argh, so there are no tests that would have caught the regressions:-(
... I know it can fail, and your v18 didn't f…
Hello,
Of course, users themselves should be careful about problematic scripts,
but it would be better if pgbench itself avoided problems when it can.
Or, we should terminate the last cycle of the benchmark when -T expires,
regardless of whether it is retrying. This will make pgbench behave…
Hello Vignesh,
I am changing the status to "Needs review" as the review is not
complete for this patch, and there are also some tests failing that
need to be fixed:
test test_extdepend ... FAILED 50 ms
Indeed,
Attached v4 simplifies the format and fixes this one.
I ran c
Hello again,
I hoped we were done here but I realised that your check for 1-3 log
lines will not survive the harsh environment of the build farm.
Adding sleep(2) before the final doLog() confirms that. I had two
ideas:
So I think we should do 1 for now. Objections or better ideas?
At least…
Works for me: patch applies, global and local check ok. I'm fine with it.
I hoped we were done here but I realised that your check for 1-3 log
lines will not survive the harsh environment of the build farm.
Adding sleep(2) before the final doLog() confirms that. I had two
ideas:
1. Give up
Hello Thomas,
Isn't it better if we only have to throw away the first one?
This should be the user's decision to drop it or not, not that of the
tool producing it, IMO.
Let me try this complaint again. [...]
I understand your point.
For me, silently removing the last bucket is not right, because…
Hello Hannu,
I'm not sure we have transactions lasting such a short time that
nanoseconds matter.
Nanoseconds may not matter yet, but they could be handy when for
example we want to determine the order of parallel query executions.
We are less than an order of magnitude away from being able…
Hello Thomas,
Thanks! This doesn't seem to address the complaint, though. Don't
you need to do something like this? (See also attached.)
+initStats(&aggs, start - (start + epoch_shift) % 100);
ISTM that this is: (start + epoch_shift) / 100 * 100
That should reproduce what…
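For readers following the arithmetic: both expressions aim at starting
aggregation buckets on wall-clock boundaries while timestamps stay on
the internal (shifted) clock. A sketch with illustrative names, using
the quoted 100 as the interval:

#include <stdint.h>

static int64_t
bucket_start_internal(int64_t start, int64_t epoch_shift)
{
    int64_t wall = start + epoch_shift;      /* epoch-based time */
    int64_t wall_floor = wall - wall % 100;  /* round down to boundary */

    /* equals start - (start + epoch_shift) % 100, the quoted line;
     * (start + epoch_shift) / 100 * 100 is wall_floor itself, i.e.
     * the same instant expressed on the epoch clock. */
    return wall_floor - epoch_shift;
}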
Hello Yura,
Given that 99.99% of cases will take the likely branch, the branch
predictor should eliminate the decision cost.
Hmmm. ISTM that a branch predictor should predict that unknown < small
should probably be false, so a hint should be given that it is really
true.
Why? The branch predictor is history-based…
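Such hints are usually spelled with compiler builtins; PostgreSQL's c.h
offers likely()/unlikely() macros along these lines. A sketch of the
idiom (illustrative function, not the patch):

#include <stdint.h>

#define likely(x)   __builtin_expect((x) != 0, 1)
#define unlikely(x) __builtin_expect((x) != 0, 0)

static int
pick_path(uint64_t range)
{
    if (likely(range <= UINT32_MAX))
        return 32;              /* common small-range path */
    return 64;                  /* rare large-range path */
}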
Finally, I think it would be better to treat the upper bound of the
range as inclusive.
This bothered me as well, but the usual approach seems to use range as the
number of values, so I was hesitant to depart from that. I'm still
hesitant to go that way.
Yeah, that bothered me too.
For example…
Hello Dean,
Whilst it has been interesting learning and discussing all these
different techniques, I think it's probably best to stick with the
bitmask method, rather than making the code too complex and difficult
to follow.
Yes.
The bitmask method has the advantage of being very simple, easy…
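For readers of the digest, the bitmask method under discussion looks
roughly like this (a sketch with a hypothetical prng_u64(), not the
committed code): mask the output down to the smallest power of two
covering the range, and retry on overshoot.

#include <stdint.h>

extern uint64_t prng_u64(void);     /* hypothetical 64-bit generator */

static uint64_t
rand_range_bitmask(uint64_t range)  /* returns a value in [0, range) */
{
    uint64_t mask = range - 1;      /* assumes range >= 1 */
    uint64_t v;

    /* smallest (2^k - 1) >= range - 1 */
    mask |= mask >> 1;
    mask |= mask >> 2;
    mask |= mask >> 4;
    mask |= mask >> 8;
    mask |= mask >> 16;
    mask |= mask >> 32;

    do
        v = prng_u64() & mask;      /* each masked value equally likely */
    while (v >= range);             /* reject overshoot; < 2 draws expected */

    return v;
}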
Hello Yura,
However, I'm not enthusiastic about combining two methods depending on
the range; the function looks complex enough without that, so I would
suggest not taking this option. Also, the decision process adds to the
average cost, which is undesirable.
Given that 99.99% of cases will take the likely branch, the branch
predictor should eliminate the decision cost…
Hello Yura,
I believe most "range" values are small, much smaller than UINT32_MAX.
In this case, according to [1] the fastest method is Lemire's (I'd
take the original version from [2]) [...]
Yep.
I share your point that the range is more often 32 bits.
However, I'm not enthusiastic about combining…
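For reference, Lemire's method maps a 32-bit draw into [0, range) with
one multiplication and rejects only a small threshold band. A sketch
(hypothetical prng_u32(), not the patch itself):

#include <stdint.h>

extern uint32_t prng_u32(void);     /* hypothetical 32-bit generator */

static uint32_t
rand_range_lemire(uint32_t range)   /* unbiased value in [0, range) */
{
    uint64_t m = (uint64_t) prng_u32() * range;
    uint32_t low = (uint32_t) m;

    if (low < range)
    {
        uint32_t threshold = -range % range;    /* 2^32 mod range */

        while (low < threshold)
        {
            m = (uint64_t) prng_u32() * range;
            low = (uint32_t) m;
        }
    }
    return m >> 32;                 /* uniform over [0, range) */
}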
The important property of determinism is that if I set a seed, and then
make an identical set of calls to the random API, the results will be
identical every time, so that it's possible to write tests with
predictable/repeatable results.
Hmmm… I like my stronger determinism definition more than…
Now suppose we want a random number in the range [0,6). This is what
happens with your algorithm for each of the possible prng() return
values:
prng() returns 0 -- OK
prng() returns 1 -- OK
prng() returns 2 -- OK
prng() returns 3 -- OK
prng() returns 4 -- OK
prng() returns 5 -- OK
prng() …
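To make the bias concrete with a miniature generator: if prng()
produced 3-bit values 0..7 and the reduction were prng() % 6, inputs
0..7 would map to 0,1,2,3,4,5,0,1, so 0 and 1 would each be drawn with
probability 2/8 while 2..5 get 1/8 each. Rejection-based methods
(bitmask, Lemire) discard the surplus inputs instead of folding them
onto the low values.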
Hello Dean,
- moves the stuff to common and fully removes random/srandom (Tom)
- includes a range generation function based on the bitmask method (Dean)
but iterates with splitmix so that the state always advances once (Me)
At the risk of repeating myself: do *not* invent your own scheme…
Hello Tatsuo-san,
So overall gain by the patch is around 15%, whereas the last test before
the commit was 14%. It seems the patch is still beneficial after the
commit.
Yes, that's good!
I had a quick look again, and about the comment:
/*
* If partitioning is not enabled and server version…
1. PostgreSQL source uses `uint64` and `uint32`, but not
`uint64_t`/`uint32_t`
Indeed you are right. Attached v6 does that as well.
--
Fabien.
diff --git a/contrib/file_fdw/file_fdw.c b/contrib/file_fdw/file_fdw.c
index 2c2f149fb0..146b524076 100644
--- a/contrib/file_fdw/file_fdw.c
+++ b/cont
Hello Yura,
1. PostgreSQL source uses `uint64` and `uint32`, but not
`uint64_t`/`uint32_t`
2. I don't see why pg_prng_state could not be `typedef uint64
pg_prng_state[2];`
It could, but I do not see that as desirable. From an API design point of
view we want something clean and abstract, a…
Here is a v4, which:
- moves the stuff to common and fully removes random/srandom (Tom)
- includes a range generation function based on the bitmask method (Dean)
but iterates with splitmix so that the state always advances once (Me)
And a v5 where an unused test file also compiles if we…
Hello Dean & Tom,
Here is a v4, which:
- moves the stuff to common and fully removes random/srandom (Tom)
- includes a range generation function based on the bitmask method (Dean)
but iterates with splitmix so that the state always advances once (Me)
--
Fabien.
diff --git a/contrib/file_fd
Hello Dean,
It may be true that the bias is of the same magnitude as FP multiply,
but it is not of the same nature. With FP multiply, the
more-likely-to-be-chosen values are more-or-less evenly distributed
across the range, whereas modulo concentrates them all at one end,
making it more likely…
"-- # QUERY\n%s\n\n"
Attached an attempt along those lines. I found another duplicate of the
ascii-art printing in another function.
Completion queries seem to be outside the echo/echo-hidden feature.
Incredible: there is a (small) impact on the regression tests for the
\gexec case. All other…
Hello Tom,
I went to commit this, figuring that it was a trivial bit of code
consolidation, but as I looked around in common.c I got rather
unhappy with the inconsistent behavior of things. Examining
the various places that implement "echo"-related logic, we have
the three places this patch pr
Hello Dean,
I haven't looked at the patch in detail, but one thing I object to is
the code to choose a random integer in an arbitrary range.
Thanks for bringing up this interesting question!
Currently, this is done in pgbench by getrand(), which has its
problems.
Yes. That is one of the m…
Hello Tom,
Indeed. Here is a blind attempt at fixing the build; I'll check later
whether it works. It would help me if the cfbot results were integrated
into the cf app.
Hmm, not there yet per cfbot, not sure why not.
I'll investigate.
Anyway, after taking a very quick look at the…
Although this patch is marked RFC, the cfbot shows it doesn't
even compile on Windows. I think you missed updating Mkvcbuild.pm.
Indeed. Here is a blind attempt at fixing the build; I'll check later
whether it works. It would help me if the cfbot results were integrated
into the cf app.
Hello Greg,
Some quick feedback about the patch and the arguments.
Filling: having an empty string/NULL has been bothering me for some time.
However, there is a significant impact on the client/server network
stream while initializing or running queries, which means that older
pgbench perfor…
Hello David,
This patch adds the concept of "multiconnect" to pgbench (better
terminology welcome).
Good. I was thinking of adding such capability, possibly for handling
connection errors and reconnecting…
The basic idea here is to allow connections made with pgbench to use
different authentication…
Hello Thomas,
After looking at it again, here is an update which ensures 64-bit
arithmetic in the epoch_shift computation.
The code in pgbench 13 aggregates into buckets that begin on the
boundaries of wall clock seconds, because it is triggered by changes
in time_t. In the current patch, we aggregate data…
Hello Yugo-san,
Thanks for the update!
The patch seems to apply cleanly with "git apply", but does not compile
on my host: "undefined reference to `conditional_stack_reset'".
However, it works better when using "patch"; I'm wondering why git
apply fails silently…
Hmm, I don't know why your c…
Fabien, thanks for the updated patch, I'm looking at it.
After looking at it again, here is an update which ensures 64-bit
arithmetic in the epoch_shift computation.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 4aeccd93af..7750b5d660 100644
--- a/src/bin/pgbench/pgben
Hello Thomas,
I've added an entry for the open item on the wiki. I'm unsure about
who the owner should be.
There is already an item: "Incorrect time maths in pgbench".
Argh *shoot*, I went over the list too quickly, looking for "log" as a
keyword.
Fabien, thanks for the updated patch, I'm looking at it.
Bonjour Michaël,
Okay, I have extracted this part from your patch, and back-patched
this fix down to 11. The comments were a good addition, so I have
kept them. I have also made the second regex of check_pgbench_logs()
pickier with the client ID value expected, as it can only be 0.
Attached
Hello Tom,
The failure still represents a gcc bug, because we're using -fwrapv which
should disable that assumption.
Ok, I'll report it.
Done at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101254
Fixed at r12-1916-ga96d8d67d0073a7031c0712bc3fb7759417b2125
https://gcc.gnu.org/git/gitweb.
The failure still represents a gcc bug, because we're using -fwrapv which
should disable that assumption.
Ok, I'll report it.
Done at https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101254
--
Fabien.
+# Check for functions that libpq must not call.
+# (If nm doesn't exist or doesn't work on shlibs, this test will silently
+# do nothing, which is fine.)
+.PHONY: check-libpq-refs
+check-libpq-refs: $(shlib)
+ @! nm -A -g -u $< 2>/dev/null | grep -e abort -e exit
"abort" and "exit" cou
Hello Tom,
moonjelly just reported an interesting failure [1].
I noticed. I was planning to have a look at it, thanks for digging!
It seems that with the latest bleeding-edge gcc, this code is
misoptimized:
else if (imax - imin < 0 || (imax - imin) + 1 < 0)
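As background, -fwrapv makes signed overflow wrap, which the quoted
test relies on. An equivalent check that avoids signed overflow
entirely can be done in unsigned arithmetic, where wraparound is always
defined; a sketch, assuming imin <= imax:

#include <stdint.h>

/* Sketch: does the inclusive range [imin, imax] hold more than
 * INT64_MAX values?  Computed without any signed overflow. */
static int
range_width_overflows(int64_t imin, int64_t imax)
{
    uint64_t width = (uint64_t) imax - (uint64_t) imin;

    return width >= (uint64_t) INT64_MAX;   /* width + 1 > INT64_MAX */
}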
However, if slurp_file fails it raises an exception and aborts the
whole TAP test unexpectedly, which is pretty unclean. So I'd suggest
keeping the eval, as attached. I tested it by changing the file name so
that the slurp fails.
Seems quite unnecessary. We haven't found that to be an issue
elsewhere…
Seems quite unnecessary. We haven't found that to be an issue elsewhere
in the code where slurp_file is used. And in the present case we know
the file exists because we got its name from list_files().
Agreed. That's a trade-off between a hard failure mid-test and a
failure while letting the whole…
Hello Andrew & Michaël,
My 0.02€:
There's a whole lot wrong with this code. To start with, why is that
unchecked eval there?
Yep. The idea was that other tests would go on being collected, e.g. if
the file is not found, but it should have been checked anyway.
And why is it reading in log files…
Hello Yugo-san,
# About v12.2
## Compilation
The patch seems to apply cleanly with "git apply", but does not compile
on my host: "undefined reference to `conditional_stack_reset'".
However, it works better when using "patch"; I'm wondering why git
apply fails silently…
When compiling there…
Hello Yugo-san,
I'm wondering whether we could use "vars" instead of "variables" as a
struct field name and function parameter name, so that it is shorter
and more distinct from the type name "Variables". What do you think?
The struct "Variables" has a field named "vars", which is an array of…
Seawasp should turn green on its next run.
It did!
--
Fabien.
Hello Thomas,
Seawasp should turn green on its next run.
Hopefully.
It is not scheduled very soon because Tom complained about the induced
noise in one buildfarm report, so I set the check to run once a week.
I changed it to start a run in a few minutes. I've rescheduled it to
once a day after…
(What were we thinking in allowing this in the first place?)
Temporary debug leftovers that got through, I'd say.
Thanks Michaël for the clean up!
--
Fabien.
Hello Simon,
Indeed.
There is already a "ready" patch in the queue, see:
https://commitfest.postgresql.org/33/3034/
--
Fabien.
Bonjour Michaël,
Using grep() with "$re" results in all the fields matching. Using
instead "/$re/" in grep(), like list_files(), would only match the
first one, which is correct.
Ok, good catch. Perl is kind of a strange language.
With this issue fixed, I have bumped into what looks…
Ola Álvaro,
... or, actually, even better would be to use a TODO block, so that the
test is run and reports its status, but if it happens not to succeed it
will not cause the whole test to fail. That way you'll accumulate some
evidence that may serve to improve the test in the future until it
Bonjour Michaël,
Could it be possible to document the intention of the test and its
coverage, then? With the current patch, one has to guess the intention
behind this case.
Ok, see attached.
+check_pgbench_logs($bdir, '001_pgbench_log_1', $nthreads, 1, 3,
+ qr{^\d{10,} \d{1,2}
Hello Yugo-san:
# About v12.1
This is a refactoring patch, which creates a separate structure for
holding variables. This will become handy in the next patch. There is also
a benefit from a software engineering point of view, so it has merit on
its own.
## Compilation
Patch applies cleanly…
Hello,
+# note: this test is time sensitive, and may fail on a very
+# loaded host.
+# note: --progress-timestamp is not tested
+my $delay = pgbench(
+ '-T 2 -P 1 -l --aggregate-interval=1 -S -b se@2'
+ . ' --rate=20 --latency-limit=1000 -j ' . $nthreads
+ . ' -c 3 -r',
Hello Yugo-san,
Thanks a lot for continuing this work started by Marina!
I'm planning to review it for the July CF. I've just added an entry there:
https://commitfest.postgresql.org/33/3194/
--
Fabien.
Bonjour Michaël,
If this were core server code threatening data integrity I would be
inclined to be more strict, but after all pgbench is a utility program,
and I think we can allow a little more latitude.
+1. Let's be flexible here. It looks better to not rush a fix, and
we still have some…
Upon reflection, that amounts to the same thing really, so yeah,
scratch that plan. I'll defer until after that (and then I'll be
leaning more towards the revert option).
Sigh. I do not understand anything about the decision process.
If you do revert, please consider NOT reverting the tps…
There was a silent API breakage (same API, different behavior, how
nice…) in llvm main that Andres figured out, which will have to be
fixed at some point, so this is a reminder that it is still a todo…
If it were *our* todo, that would be one thing; but it isn't.
Over on the other thread[1] we…
Hello Tom,
Could you please just shut down the animal until that's dealt with?
The test is failing because there is a problem, and shutting down the
test to improve a report does not in any way help to fix it; it just
helps to hide it.
Our buildfarm is run for the use of the Postgres project…
Hello Tom,
So I'm not very confident that the noise will go away quickly, sorry.
Could you please just shut down the animal until that's dealt with?
Hmmm… Obviously I can.
However, please note that the underlying logic of "a test is failing,
let's just remove it" does not sound right to me…
Note that if no connections are available, then you do not get the
version, which may be a little bit strange. Attached v3 prints out the
local version in that case. Not sure whether it is worth the effort.
I'm inclined to think that the purpose of that output is mostly to
report the server version…
It'd sure be nice if seawasp stopped spamming the buildfarm failure log,
too.
There was a silent API breakage (same API, different behavior, how
nice…) in llvm main that Andres figured out, which will have to be
fixed at some point, so this is a reminder that it is still a todo…
Not sure when…
Hello Tom,
Why not move the printVersion call right after the connection is
created, at line 6374?
I started with that, and one of the 001_pgbench_with_server.pl
tests fell over --- it was expecting no stdout output before a
"Perhaps you need to do initialization" failure. If you don't
mind…
Hello Tom,
One point here is that printing the server version requires
access to a connection, which printResults() hasn't got
because we already closed all the connections by that point.
I solved that by printing the banner during the initial
connection that gets the scale factor, does vacuum…
/* must be something wrong */
pg_log_error("%s() failed: %m", SOCKET_WAIT_METHOD);
goto done;
It should say something like "thread %d aborted: %s() failed: ...".
After having a look, there are already plenty of such cases. I'd say
not to change anything for beta, and think of it…
Hello,
Doing this means we regard any state other than CSTATE_FINISHED as
aborted. So, the current gotos to done in threadRun effectively abort
some or all of the clients running on the thread. So for example the
following place:
pgbench.c:6713
/* must be something wrong */
Hello Thomas,
I prepared a draft revert patch for discussion, just in case it comes
in handy. This reverts "pgbench: Improve time logic.", but "pgbench:
Synchronize client threads." remains (slightly rearranged).
I had a quick look.
I had forgotten that this patch also fixed the long-running…
Hello Greg,
I think the only thing you and I disagree on is that you see a "first
issue in a corner case" where I see a process failure that is absolutely
vital for me to improve.
Hmmm. I agree that improvements are needed, but for me there are
simply a few missing (removed) TAP tests which…
Is there an identified issue beyond the concrete example Greg gave of
the timestamps?
AFAICS, there is a patch which fixes all known issues linked to
pgbench logging. Other issues may exist, but the "broken" area was
quite specific. There are also some TAP tests on pgbench…
I found you forgot to fix printProgressReport().
Indeed.
Also, according to the documentation, interval_start in aggregated
logging seems to be printed in seconds instead of ms.
Indeed. I'm unsure about what we should really want there, but for a
beta bug fix I agree that it must simply comply…
Second, currently the *only* function that changes the client state is
advanceConnectionState, so it can be checked there, and any bug is only
there. We had issues before when several functions were doing updates,
and it was a mess to understand what was going on. I really want it to
stay that way…
Hello Yugo-san,
By the way, the issue of initial connection errors reported in this
thread will be fixed by the patch attached in my previous post (a major
part was written by you :-)
That does not, on its own, ensure that it is bug free:-)
). Is this acceptable for you?
I disagree on two…
Wouldn't it be better to put all those fixes into the same bag?
Attached.
Even better if the patch is not empty.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index d7479925cb..3df92bdd2b 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@
Wouldn't it be better to put all those fixes into the same bag?
Attached.
--
Fabien.
Hello Thomas,
Before I could get to startup timing I noticed the pgbench logging
output was broken via commit 547f04e7 "Improve time logic":
https://www.postgresql.org/message-id/E1lJqpF-00064e-C6%40gemulon.postgresql.org
It does suck that we broke the logging and that it took 3 months for
a…
Hello,
I cannot say that I'm thrilled by having multiple tv stuff back in
several places. I can be okay with one, though. What about the
attached? Does it make sense?
+1. The patch rounds down sd->start_time from ms to s, but it seems to
me a degradation.
Yes, please we should not use time.
Hello Yugo-san,
I think we can fix this issue by using gettimeofday for logging, as it
was before this change. I attached the patch.
I cannot say that I'm thrilled by having multiple tv stuff back in
several places. I can be okay with one, though. What about the
attached? Does it make sense?
At
pg_time_now(). This uses INSTR_TIME_SET_CURRENT internally, but this
macro can call clock_gettime(CLOCK_MONOTONIC[_RAW], ...) instead of
gettimeofday or clock_gettime(CLOCK_REALTIME, ...). When
CLOCK_MONOTONIC[_RAW] is used, clock_gettime doesn't return epoch time.
Therefore, we can use INSTR_TIME_SET_CURRENT…
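To make the clock relationship concrete, a minimal sketch (illustrative
names, not the actual patch) of deriving a shift between the monotonic
and epoch clocks once at startup, so monotonic timestamps can be mapped
to epoch time for logging:

#include <stdint.h>
#include <time.h>

static int64_t
ts_us(const struct timespec *ts)    /* timespec -> microseconds */
{
    return (int64_t) ts->tv_sec * 1000000 + ts->tv_nsec / 1000;
}

static int64_t
compute_epoch_shift(void)           /* epoch = monotonic + shift */
{
    struct timespec mono, real;

    clock_gettime(CLOCK_MONOTONIC, &mono);
    clock_gettime(CLOCK_REALTIME, &real);
    return ts_us(&real) - ts_us(&mono);
}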
Hello Greg,
I have a lot of community-oriented work backed up behind this right
now, so I'm going to be really honest. This time-rework commit in its
current form makes me uncomfortable at this point in the release
schedule. The commit has already fought through two rounds of
platform-specific…
Hello Yugo-san,
When a connection breaks while the bench has already started,
maybe it makes more sense to proceed,
The result would also be incomplete in this case. However, the reason
it is worth proceeding is that such information is still useful for
users, and we don't want to waste the…
+ * The function behaviors changes depending on sample_rate (a fraction of
+ * transaction is reported) and agg_interval (transactions are aggregated
+ * over the interval and reported once).
The first part of this sentence is grammatically incorrect.
Indeed. v5 attempts to improve comments.
Michaël-san, Yugo-san,
I am fine with this version, but I think it would be better if we have
a comment explaining what "tx" is for.
Yes. Done.
Also, how about adding Assert(tx) instead of using "else if (tx)" because
we are assuming that tx is always true when agg_interval is not used, rig
Attached is a v3 which adds a boolean to distinguish recording vs
flushing.
Better with the attachment… sorry for the noise.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index d7479925cb..3b27ffebf8 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgb
Hello Michaël,
I think we don't have to call doLog() before logAgg(). If we call
doLog(), we will count an extra transaction that is not actually
processed, because accumStats() is called within it.
Yes, calling both is weird.
The motivation for calling doLog() is to catch up on zero entries at
slow rates, so…
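In other words, catching up means emitting one aggregate row per fully
elapsed interval, zero or not, so rows stay contiguous at slow rates.
A sketch with illustrative names (not the actual pgbench code):

#include <stdint.h>
#include <stdio.h>

typedef struct { int64_t start_time; int64_t cnt; } Agg;

/* Sketch: flush every fully elapsed interval, including empty ones. */
static void
flush_elapsed(Agg *agg, int64_t now, int64_t interval)
{
    while (agg->start_time + interval <= now)
    {
        printf("%lld %lld\n",
               (long long) agg->start_time, (long long) agg->cnt);
        agg->start_time += interval;
        agg->cnt = 0;           /* next interval starts empty */
    }
}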