I find it a little annoying that there is no change in the tests; it
means that the format is not checked at all :-(
Yeah. Perhaps it's a little bit hard to perform this kind of test in
the TAP tests?
Not really. I'll look into it.
--
Fabien.
>>> Ok. Fine with me. Possibly at some point there was the idea that there
>>> could be other failures counted, but there are none. Also, there have
>>> been questions about the failures detailed option, or whether the
>>> reports should always be detailed, and the result may be some kind of
>>> not
Ok. Fine with me. Possibly at some point there was the idea that there
could be other failures counted, but there are none. Also, there have
been questions about the failures detailed option, or whether the
reports should always be detailed, and the result may be some kind of
not convincing comp
>> One thing that this doesn't fix is that the existing text appears
>> to suggest that the "failures" column is something different from
>> the sum of the serialization_failures and deadlock_failures
>> columns, which it's obvious from the code is not so. If this isn't
>> a code bug then I think
Fabien COELHO writes:
> While looking at the HTML output, the "pgbench" command line just below
> wraps strangely:
>pgbench --aggregate-interval=10 --time=20 --client=10 --log --rate=1000
>--latency-limit=10 --failures-detailed --max-tries=10 test
> ISTM that there should be no nl in t
Hello Tom,
The buildfarm is still complaining about the synopsis being too
wide for PDF format. I think what we ought to do is give up on
using a <synopsis> for log lines at all, and instead convert the
documentation into a tabular list of fields. Proposal attached,
which also fixes a couple of outright
> The buildfarm is still complaining about the synopsis being too
> wide for PDF format. I think what we ought to do is give up on
> using a <synopsis> for log lines at all, and instead convert the
> documentation into a tabular list of fields. Proposal attached,
> which also fixes a couple of outright err
Tatsuo Ishii writes:
> Patch pushed. Thanks.
The buildfarm is still complaining about the synopsis being too
wide for PDF format. I think what we ought to do is give up on
using a <synopsis> for log lines at all, and instead convert the
documentation into a tabular list of fields. Proposal attached,
whic
>> I would suggest to reorder the last chunk to:
>>
>>... retried retries failures serfail dlfail
>>
>> because I intend to add connection failures handling at some point,
>> and it would make more sense to add the corresponding count at the end
>> with other fails.
>
> Ok, I have adjusted t
Hi Fabien,
> Hello Tatsuo-san,
>
>> interval_start num_transactions sum_latency sum_latency_2 min_latency
>> max_latency
>> sum_lag sum_lag_2 min_lag max_lag skipped
>> failures serialization_failures deadlock_failures retried retries
>
> I would suggest to reorder the last chunk to:
>
>...
Hello Tatsuo-san,
interval_start num_transactions sum_latency sum_latency_2 min_latency
max_latency
sum_lag sum_lag_2 min_lag max_lag skipped
failures serialization_failures deadlock_failures retried retries
I would suggest to reorder the last chunk to:
... retried retries failures serf
>> I think it's easier to just say "if feature X is not enabled, then
>> columns XYZ are always zeroes".
>
> Ok, I will come up with a patch in this direction.
Please find attached patch for this.
With the patch, the log line is as follows (actually with no line folding, of
course):
interval_start
Alvaro Herrera writes:
> I think it's easier to just say "if feature X is not enabled, then
> columns XYZ are always zeroes".
+1, that's pretty much what I was thinking.
regards, tom lane
>> My 0.02€:
>>
>> I agree that it would be better to have a more deterministic aggregated log
>> format.
>>
>> ISTM that it should skip failures and lags if no fancy options have been
>> selected, i.e.:
>>
>> [ fails ... retries [ sum_lag ... [ skipped ] ] ?
>
> I think it's easier to just sa
On 2022-Apr-03, Fabien COELHO wrote:
> > What about this? (a log line is not actually folded)
> > interval_start num_transactions sum_latency sum_latency_2 min_latency
> > max_latency
> > failures serialization_failures deadlock_failures retried retries [ sum_lag
> > sum_lag_2 min_lag max_lag [
>>> Or those three columns always, sum_lag sum_lag_2, min_lag max_lag,
>>> skipped, retried retries?
>>
>> What about this? (a log line is not actually folded)
>> interval_start num_transactions sum_latency sum_latency_2 min_latency
>> max_latency
>> failures serialization_failures deadlock_failure
Or those three columns always, sum_lag sum_lag_2, min_lag max_lag,
skipped, retried retries?
What about this? (a log line is not actually folded)
interval_start num_transactions sum_latency sum_latency_2 min_latency
max_latency
failures serialization_failures deadlock_failures retried retries
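As an illustration (a hypothetical sketch in Python, not part of pgbench; the sample line and its values are invented), a fixed leading set of columns lets a log consumer parse positionally instead of guessing from the column count:

```python
# Sketch: parse one aggregated-log line under the proposed fixed layout.
# Column order follows the proposal above; the sample values are made up.
FIELDS = [
    "interval_start", "num_transactions", "sum_latency", "sum_latency_2",
    "min_latency", "max_latency", "failures", "serialization_failures",
    "deadlock_failures", "retried", "retries",
]

def parse_agg_line(line):
    values = line.split()
    if len(values) < len(FIELDS):
        raise ValueError("aggregated log line has too few columns")
    return dict(zip(FIELDS, (int(v) for v in values[:len(FIELDS)])))

sample = "1650000000 250 12500 800000 10 120 3 2 1 4 6"
row = parse_agg_line(sample)
# With features disabled, the failure columns would simply read as zeroes.
```

With a deterministic layout, the invariant "failures = serialization_failures + deadlock_failures" also becomes trivial to check on every line.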
>> I think the problem is not merely one of documentation, but one of
>> bad design. Up to now it was possible to tell what was what from
>> counting the number of columns in the output; but with this design,
>> that is impossible. That should be fixed. The first thing you have
>> got to do is d
Or those three columns always, sum_lag sum_lag_2, min_lag max_lag,
skipped, retried retries?
Anyway, now that the current CF is closing, it will not be possible to
change the logging design soon. Or can we change the logging design
even after the CF is closed?
My 0.02€: I'm not sure how the offici
> Alvaro Herrera writes:
>>> After:
>>> interval_start num_transactions sum_latency sum_latency_2 min_latency
>>> max_latency
>>> { failures | serialization_failures deadlock_failures } [ sum_lag sum_lag_2
>>> min_lag max_lag [ skipped ] ] [ retried retries ]
>
>> I think that the explanatory p
> Hello,
>
> On 2022-Mar-27, Tatsuo Ishii wrote:
>
>> After:
>> interval_start num_transactions sum_latency sum_latency_2 min_latency
>> max_latency
>> { failures | serialization_failures deadlock_failures } [ sum_lag
>> sum_lag_2 min_lag max_lag [ skipped ] ] [ retried retries ]
>
> You're
Alvaro Herrera writes:
>> After:
>> interval_start num_transactions sum_latency sum_latency_2 min_latency
>> max_latency
>> { failures | serialization_failures deadlock_failures } [ sum_lag sum_lag_2
>> min_lag max_lag [ skipped ] ] [ retried retries ]
> I think that the explanatory paragraph i
Hello,
On 2022-Mar-27, Tatsuo Ishii wrote:
> After:
> interval_start num_transactions sum_latency sum_latency_2 min_latency
> max_latency
> { failures | serialization_failures deadlock_failures } [ sum_lag sum_lag_2
> min_lag max_lag [ skipped ] ] [ retried retries ]
You're showing an indent
>>> > Even applying this patch, "make postgres-A4.pdf" raises the warning on my
>>> > machine. After some investigations, I found that the previous document had a
>>> > break
>>> > after 'num_transactions', but it has been removed due to this commit.
>>>
>>> Yes, your patch removed "&zwsp;".
>>>
>>>
>> > Even applying this patch, "make postgres-A4.pdf" raises the warning on my
>> > machine. After some investigations, I found that the previous document had a
>> > break
>> > after 'num_transactions', but it has been removed due to this commit.
>>
>> Yes, your patch removed "&zwsp;".
>>
>> > So,
>
On Mon, 28 Mar 2022 12:17:13 +0900 (JST)
Tatsuo Ishii wrote:
> > Even applying this patch, "make postgres-A4.pdf" raises the warning on my
> > machine. After some investigations, I found that the previous document had a
> > break
> > after 'num_transactions', but it has been removed due to this comm
> Even applying this patch, "make postgres-A4.pdf" raises the warning on my
> machine. After some investigations, I found that the previous document had a break
> after 'num_transactions', but it has been removed due to this commit.
Yes, your patch removed "&zwsp;".
> So,
> I would like to get back t
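For context, a minimal sketch of the markup being discussed (hypothetical; the real pgbench.sgml synopsis differs): the `&zwsp;` entity is a zero-width space that gives the PDF toolchain a legal break point after `num_transactions`.

```xml
<!-- Hypothetical fragment: &zwsp; marks where the FO processor may
     break an otherwise unbreakable synopsis line. -->
<synopsis>
interval_start num_transactions&zwsp; sum_latency sum_latency_2 ...
</synopsis>
```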
On Sun, 27 Mar 2022 15:28:41 +0900 (JST)
Tatsuo Ishii wrote:
> > This patch has caused the PDF documentation to fail to build cleanly:
> >
> > [WARN] FOUserAgent - The contents of fo:block line 1 exceed the available
> > area in the inline-progression direction by more than 50 points. (See
> >
> This patch has caused the PDF documentation to fail to build cleanly:
>
> [WARN] FOUserAgent - The contents of fo:block line 1 exceed the available
> area in the inline-progression direction by more than 50 points. (See
> position 125066:375)
>
> It's complaining about this:
>
>
> interval_
Tatsuo Ishii writes:
> Thanks. Patch pushed.
This patch has caused the PDF documentation to fail to build cleanly:
[WARN] FOUserAgent - The contents of fo:block line 1 exceed the available area
in the inline-progression direction by more than 50 points. (See position
125066:375)
It's complain
>> Oops. Thanks. New patch attached. Test has passed on my machine.
>
> This patch works for me. I think it is ok to use \N instead of \gN.
Thanks. Patch pushed.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
> I reproduced the failure on another machine with perl 5.8.8,
> and I can confirm that this patch fixes it.
Thank you for the test. I have pushed the patch.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
On Fri, 25 Mar 2022 09:14:00 +0900 (JST)
Tatsuo Ishii wrote:
> > Note that the \\g2 just above also needs to be changed.
>
> Oops. Thanks. New patch attached. Test has passed on my machine.
This patch works for me. I think it is ok to use \N instead of \gN.
Regards,
Yugo Nagata
--
Yugo NAGAT
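To illustrate the difference (a sketch in Python, which here behaves like the older perls; this is not the actual TAP test): the \1 spelling of a numbered backreference is universally understood, while Perl's \g1 alternative only exists since Perl 5.10, which is why the test switched to \1.

```python
import re

# \1 backreference: matches a repeat of the first capture group.
# (Python's re, like Perl 5.8.8, accepts \1 but not Perl's \g1 form.)
m = re.search(r"(ab)\1", "abab")
```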
Tatsuo Ishii writes:
> Oops. Thanks. New patch attached. Test has passed on my machine.
I reproduced the failure on another machine with perl 5.8.8,
and I can confirm that this patch fixes it.
regards, tom lane
> Note that the \\g2 just above also needs to be changed.
Oops. Thanks. New patch attached. Test has passed on my machine.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
diff --git a/src/bin/pgbench/t/001_pgbench_w
Tatsuo Ishii writes:
> I don't see a reason to use "\gN" either. Actually after applying
> attached patch, my machine is still happy with pgbench test.
Note that the \\g2 just above also needs to be changed.
regards, tom lane
>> My machine (Ubuntu 20) did not complain either. Maybe a perl version
>> difference? Anyway, the fix is pushed. Let's see how prairiedog feels.
>
> Still not happy. After some digging in man pages, I believe the
> problem is that its old version of Perl does not understand "\gN"
> backreferences.
Tatsuo Ishii writes:
>> My hoary animal prairiedog doesn't like this [1]:
> My machine (Ubuntu 20) did not complain either. Maybe a perl version
> difference? Anyway, the fix is pushed. Let's see how prairiedog feels.
Still not happy. After some digging in man pages, I believe the
problem is that
>> My hoary animal prairiedog doesn't like this [1]:
>>
>> # Failed test 'concurrent update with retrying stderr /(?s-xim:client
>> (0|1) got an error in command 3 \\(SQL\\) of script 0; ERROR: could not
>> serialize access due to concurrent update\\b.*\\g1)/'
>> # at t/001_pgbench_with_ser
On Wed, 23 Mar 2022 14:26:54 -0400
Tom Lane wrote:
> Tatsuo Ishii writes:
> > The patch pushed. Thank you!
>
> My hoary animal prairiedog doesn't like this [1]:
>
> # Failed test 'concurrent update with retrying stderr /(?s-xim:client (0|1)
> got an error in command 3 \\(SQL\\) of script 0;
Tatsuo Ishii writes:
> The patch pushed. Thank you!
My hoary animal prairiedog doesn't like this [1]:
# Failed test 'concurrent update with retrying stderr /(?s-xim:client (0|1)
got an error in command 3 \\(SQL\\) of script 0; ERROR: could not serialize
access due to concurrent update\\b.*\
>> I attached the updated patch. I also fixed the following paragraph which I
>> had
>> forgotten to fix in the previous patch.
>>
>> The first seven lines report some of the most important parameter settings.
>> The sixth line reports the maximum number of tries for transactions with
>> seria
>> I've checked other places referring to <xref>, and found
>> that "xreflabel"s are used in such tags. So, I'll fix it
>> in this style.
>
> I attached the updated patch. I also fixed the following paragraph which I had
> forgotten to fix in the previous patch.
>
> The first seven lines repo
On Tue, 22 Mar 2022 09:08:15 +0900
Yugo NAGATA wrote:
> Hi Ishii-san,
>
> On Sun, 20 Mar 2022 09:52:06 +0900 (JST)
> Tatsuo Ishii wrote:
>
> > Hi Yugo,
> >
> > I have looked into the patch and I noticed that <xref linkend=... endterm=...> is used in pgbench.sgml. e.g.
> >
> >
> >
> > AFAIK th
> On Sun, 20 Mar 2022 16:11:43 +0900 (JST)
> Tatsuo Ishii wrote:
>
>> > Hi Yugo,
>> >
>> > I tested with serialization error scenario by setting:
>> > default_transaction_isolation = 'repeatable read'
>> > The result was:
>> >
>> > $ pgbench -t 10 -c 10 --max-tries=10 test
>> > transaction type
On Sun, 20 Mar 2022 16:11:43 +0900 (JST)
Tatsuo Ishii wrote:
> > Hi Yugo,
> >
> > I tested with serialization error scenario by setting:
> > default_transaction_isolation = 'repeatable read'
> > The result was:
> >
> > $ pgbench -t 10 -c 10 --max-tries=10 test
> > transaction type:
> > scaling
Hi Ishii-san,
On Sun, 20 Mar 2022 09:52:06 +0900 (JST)
Tatsuo Ishii wrote:
> Hi Yugo,
>
> I have looked into the patch and I noticed that <xref linkend=... endterm=...> is used in pgbench.sgml. e.g.
>
>
>
> AFAIK this is the only place where "endterm" is used. In other places
> "link" tag is used
> Hi Yugo,
>
> I tested with serialization error scenario by setting:
> default_transaction_isolation = 'repeatable read'
> The result was:
>
> $ pgbench -t 10 -c 10 --max-tries=10 test
> transaction type:
> scaling factor: 10
> query mode: simple
> number of clients: 10
> number of threads: 1
>
Hi Yugo,
I tested with serialization error scenario by setting:
default_transaction_isolation = 'repeatable read'
The result was:
$ pgbench -t 10 -c 10 --max-tries=10 test
transaction type:
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 1
maximum number of tries:
Hi Yugo,
I have looked into the patch and I noticed that <xref linkend=... endterm=...> is used in pgbench.sgml. e.g.
AFAIK this is the only place where "endterm" is used. In other places
the "link" tag is used instead:
Failures and Serialization/Deadlock
Retries
Note that the rendered result is identical. Do we want to use
Hello Fabien,
On Sat, 12 Mar 2022 15:54:54 +0100 (CET)
Fabien COELHO wrote:
> Hello Yugo-san,
>
> About Pgbench error handling v16:
Thank you for your review! I attached the updated patches.
> This patch set needs a minor rebase because of 506035b0. Otherwise, patch
> compiles, global and l
Hello Yugo-san,
About Pgbench error handling v16:
This patch set needs a minor rebase because of 506035b0. Otherwise, patch
compiles, global and local "make check" are ok. Doc generation is ok.
This patch is in good shape, the code and comments are clear.
Some minor remarks below, including
Hello Tatsuo-san,
It seems the patch is ready for committer except the items below. Do you guys want
to do more on them?
I'm planning a new review of this significant patch, possibly over the
next week-end, or the next.
--
Fabien.
Hi Yugo and Fabien,
It seems the patch is ready for committer except the items below. Do you guys
want to do more on them?
>> # TESTS
>>
>> I suggested to simplify the tests by using conditionals & sequences. You
>> reported that you got stuck. Hmmm.
>>
>> I tried again my tests which worked fine when
Hello Fabien,
Thank you so much for your review.
Sorry for the late reply. I had stopped working on it due to other
work, but I have come back to it. I attached the updated patch. I would
appreciate it if you could review this again.
On Mon, 19 Jul 2021 20:04:23 +0200 (CEST)
Fabien COELHO wrote:
> #
I attached the updated patch.
# About pgbench error handling v15
Patches apply cleanly. Compilation, global and local tests ok.
- v15.1: refactoring is a definite improvement.
Good, even if it is not very useful (see below).
While restructuring, maybe predefined variables could be ma
Hello,
I attached the updated patch.
On Tue, 13 Jul 2021 15:50:52 +0900
Yugo NAGATA wrote:
> > >> > I'm a little hesitant about how to count and report transactions left
> > >> > unfinished because of the bench timeout, though. Not counting them seems
> > >> > to be the best option.
> > > I will
On Tue, 13 Jul 2021 14:35:00 +0900 (JST)
Tatsuo Ishii wrote:
> >> > I would tend to agree with this behavior, that is not to start any new
> >> > transaction or transaction attempt once -T has expired.
> >
> > That is the behavior in the latest patch. Once -T has expired, any new
> > transaction
>> > I would tend to agree with this behavior, that is not to start any new
>> > transaction or transaction attempt once -T has expired.
>
> That is the behavior in the latest patch. Once -T has expired, any new
> transaction or retry does not start.
Actually v14 has not changed the behavior in
On Tue, 13 Jul 2021 13:00:49 +0900 (JST)
Tatsuo Ishii wrote:
> >>> Or, we should terminate the last cycle of the benchmark, whether it is
> >>> retrying or not, if -T expires. This will make pgbench behave much
> >>> more consistently.
> >
> > I would tend to agree with this behavior, that is not to
>>> Or, we should terminate the last cycle of the benchmark, whether it is
>>> retrying or not, if -T expires. This will make pgbench behave much
>>> more consistently.
>
> I would tend to agree with this behavior, that is not to start any new
> transaction or transaction attempt once -T has expired.
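The agreed behavior can be modeled roughly as follows (a hypothetical Python sketch, not pgbench's actual C code; the clock and failure pattern are stand-ins): once the -T deadline has passed, neither a new transaction nor a retry of a failed one is started, and a transaction cut off mid-retry is counted as failed.

```python
# Sketch of -T handling around retries: a transaction (or a retry of a
# failed one) only starts while the benchmark deadline has not expired.
def run_benchmark(deadline, max_tries, clock, run_once):
    """clock() returns the current time; run_once() returns True on success,
    False on a retryable failure (serialization/deadlock)."""
    completed = failed = 0
    while clock() < deadline:          # no NEW transaction after -T expires
        tries = 0
        while True:
            tries += 1
            if run_once():
                completed += 1
                break
            # No RETRY either once tries are exhausted or -T has expired.
            if tries >= max_tries or clock() >= deadline:
                failed += 1
                break
    return completed, failed

# Toy driver: each attempt advances the clock by 1; attempts fail when
# the (fake) time is congruent to 2 or 3 modulo 4.
t = [0]
def clock():
    return t[0]
def run_once():
    t[0] += 1
    return t[0] % 4 not in (2, 3)

completed, failed = run_benchmark(deadline=10, max_tries=2,
                                  clock=clock, run_once=run_once)
```

Note the last failure in this toy run: the attempt started just before the deadline still had a retry available, but it is not retried because -T expired, which matches the behavior discussed above.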
Hello,
Of course, users themselves should be careful of problematic scripts, but it
would be better if pgbench itself avoided problems beforehand when it can.
Or, we should terminate the last cycle of the benchmark, whether it is
retrying or not, if -T expires. This will make pgbench behave
I have played with v14 patch. I previously complained that pgbench
always reported 9 errors (actually the number is always the number
specified by "-c" minus 1 in my case).
$ pgbench -p 11000 -c 10 -T 10 --max-tries=0 test
pgbench (15devel, server 13.3)
starting vacuum...end.
transaction type:
scali
Hello Fabien,
I attached the updated patch (v14)!
On Wed, 30 Jun 2021 17:33:24 +0200 (CEST)
Fabien COELHO wrote:
> >> --report-latencies -> --report-per-command: should we keep supporting
> >> the previous option?
> >
> > Ok. Although now the option is not only for latencies, considering users
On Wed, 07 Jul 2021 21:50:16 +0900 (JST)
Tatsuo Ishii wrote:
> >> Well, "that's very little, let's ignore it" is not technically a right
> >> direction IMO.
> >
> > Hmmm, it seems to me these failures are ignorable because, with regard to
> > failures due to -T, they occur only in the last trans
>> Well, "that's very little, let's ignore it" is not technically a right
>> direction IMO.
>
> Hmmm, it seems to me these failures are ignorable because, with regard to
> failures due to -T, they occur only in the last transaction of each client
> and do not affect the results such as TPS and lat
On Wed, 07 Jul 2021 16:11:23 +0900 (JST)
Tatsuo Ishii wrote:
> > Indeed, as Ishii-san pointed out, some users might not want to terminate
> > retrying transactions due to -T. However, the actual negative effect is only
> > printing the number of failed transactions. The other result that users
>
> Indeed, as Ishii-san pointed out, some users might not want to terminate
> retrying transactions due to -T. However, the actual negative effect is only
> printing the number of failed transactions. The other results that users want
> to know, such as TPS, are almost unaffected because they ar
Hello Ishii-san,
On Fri, 02 Jul 2021 09:25:03 +0900 (JST)
Tatsuo Ishii wrote:
> I have found an interesting result from patched pgbench (I have set
> the isolation level to REPEATABLE READ):
>
> $ pgbench -p 11000 -c 10 -T 30 --max-tries=0 test
> pgbench (15devel, server 13.3)
> starting vacu
Hello Ishii-san,
On Thu, 01 Jul 2021 09:03:42 +0900 (JST)
Tatsuo Ishii wrote:
> > v13 patches gave a compiler warning...
> >
> > $ make >/dev/null
> > pgbench.c: In function ‘commandError’:
> > pgbench.c:3071:17: warning: unused variable ‘command’ [-Wunused-variable]
> > const Command *comman
I have found an interesting result from patched pgbench (I have set
the isolation level to REPEATABLE READ):
$ pgbench -p 11000 -c 10 -T 30 --max-tries=0 test
pgbench (15devel, server 13.3)
starting vacuum...end.
transaction type:
scaling factor: 1
query mode: simple
number of clients: 10
numbe
> v13 patches gave a compiler warning...
>
> $ make >/dev/null
> pgbench.c: In function ‘commandError’:
> pgbench.c:3071:17: warning: unused variable ‘command’ [-Wunused-variable]
> const Command *command = sql_script[st->use_file].commands[st->command];
> ^~~
There is a ty
> I attached the patch updated according with your suggestion.
v13 patches gave a compiler warning...
$ make >/dev/null
pgbench.c: In function ‘commandError’:
pgbench.c:3071:17: warning: unused variable ‘command’ [-Wunused-variable]
const Command *command = sql_script[st->use_file].commands[st-
Hello Yugo-san,
Thanks for the update!
Patch seems to apply cleanly with "git apply", but does not compile on my
host: "undefined reference to `conditional_stack_reset'".
However it works better when using the "patch". I'm wondering why git
apply fails silently…
Hmm, I don't know why your c
Hello Fabien,
On Sat, 26 Jun 2021 12:15:38 +0200 (CEST)
Fabien COELHO wrote:
>
> Hello Yugo-san,
>
> # About v12.2
>
> ## Compilation
>
> Patch seems to apply cleanly with "git apply", but does not compile on my
> host: "undefined reference to `conditional_stack_reset'".
>
> However it wor
Hello Yugo-san,
# About v12.2
## Compilation
Patch seems to apply cleanly with "git apply", but does not compile on my
host: "undefined reference to `conditional_stack_reset'".
However it works better when using the "patch". I'm wondering why git
apply fails silently…
When compiling ther
Hello Yugo-san,
I'm wondering whether we could use "vars" instead of "variables" as a
struct field name and function parameter name, so that it is shorter and
more distinct from the type name "Variables". What do you think?
The struct "Variables" has a field named "vars" which is an array of
On Wed, 23 Jun 2021 10:38:43 +0200 (CEST)
Fabien COELHO wrote:
>
> Hello Yugo-san:
>
> # About v12.1
>
> This is a refactoring patch, which creates a separate structure for
> holding variables. This will become handy in the next patch. There is also
> a benefit from a software engineering po
Hello Yugo-san:
# About v12.1
This is a refactoring patch, which creates a separate structure for
holding variables. This will become handy in the next patch. There is also
a benefit from a software engineering point of view, so it has merit on
its own.
## Compilation
Patch applies cleanl
Hello Fabien,
On Tue, 22 Jun 2021 20:03:58 +0200 (CEST)
Fabien COELHO wrote:
>
> Hello Yugo-san,
>
> Thanks a lot for continuing this work started by Marina!
>
> I'm planning to review it for the July CF. I've just added an entry there:
>
> https://commitfest.postgresql.org/33/3194/
T
Hello Yugo-san,
Thanks a lot for continuing this work started by Marina!
I'm planning to review it for the July CF. I've just added an entry there:
https://commitfest.postgresql.org/33/3194/
--
Fabien.
Hi hackers,
On Mon, 24 May 2021 11:29:10 +0900
Yugo NAGATA wrote:
> Hi hackers,
>
> On Tue, 10 Mar 2020 09:48:23 +1300
> Thomas Munro wrote:
>
> > On Tue, Mar 10, 2020 at 8:43 AM Fabien COELHO wrote:
> > > >> Thank you very much! I'm going to send a new patch set until the end of
> > > >> th