The new status of this patch is: Waiting on Author
This patch was inactive during the commitfest, so I am going to mark it as
"Returned with Feedback".
Fabien, are you planning to continue working on it?
Not in the short term, but probably for the next CF. Can you park it
there?
--
Hello David,
Some feedback about v3:
In the doc I find TABLEACCESSMETHOD quite hard to read. Use TABLEAM
instead? Also, the next entry uses lowercase (tablespace), why change the
style?
Remove space after comma in help string. I'd use the more readable TABLEAM
in the help string rather
CFM reminder.
Hi, this entry is "Waiting on Author" and the thread was inactive for a
while. I see this discussion still has some open questions. Are you
going to continue working on it, or should I mark it as "returned with
feedback" until a better time?
IMHO the proposed fix is
Hello David,
The previous patch was based on the "REL_13_STABLE" branch. Now, the attached new
patch v2 is based on the master branch. I followed the new code structure using
appendPQExpBuffer to append the new clause "using TABLEAM" in a proper
position and tested. In the meantime, I also updated
I noticed somewhat to my surprise as I was prepping the tests for the
"match the whole DN" patch that pg_ident.conf is parsed using the same
routines used for pg_hba.conf, and as a result the DN almost always
needs to be quoted, because they almost all contain a comma e.g.
"O=PGDG,OU=Testing".
Hello Andrew,
I noticed somewhat to my surprise as I was prepping the tests for the
"match the whole DN" patch that pg_ident.conf is parsed using the same
routines used for pg_hba.conf, and as a result the DN almost always
needs to be quoted, because they almost all contain a comma e.g.
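The quoting issue can be sketched outside the real hba.c tokenizer. The sketch below is an illustrative Python model, not PostgreSQL's actual parser: fields are whitespace-separated, and an unquoted field containing commas is treated as a comma-separated list, which is exactly why a DN such as O=PGDG,OU=Testing must be double-quoted to survive as one token.

```python
import re

def tokenize(line):
    """Model of pg_hba.conf-style field splitting (illustrative only):
    whitespace separates fields; an unquoted field containing commas is
    split into list items; a double-quoted field is kept verbatim."""
    fields = []
    for tok in re.findall(r'"[^"]*"|\S+', line):
        if tok.startswith('"'):
            fields.append(tok[1:-1])          # quoted: keep as one token
        else:
            fields.extend(tok.split(','))     # unquoted: comma list
    return fields

print(tokenize('sslmap O=PGDG,OU=Testing user'))    # DN torn into pieces
print(tokenize('sslmap "O=PGDG,OU=Testing" user'))  # DN kept whole
```

The unquoted DN is split at its comma into two fields, while the quoted form comes back intact.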
I think the issue really is that, independent of PG lock contention,
it'll take a while to establish all connections, and that starting to
benchmark with only some connections established will create pretty
pointless numbers.
Yes. This is why I think that if we have some way to synchronize
Hello Marina,
1) It looks like pgbench will no longer support Windows XP due to the
function DeleteSynchronizationBarrier. From
https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-deletesynchronizationbarrier
Minimum supported client: Windows 8 [desktop apps only]
Minimum
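The synchronization being discussed can be sketched with Python's stdlib barrier; pgbench itself does this in C (and on Windows with the synchronization-barrier API mentioned above, which requires Windows 8 or later). This is only an illustration of the idea: every client establishes its connection first, then waits on a barrier, so measurement starts with all connections in place.

```python
import threading

NCLIENTS = 4
barrier = threading.Barrier(NCLIENTS)
results = []

def client(i):
    # ... establish the database connection here ...
    barrier.wait()      # no one starts benchmarking until everyone is ready
    results.append(i)   # stands in for the measured transaction loop

threads = [threading.Thread(target=client, args=(i,)) for i in range(NCLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the barrier, early threads would be benchmarking while late ones are still connecting, producing the "pretty pointless numbers" described above.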
Hello,
I can remove the line, but I strongly believe that reporting performance
figures when some client connection has failed, so that the bench could not
run as prescribed, is bad behavior. The good news is that it is probably quite
unlikely. So I'd prefer to keep it and possibly submit a patch to
I think I found a typo for the output of an error message which may cause
building warning.
Please refer to the attachment for the detail.
Indeed. Thanks for the fix!
--
Fabien.
Hello Tom,
Use ppoll, and start more threads but not too many?
Does ppoll exist on Windows?
Some g*gling suggest that the answer is no.
There was a prior thread on this topic, which seems to have drifted off
into the sunset:
Indeed. I may have contributed to this dwindling by not
Hello,
Indeed. I took your next patch with an added explanation. I'm unclear
whether proceeding makes much sense though: some threads would run the
test while others would have aborted. Hmmm.
Your comment looks good, thanks. In the previous version pgbench starts
benchmarking even if
Hello Marina,
While trying to test a patch that adds a synchronization barrier in pgbench
[1] on Windows,
Thanks for trying that, I do not have a windows setup for testing, and the
sync code I wrote for Windows is basically blind coding:-(
I found that since the commit "Use ppoll(2), if
Hello,
Please complete fixes for the documentation. At least the following sentence
should be fixed:
```
The last two lines report the number of transactions per second, figured with
and without counting the time to start database sessions.
```
Indeed. I scanned the file but did not find
Can you elaborate what you meant by the new "print overheads should probably
be avoided" comment?
Because printf is slow and this is on the critical path of data
generation. Printf has to interpret the format each time just to print
three ints, specialized functions could be used which
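The point about format-string overhead can be illustrated (only illustrated; pgbench's data generation is C, not Python): a generic printf-style call re-interprets the format on every row, whereas a specialized routine just converts the three integers directly.

```python
def row_printf(a, b, c):
    # Generic path: the "%d\t%d\t%d\n" format is parsed on every call.
    return "%d\t%d\t%d\n" % (a, b, c)

def row_fast(a, b, c):
    # Specialized path: no format interpretation, just three conversions.
    return str(a) + "\t" + str(b) + "\t" + str(c) + "\n"

assert row_printf(1, 2, 3) == row_fast(1, 2, 3)
```

In C the difference matters on a hot path emitting millions of rows; the sketch only shows that the specialized function produces identical output.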
Bonjour Michaël,
https://www.postgresql.org/message-id/CAMN686ExUKturcWp4POaaVz3gR3hauSGBjOCd0E-Jh1zEXqf_Q%40mail.gmail.com
Since then, the patch is failing to apply. As this got zero activity
for the last six months, I am marking the entry as returned with
feedback in the CF app.
Hmmm… I
Hello,
Sorry, I sent a wrong version of the patch, which contains some spelling
errors. This is the right one.
Sigh.. I fixed some more wordings.
I have submitted a patch which reworks how things are computed so that
performance figures make some/more sense, among other things.
Maybe you
Hello Tom,
It requires a mutex around the commands, I tried to do some windows
implementation which may or may not work.
Ugh, I'd really rather not do that. Even disregarding the effects
of a mutex, though, my initial idea for fixing this has a big problem:
if we postpone PREPARE of the
On 2020-09-08 21:10, Bruce Momjian wrote:
I see this only applied to master. Shouldn't this be backpatched?
I wasn't planning to. It's not a bug fix.
Other thoughts?
Yep. ISTM nicer if all docs have the same navigation, especially as
googling often points to random versions. No big
Hello Tom,
Accordingly, I borrowed some code from that thread and present
the attached revision. I also added some test coverage, since
that was lacking before, and wordsmithed docs and comments slightly.
Hearing no comments, pushed that way.
Thanks for the fixes and improvements!
I
Re-reading this thread, I see no complaints about introducing a
dependency on Expect.
Indeed, Tom's complaint was on another thread, possibly when interactive
tests "src/bin/psql/t/010_tab_completion.pl" were added.
ISTM that one of the issues was that some farm animal would be broken.
Hello,
This patch no longer applies: http://cfbot.cputube.org/patch_27_2262.log
CF entry has been updated to Waiting on Author.
This patch hasn't been updated and still doesn't apply, do you intend to rebase
it during this commitfest or should we move it to returned with feedback? It
can
In the meantime, here is a v9 which also fixes the behavior when using
\watch, so that now one can issue several \;-separated queries and have
their progress shown. I just needed that a few days ago and was
disappointed but unsurprised that it did not work.
This seems to break the
Attached v19 is a rebase, per cfbot.
Attached v20 fixes a doc xml typo, per cfbot again.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 9f3bb5fce6..d4a604c6fa 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -1033,7
in favor of *PQExpBuffer().
Attached v7 is rebased v5 which uses PQExpBuffer, per cfbot.
--
Fabien.
diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c
index 08a5947a9e..3abc41954a 100644
--- a/src/bin/pgbench/pgbench.c
+++ b/src/bin/pgbench/pgbench.c
@@ -602,7 +602,9 @@ static
I don't think we should boot this patch. I don't think I would be able
to get this over the commit line in this CF, but let's not discard it.
Understood. I have moved the patch to the 2020-07 CF in Needs Review state.
Attached v19 is a rebase, per cfbot.
--
Fabien.
diff --git
Attached v2 fixes some errors, per cfbot.
--
Fabien.
diff --git a/doc/src/sgml/ref/pgbench.sgml b/doc/src/sgml/ref/pgbench.sgml
index 9f3bb5fce6..9894ae9c04 100644
--- a/doc/src/sgml/ref/pgbench.sgml
+++ b/doc/src/sgml/ref/pgbench.sgml
@@ -998,15 +998,14 @@ pgbench options d
There is a
Hello Anastasia,
My 0.02 €:
The patch implements following syntax:
CREATE TABLE ... PARTITION BY partition_method (list_of_columns)
partition_auto_create_clause
where partition_auto_create_clause is
CONFIGURATION [IMMEDIATE| DEFERRED] USING partition_bound_spec
and partition_bound_spec
Hallo Peter,
Would it make sense to accumulate in the other direction, older to newer,
so that new attributes are added at the end of the select?
I think that could make sense, but the current style was introduced by
daa9fe8a5264a3f192efa5ddee8fb011ad9da365. Should we revise that?
It seems
In the attached v3, I've tried to clarify comments and doc about tokenization
rules relating to comments, strings and continuations.
Attached v4 improves comments & doc as suggested by Justin.
--
Fabien.
diff --git a/doc/src/sgml/client-auth.sgml b/doc/src/sgml/client-auth.sgml
index
Hello Peter,
The original stylesheets explicitly go out of their way to do it that way.
We can easily fix that by removing that special case. See attached patch.
That patch only fixes it for the header. To fix it for the footer as well,
we'd first need to import the navfooter template to
The original stylesheets explicitly go out of their way to do it that
way.
Can we find any evidence of the reasoning? As you say, that clearly was
an intentional choice.
Given the code, my guess would be the well-intentioned but misplaced
desire to avoid a redundancy, i.e. two links
* First patch reworks time measurements in pgbench.
* Second patch adds a barrier before starting the bench
It applies on top of the previous one. The initial imbalance due to
thread creation times is smoothed.
The usecs patch fails to apply to HEAD, can you please submit a rebased version
Hello,
You changed the query strings to use "\n" instead of " ". I would not have
changed that, because it departs from the style around, and I do not think
it improves readability at the C code level.
This was the style that was introduced by
daa9fe8a5264a3f192efa5ddee8fb011ad9da365.
[Resent on hackers for CF registration, sorry for the noise]
Hello Tom,
The attached patch fixes some of the underlying problems reported by delaying
the :var to $1 substitution to the last possible moments, so that what
variables are actually defined is known. PREPARE-ing is also delayed to
Hallo Peter,
v2 patches apply cleanly, compile, global check ok, citext check ok, doc
gen ok. No further comments.
As I did not find an entry in the CF, I did nothing about tagging it
"ready".
--
Fabien.
Hello Isaac,
This is not the only area where empty tuples are not supported. Consider:
PRIMARY KEY ()
This should mean the table may only contain a single row, but is not
supported.
Yep. This is exactly the kind of case about which I was trying the
command, after reading Bruce Momjian
Bonjour Vik,
It's forbidden because the SQL standard forbids it.
Ok, that is definitely a reason. I'm not sure it is a good reason, though.
It's a very good reason. It might not be good *enough*, but it is a
good reason.
Ok for good, although paradoxically not "good enough":-)
Why
Hallo Peter,
My 0.02 €:
Patch applies cleanly, compiles, make check and pg_dump tap tests ok. The
refactoring is a definite improvements.
You changed the query strings to use "\n" instead of " ". I would not have
changed that, because it departs from the style around, and I do not think
Hello Tom,
INSERT INTO t() VALUES ();
I'm still unclear why it would be forbidden though, it seems logical to
try that, whereas the working one is quite away from the usual syntax.
It's forbidden because the SQL standard forbids it.
Ok, that is definitely a reason. I'm not sure it is
Hallo Thomas,
INSERT INTO t() VALUES ();
This is forbidden by postgres, and also sqlite.
Is there any good reason why this should be the case?
Maybe because
insert into t default values;
exists (and is standard SQL if I'm not mistaken)
That's a nice alternative I did not notice.
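The behavior described in this exchange can be checked directly with SQLite from the Python standard library: the empty-column-list form is rejected as a syntax error, while the standard INSERT ... DEFAULT VALUES works.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER DEFAULT 1, b TEXT DEFAULT 'x')")

rejected = False
try:
    conn.execute("INSERT INTO t() VALUES ()")   # forbidden, as noted above
except sqlite3.OperationalError:
    rejected = True

conn.execute("INSERT INTO t DEFAULT VALUES")    # the standard alternative
rows = conn.execute("SELECT a, b FROM t").fetchall()
print(rejected, rows)
```

Both defaults are filled in, confirming that DEFAULT VALUES achieves the "all defaults" row the thread was after.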
Hello devs,
I would like to create an "all defaults" row, i.e. a row composed of the
default values for all attributes, so I wrote:
INSERT INTO t() VALUES ();
This is forbidden by postgres, and also sqlite.
Is there any good reason why this should be the case?
--
Fabien.
Bonjour Michaël,
Should it be backpatched? I'm not sure what the usual practice is wrt
small fixes in the doc.
The text is right, and this impacts only the appearance of the text,
so I did not see that this was enough for a backpatch.
Ok. It would mean that possible other doc patches on
Good catches, thanks Fabien. I will fix that tomorrow or so.
And applied to HEAD.
Ok.
Should it be backpatched? I'm not sure what the usual practice is wrt
small fixes in the doc.
--
Fabien.
Hello Tom,
I didn't think there was much point in linkifying both in that case,
and other similar situations.
The point is that the user reads a sentence, attempts to jump but
sometimes can't, because it is not the first occurrence. I'd go for
all mentions of another relation should be
Hello devs,
I've been annoyed that the documentation navigation does not always have an
"Up" link. It has them inside parts, but the link disappears, and you have
to go for the "Home" link, which is far on the right, when on the root page
of a part.
Is there a good reason not to have the "Up"
Hello,
While reviewing a documentation patch, I noticed that a few tags were
wrong in "catalog.sgml". Attached patch fixes them.
--
Fabien.
diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index 700271fd40..5a66115df1 100644
--- a/doc/src/sgml/catalogs.sgml
+++
Hello Dagfinn,
The attached patch
applies cleanly, doc generation is ok. I'm ok with adding such links
systematically.
makes the first mention of another system catalog or view (as well as
pg_hba.conf in pg_hba_file_lines) a link, for easier navigation.
Why only the first mention? It
Hello Peter,
whereas the current standard says
SUBSTRING(text SIMILAR pattern ESCAPE escapechar)
The former was in SQL99, but the latter has been there since SQL:2003.
It's pretty easy to implement the second form also, so here is a patch that
does that.
Patches apply cleanly,
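What the SQL:2003 form computes can be sketched in Python (a hedged approximation, not the server implementation): the pattern is split at the two escape-plus-double-quote markers into prefix/middle/suffix, and the result is the part of the string matched by the middle. The SIMILAR-to-regex translation below is deliberately naive, handling only % and _.

```python
import re

def substring_similar(s, pattern, esc):
    """Sketch of SUBSTRING(s SIMILAR pattern ESCAPE esc):
    return the substring matched by the middle third of the pattern."""
    pre, mid, post = pattern.split(esc + '"')
    def tr(p):  # naive SIMILAR wildcard translation: % -> .*, _ -> .
        return p.replace('%', '.*').replace('_', '.')
    m = re.fullmatch(tr(pre) + '(' + tr(mid) + ')' + tr(post), s)
    return m.group(1) if m else None

print(substring_similar('abcdef', '%#"cd#"%', '#'))  # cd
```

So SUBSTRING('abcdef' SIMILAR '%#"cd#"%' ESCAPE '#') yields 'cd', the middle portion delimited by the escaped quotes.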
Hello Masahiko-san,
What I referred to "only one key" is KEK.
Ok, sorry, I misunderstood.
Yeah, it depends on KMS, meaning we need different extensions for
different KMS. A KMS might support an interface that creates key if not
exist during GET but some KMS might support CREATE and GET
Hello Dagfinn,
The attached patch makes the first mention of another system catalog or
view (as well as pg_hba.conf in pg_hba_file_lines) a link, for easier
navigation.
Why only the first mention? It seems unlikely that I would ever read such
chapter linearly, and even so that I would want
Hello Masahiko-san,
* The external place needs to manage more encryption keys than the
current patch does.
Why? If the external place is just a separate process on the same host,
it would probably manage the very same amount as your patch does.
In the current patch, the external place
Hello Tom,
I have in the past scraped the latter results and tried to make sense of
them. They are *mighty* noisy, even when considering just one animal
that I know to be running on a machine with little else to do. Maybe
averaging across the whole buildfarm could reduce the noise level,
Hello Robert,
My 0.02 €:
It seems to me that we're making the same mistake with the replication
parser that we've made in various places in the regular parser: using a
syntax for options that requires that every potential option be a
keyword, and every potential option requires modification of
Hello Bruce.
The question is what should be put in the protocol, and I would tend to
think that some careful design time should be put in it.
I still don't see the value of this vs. its complexity.
Dunno. I'm looking for the value of having such a thing in core.
ISTM that there are no
Hello Masahiko-san,
Summarizing the discussed points so far, I think that the major
advantage points of your idea comparing to the current patch's
architecture are:
* More secure. Because it never loads KEK in postgres processes we can
lower the likelihood of KEK leakage.
Yes.
* More
Hello Masahiko-san,
I'm not sure I understood your concern. I try to answer below.
If I understand your idea correctly we put both DEK and KEK
"elsewhere", and a postgres process gets only DEK from it.
Yes, that is one of the options.
It seems to me this idea assumes that the place storing
Hello Masahiko-san,
If the KEK is ever present in pg process, it presumes that the threat
model being addressed allows its loss if the process is compromised, i.e.
all (past, present, future) security properties are void once the process
is compromised.
Why we should not put KEK in pg
Hello Bruce,
Sorry for the length (yet again) of this answer, I'm trying to make my
point as clear as possible.
Obviously it requires some more thinking and design, but my point is that
postgres should not hold a KEK, ever, nor presume how DEK are to be managed
by a DMS, and that is not
Hello Justin,
Rebased onto 7b48f1b490978a8abca61e9a9380f8de2a56f266 and renumbered OIDs.
Some feedback about v18, seen as one patch.
Patch applies cleanly & compiles. "make check" is okay.
pg_stat_file() and pg_stat_dir_files() now return a char type, as well as
the function which call
Hello,
This patch was marked as ready for committer, but clearly there's an
ongoing discussion about what the default behavior should be, whether this
breaks existing apps, etc. So I've marked it as "needs review" and moved
it to the next CF.
The issue is that root (aka Tom) seems to be against the
Hello Bruce,
Hmmm. This seems to suggest that interacting with something outside
should be an option.
Our goal is not to implement every possible security idea someone has,
because we will never finish, and the final result would be too complex
to be usable.
Sure. I'm trying to propose
Hello Masahiko-san,
This key manager is aimed to manage cryptographic keys used for
transparent data encryption. As a result of the discussion, we
concluded it's safer to use multiple keys to encrypt database data
rather than using one key to encrypt the whole thing, for example, in
order to
Hello Masahiko-san,
I am sharing here a document patch based on top of kms_v10 that was
shared awhile back. This document patch aims to cover more design
details of the current KMS design and to help people understand KMS
better. Please let me know if you have any more comments.
A few
Hello Justin,
llvmjit_inline.cpp:59:10: fatal error: llvm/IR/CallSite.h: No such file or
directory
Seems to be the same as here:
https://www.postgresql.org/message-id/flat/CAGf%2BfX4sDP5%2B43HBz_3fjchawO6boqwgbYVfuFc1D4gbA6qQxw%40mail.gmail.com#540c3746c79c0f13360b35c9c369a887
Hello devs,
commit 2c24051bacd2d0eb7141fc4adb870281aec4e714
Author: Craig Topper
Date: Fri Apr 24 22:12:21 2020 -0700
[CallSite removal] Rename CallSite.h to AbstractCallSite.h. NFC
The CallSite and ImmutableCallSite were removed in a previous
commit. So rename the file to
Why not allowing the following:
EXPLAIN [ ANALYZE ] [ VERBOSE ] [ ( option [, ...] ) ] statement
That has nothing to do with this patch.
Sure, it was just in passing, I was surprised by this restriction.
--
Fabien.
The safe option seems to be to not allow changing the ANALYZE option default.
EXPLAIN [ ANALYZE ] [ VERBOSE ] statement
Some more thoughts:
An argument for keeping it that way is that there is already a special
syntax to enable ANALYSE explicitly, which as far as I am concerned I
only ever
Bonjour Vik,
Do we really want default_explain_analyze ?
It sounds like bad news that EXPLAIN DELETE might or might not remove rows
depending on the state of a variable.
I have had sessions where not using ANALYZE was the exception, not the
rule. I would much prefer to type EXPLAIN
Hello,
My 0.02€, some of which may just show some misunderstanding on my part:
- you have clearly given quite a few thoughts about the what and how…
which makes your message an interesting read.
- Could this be proposed as some kind of extension, provided that enough
hooks are
Hello,
I've merged all time-related stuff (time_t, instr_time, int64) to use a
unique type (pg_time_usec_t) and set of functions/macros, which simplifies
the code somewhat.
Hm. I'm not convinced it's a good idea for pgbench to do its own thing
here.
I really think that the refactoring part
Hello Bruce,
* 34a0a81bfb
We already have:
Reformat tables containing function information for better
clarity (Tom Lane)
so it seems it is covered as part of this.
AFAICR this one is not by the same author, and although the point was about
better clarity, it was not
Hello Bruce,
* e1ff780485
I was told in this email thread to not include that one.
Ok.
* 34a0a81bfb
We already have:
Reformat tables containing function information for better
clarity (Tom Lane)
so it seems it is covered as part of this.
AFAICR this one is not by
Hello Bruce,
OK, section and item added, patch attached,
Thanks!
Some items that might be considered for the added documentation section:
* e1ff780485
* 34a0a81bfb
* e829337d42
* "Document color support (Peter Eisentraut)"
THIS WAS NOT DOCUMENTED BEFORE?
Not as such, AFAICR it
Hello Tom,
Here's a more fully fleshed out draft for this, with stylesheet
markup to get extra space around the column type names.
I find this added spacing awkward, especially as attribute names are
always one word anyway. I prefer the non-spaced approach.
It's certainly arguable that
Hello Tom,
Here's a more fully fleshed out draft for this, with stylesheet
markup to get extra space around the column type names.
I find this added spacing awkward, especially as attribute names are
always one word anyway. I prefer the non-spaced approach.
If spacing is discussed,
Hello Sergei,
Aggregate functions have syntax for ordering: just "select array_agg(i order by i)
from "
Described here:
https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES
Great, that's indeed enough for my usage, thanks for the tip!
The question remains,
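What array_agg(i ORDER BY i) computes per group can be spelled out in plain Python (an illustrative equivalent, not how the server executes it): collect each group's items into an array, in the requested order.

```python
from collections import defaultdict

# rows(key, i), as if produced by a table scan
rows = [("a", 3), ("b", 1), ("a", 1), ("b", 2), ("a", 2)]

groups = defaultdict(list)
for key, i in rows:
    groups[key].append(i)

# SELECT key, array_agg(i ORDER BY i) FROM rows GROUP BY key;
agg = {key: sorted(vals) for key, vals in groups.items()}
print(agg)  # {'a': [1, 2, 3], 'b': [1, 2]}
```

The ORDER BY inside the aggregate is what guarantees the element order within each resulting array; without it, the order is unspecified.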
Hello devs,
although having arrays is anathema in a relational world, pg has them,
and I find it useful for some queries, mostly in an aggregation to show
in a compact way what items were grouped together.
There are a few functions available to deal with arrays. Among these
functions,
Hello Tom,
Uh, can someone else give an opinion on this? I am not sure that how hard
or un-fun an item is should be used as a criterion.
Historically we don't document documentation changes at all, do we?
ISTM that the "we did not do it previously" is as weak an argument as
un-fun-ness:-)
hinted about in the default initialization command string, and removes a
spurious empty paragraph found nearby.
Thanks, patch applied.
Ok.
You might remove the "DOCUMENT THE DEFAULT…" in the release note.
I'm wondering about the commit comment: "Reported-by: Fabien COELHO",
actually
Hello Tom,
oid OID
Meh. I'm not a fan of overuse of upper case --- it's well established
that that's harder to read than lower or mixed case. And it's definitely
project policy that type names are generally treated as identifiers not
keywords, even if some of them happen to be keywords
Hello Bruce,
* "DOCUMENT THE DEFAULT GENERATION METHOD"
=> The default is still to generate data client-side.
My point is that the docs are not clear about this.
Indeed.
Can you fix it?
Sure. Attached patch adds an explicit sentence about it, as it was only
hinted about in the
Hello Tom,
oid oid
Row identifier
adrelid oid (references pg_class.oid)
The table this column belongs to
adnum int2 (references pg_attribute.attnum)
The number of the column
adbin pg_node_tree
The column default value, in nodeToString() representation. Use
CREATE DATABASE LOCALE option (Fabien COELHO)
* Add function gen_random_uuid to generate version 4 UUIDs (Fabien COELHO)
I'm not responsible for these, I just reviewed them. ISTM that the author
for both is the committer, Peter Eisentraut.
Maybe there is something amiss in the commit-log
Hello,
more random thoughts about syntax, semantics, and keeping it relational.
While I'm not a huge fan of it, one of the other databases implementing
this functionality does so using the syntax:
WITH ITERATIVE R AS '(' R0 ITERATE Ri UNTIL N (ITERATIONS | UPDATES) ')' Qf
Where N in
Hello Corey, Hello Peter,
My 0.02 € about the alternative syntaxes:
Peter:
I think a syntax that would fit better within the existing framework
would be something like
WITH RECURSIVE t AS (
SELECT base case
REPLACE ALL -- instead of UNION ALL
SELECT recursive case
)
A good
Hello Jonah,
Nice.
-- No ORDER/LIMIT is required with ITERATIVE as only a single tuple is
present
EXPLAIN ANALYZE
WITH ITERATIVE fib_sum (iteration, previous_number, new_number)
AS (SELECT 1, 0::numeric, 1::numeric
UNION ALL
SELECT (iteration + 1), new_number, (previous_number +
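The (truncated) WITH ITERATIVE example above carries a single tuple (iteration, previous_number, new_number) that each step replaces rather than appends, unlike WITH RECURSIVE. The same computation, written out in Python to show the semantics being proposed:

```python
def fib_sum(n):
    """Mirror the WITH ITERATIVE fib_sum tuple: start at (1, 0, 1) and on
    each iteration replace it with (iteration+1, new, previous+new)."""
    iteration, previous_number, new_number = 1, 0, 1
    while iteration < n:
        iteration, previous_number, new_number = (
            iteration + 1,
            new_number,
            previous_number + new_number,
        )
    return new_number

print([fib_sum(n) for n in range(1, 8)])  # [1, 1, 2, 3, 5, 8, 13]
```

Because only one tuple ever exists, no ORDER BY/LIMIT trick is needed to discard intermediate rows, which is the point made in the comment quoted above.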
Bonjour Michaël,
Hmm. It seems to me that this stuff needs to be more careful with the
function handling? For example, all those cases fail but they directly
pass down a variable that may not be defined, so shouldn't those results
be undefined as well instead of failing:
\set g
Hello Tom,
Before I spend more time on this, I want to make sure that people
are happy with this line of attack.
+1
I like it this way, because the structure is quite readable, which is the
point.
My 0.02€:
Maybe column header "Example Result" should be simply "Result", because
it is
Bonjour Michaël,
Patch v3 is also a rebase.
This has rotten for half a year, so I am marking it as returned with
feedback. There have been comments from Alvaro and Andres as well...
Attached a v4. I'm resurrecting this small patch, after "\aset" has been
added to pgbench (9d8ef988).
Hello Justin,
About v15, seen as one patch.
Patches serie applies cleanly, compiles, "make check" ok.
Documentation:
- indent documentation text around 80 cols, as done around?
- indent SQL example for readability and capitalize keywords
(pg_ls_dir_metadata)
- "For each file in a
Hello,
Do I need to precede those with some recursive chmod commands? Perhaps
the client should refuse to run if there is still something left after
these.
I think the latter would be a very good idea, just so that this sort of
failure is less obscure. Not sure about whether a recursive
Attached is an attempt at improving things. I have added an explicit note and
hijacked an existing example to better illustrate the purpose of the
function.
A significant part of the complexity of the patch is the overflow-handling
implementation of (a * b % c) for 64 bits integers.
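One standard way to compute (a * b) % c without overflowing 64-bit integers is shift-and-add ("Russian peasant") multiplication, where every intermediate stays below 2*c. The sketch below is illustrative only (the patch's C implementation is not shown here); Python integers do not overflow, so the method is checked against Python's exact arithmetic.

```python
def mulmod(a, b, c):
    """(a * b) % c with all intermediates < 2*c, so a C version
    fits in unsigned 64-bit arithmetic whenever c < 2**63."""
    assert 0 < c < 2**63
    a, b = a % c, b % c
    result = 0
    while b:
        if b & 1:
            result = (result + a) % c   # result + a < 2*c, no overflow in C
        a = (a + a) % c                 # a + a < 2*c, no overflow in C
        b >>= 1
    return result

a, b, c = 2**62 - 57, 2**61 - 3, 2**63 - 25
assert mulmod(a, b, c) == (a * b) % c
```

Alternatives in C include 128-bit intermediates (`__int128`) where available; the shift-and-add form is the portable fallback, at the cost of O(log b) iterations.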
Hi Corey,
ISTM that occurrences of these words elsewhere in the documentation should
link to the glossary definitions?
Yes, that's a big project. I was considering writing a script to compile
all the terms as search terms, paired with their glossary ids, and then
invoke git grep to identify
BTW it's now visible at:
https://www.postgresql.org/docs/devel/glossary.html
Awesome! Linking between defs and to relevant sections is great.
BTW, I'm in favor of "an SQL" because I pronounce it "ess-kew-el", but I
guess that people who say "sequel" would prefer "a SQL". Failing that, I'm
Hello Robert,
Done now. Meanwhile, two more machines have reported the mysterious message:
sh: ./configure: not found
...that first appeared on spurfowl a few hours ago. The other two
machines are eelpout and elver, both of which list Thomas Munro as a
maintainer. spurfowl lists Stephen
Hello David,
Agree. However, it would be nice to update the sentence below if I understand
it correctly.
"+ Comments, whitespace and continuations are handled in the same way as
in" pg_hba.conf
In the attached v3, I've tried to clarify comments and doc about
tokenization rules relating
Bonjour Michaël,
Sure. I meant that the feature would make sense for writing benchmark scripts
which would use \aset and be able to act on the success or failure of this
\aset, not to resurrect it for a hidden coverage test.
This could always be discussed for v14. We'll see.
Or v15, or never, who
Michaël,
ISTM that I submitted a patch to test whether a variable exists in pgbench,
like available in psql (:{?var} I think),
Not sure if improving the readability of the tests is a reason for
this patch. So I would suggest to just live with relying on debug()
for now to check that a
Hi Justin,
There are no assumptions about backslash escaping, quotes and such, which
seems reasonable given the lexing structure of the files, i.e. records of
space-separated words, and # line comments.
Quoting does allow words containing spaces:
Ok.
I meant that the continuation handling