Re: [HACKERS] Improvement in log message of logical replication worker
Umm, just skimming here -- this patch shows some LOG messages using elog() rather than ereport(), which seems bogus to me.

Also:

    "logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started"

surely there is a more convenient name than "logical replication table synchronization worker" for this process? I think just getting rid of the words "logical replication" there would be sufficient, since we don't have the concept of "table synchronization worker" in any other context.

More generally, the overall wording of this message seems a bit off. How about something along the lines of "starting synchronization for table \"%s\" in subscription \"%s\"" ?

--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Improvement in log message of logical replication worker
On Thu, May 18, 2017 at 12:36:46PM +0900, Masahiko Sawada wrote:
> On Thu, May 18, 2017 at 11:29 AM, Peter Eisentraut wrote:
> > On 5/12/17 00:30, Masahiko Sawada wrote:
> >> I got same log messages 'starting logical replication worker for
> >> subscription' total 5 times but actually 4 of them mean to launch
> >> table sync worker and another one means apply worker. We cannot
> >> distinguish them. Also, I got same log messages 'logical replication
> >> synchronization worker finished processing' total 4 times but I think
> >> it's better to show the table name in finish log message as well. Any
> >> thoughts?
> >
> > Yeah, that's quite a lot of messages for normal operation. I've been
> > playing around with it a little bit and came up with the attached patch
> > that produced a slightly reduced log volume and more consistent messages.
> >
> > I think we don't need a message from the launcher that it will launch a
> > worker and then the worker also reporting that it started, so I
> > downgraded the former to DEBUG1.
>
> Agreed. Autovacuum launcher also doesn't emit such log message.
>
> > A more radical solution would be to
> > downgrade all these messages to DEBUG1.
> >
> > We want to avoid showing OIDs in user-facing messages, but it's not
> > always easy to look up the names. See the patch for one solution.
>
> The patch looks good to me.
> There are some log messages saying just 'logical replication worker
> for subscription ...' in reread_subscription but should we add 'apply'
> to these messages in order for user to distinguish between apply
> worker and table sync worker?

[Action required within three days. This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item. Peter, since you committed the patch believed to have created it, you own this open item. If some other commit is more relevant or if this does not belong as a v10 open item, please let us know. Otherwise, please observe the policy on open item ownership[1] and send a status update within three calendar days of this message. Include a date for your subsequent status update. Testers may discover new open items at any time, and I want to plan to get them all fixed well in advance of shipping v10. Consequently, I will appreciate your efforts toward speedy resolution. Thanks.

[1] https://www.postgresql.org/message-id/20170404140717.GA2675809%40tornado.leadboat.com
Re: [HACKERS] Removal of plaintext password type references
Robert Haas writes:
> I guess it does seem likely that most users of the hook would need to
> do the same, but it seems confusing to pass the same function both x
> and f(x), so my vote is to not do that.

I guess what's in the back of my mind is that the password type might someday not be just a function of the password, but require other inputs. That is, if we change the hook signature as proposed, then the signature of get_password_type() also becomes part of that API. If someday f(x) needs to become f(x,y), that becomes either more API breakage for users of the hook, or no change at all because it's the callers' problem.

Maybe there's no reason to believe that that will ever happen.

> But I'm not disposed to spend
> a lot of energy arguing about it, so if other people feel differently,
> that's cool.

TBH, I'm not that hot about it either. But I'm thinking this is an API break we don't need.

			regards, tom lane
Re: [HACKERS] bumping HASH_VERSION to 3
On Sat, May 20, 2017 at 2:24 AM, Robert Haas wrote:
> On Tue, May 16, 2017 at 9:25 AM, Amit Kapila wrote:
>> On Tue, May 16, 2017 at 5:14 PM, Robert Haas wrote:
>>> On Tue, May 16, 2017 at 7:31 AM, Amit Kapila wrote:
>>>> I will send an updated patch once we agree on above points.
>>>
>>> Sounds good.
>>
>> Attached patch addresses all the comments as discussed.
>
> Committed.

Thanks.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] Removal of plaintext password type references
On Fri, May 19, 2017 at 5:31 PM, Tom Lane wrote:
>> Anyone else want to vote? So far I count 3-1 in favor of making this change.
>
> Actually, on looking at the final form of the patch, it's hard to think
> that it's not just useless API churn. The one existing hook user would
> have to turn around and call get_password_type() anyway, so it's not
> an improvement for that use-case. What's the argument that most other
> use-cases wouldn't need to do the same?

OK, make that 2-2 in favor of the change. I guess it does seem likely that most users of the hook would need to do the same, but it seems confusing to pass the same function both x and f(x), so my vote is to not do that. But I'm not disposed to spend a lot of energy arguing about it, so if other people feel differently, that's cool. I just want to reach a decision and either do this or drop it from the open items list.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] [POC] Faster processing at Gather node
On Fri, May 19, 2017 at 5:58 PM, Robert Haas wrote:
> On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih wrote:
>> While analysing the performance of TPC-H queries for the newly developed
>> parallel operators, viz, parallel index, bitmap heap scan, etc., we noticed
>> that the time taken by the gather node is significant. On investigation, as per
>> the current method it copies each tuple to the shared queue and notifies the
>> receiver. Since this copying is done in the shared queue, there is a lot of
>> locking and latching overhead.
>>
>> So, in this POC patch I tried to copy all the tuples into a local queue, thus
>> avoiding all the locks and latches. Once the local queue is filled to
>> its capacity, the tuples are transferred to the shared queue. Once all the
>> tuples are transferred, the receiver is sent the notification about the same.
>
> What if, instead of doing this, we switched the shm_mq stuff to use atomics?

That is one of the very first things we tried, but it didn't show us any improvement, probably because sending tuple-by-tuple over shm_mq is not cheap. Also, independently, we tried to reduce the frequency of SetLatch (used to notify the receiver), but that also didn't improve the results. Now, I think one thing that can be tried is to use atomics in shm_mq and reduce the frequency of notifying the receiver, but I am not sure whether that can give us better results than this idea.

There are a couple of other ideas which have been tried to improve the speed of Gather, like avoiding the extra copy of the tuple which we need to make before sending it (tqueueReceiveSlot->ExecMaterializeSlot) and increasing the size of the tuple queue, but none of those has shown any noticeable improvement. I am aware of all this because Dilip and I were involved offlist in brainstorming ideas with Rafia to improve the speed of Gather. I think it might have been better to show the results of the ideas that didn't work out, but I guess Rafia hasn't shared those with the intuition that nobody would be interested in hearing what didn't work.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
Re: [HACKERS] Regression in join selectivity estimations when using foreign keys
On 18 May 2017 at 20:28, David Rowley wrote:
> A vastly simplified example case is:
>
> create table fkest (a int, b int, c int unique, primary key(a,b));
> create table fkest1 (a int, b int, primary key(a,b));
>
> insert into fkest select x/10,x%10, x from generate_Series(1,400) x;
> insert into fkest1 select x/10,x%10 from generate_Series(1,400) x;
>
> alter table fkest1 add constraint fkest1_a_b_fkey foreign key (a,b)
> references fkest;
>
> analyze fkest;
> analyze fkest1;
>
> explain (costs on) select * from fkest f
> left join fkest1 f1 on f.a = f1.a and f.b = f1.b
> left join fkest1 f2 on f.a = f2.a and f.b = f2.b
> left join fkest1 f3 on f.a = f3.a and f.b = f3.b
> where f.c = 1;

I should have shown the EXPLAIN ANALYZE of this instead.

                                 QUERY PLAN
------------------------------------------------------------------------------
 Hash Left Join  (cost=24.15..41.89 rows=996 width=36) (actual time=0.430..0.463 rows=1 loops=1)
   Hash Cond: ((f.a = f3.a) AND (f.b = f3.b))
   ->  Hash Left Join  (cost=12.15..28.36 rows=100 width=28) (actual time=0.255..0.288 rows=1 loops=1)
         Hash Cond: ((f.a = f2.a) AND (f.b = f2.b))
         ->  Nested Loop Left Join  (cost=0.15..16.21 rows=10 width=20) (actual time=0.046..0.079 rows=1 loops=1)
               ->  Seq Scan on fkest f  (cost=0.00..8.00 rows=1 width=12) (actual time=0.013..0.045 rows=1 loops=1)
                     Filter: (c = 1)
                     Rows Removed by Filter: 399
               ->  Index Only Scan using fkest1_pkey on fkest1 f1  (cost=0.15..8.17 rows=1 width=8) (actual time=0.031..0.031 rows=1 loops=1)
                     Index Cond: ((a = f.a) AND (b = f.b))
                     Heap Fetches: 1
         ->  Hash  (cost=6.00..6.00 rows=400 width=8) (actual time=0.180..0.180 rows=400 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 24kB
               ->  Seq Scan on fkest1 f2  (cost=0.00..6.00 rows=400 width=8) (actual time=0.006..0.041 rows=400 loops=1)
   ->  Hash  (cost=6.00..6.00 rows=400 width=8) (actual time=0.162..0.162 rows=400 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 24kB
         ->  Seq Scan on fkest1 f3  (cost=0.00..6.00 rows=400 width=8) (actual time=0.010..0.040 rows=400 loops=1)
 Planning time: 0.409 ms
 Execution time: 0.513 ms
(19 rows)
in which you can obviously see the poor estimate propagating up the plan tree. If we add another left join the final estimate is even worse:

explain analyze select * from fkest f
left join fkest1 f1 on f.a = f1.a and f.b = f1.b
left join fkest1 f2 on f.a = f2.a and f.b = f2.b
left join fkest1 f3 on f.a = f3.a and f.b = f3.b
left join fkest1 f4 on f.a = f4.a and f.b = f4.b
where f.c = 1;

                                 QUERY PLAN
------------------------------------------------------------------------------
 Hash Left Join  (cost=36.15..69.06 rows=9915 width=44) (actual time=0.535..0.569 rows=1 loops=1)
   Hash Cond: ((f.a = f4.a) AND (f.b = f4.b))
   ->  Hash Left Join  (cost=24.15..41.89 rows=996 width=36) (actual time=0.371..0.404 rows=1 loops=1)
         Hash Cond: ((f.a = f3.a) AND (f.b = f3.b))
         ->  Hash Left Join  (cost=12.15..28.36 rows=100 width=28) (actual time=0.208..0.241 rows=1 loops=1)
               Hash Cond: ((f.a = f2.a) AND (f.b = f2.b))
               ->  Nested Loop Left Join  (cost=0.15..16.21 rows=10 width=20) (actual time=0.029..0.062 rows=1 loops=1)
                     ->  Seq Scan on fkest f  (cost=0.00..8.00 rows=1 width=12) (actual time=0.014..0.047 rows=1 loops=1)
                           Filter: (c = 1)
                           Rows Removed by Filter: 399
                     ->  Index Only Scan using fkest1_pkey on fkest1 f1  (cost=0.15..8.17 rows=1 width=8) (actual time=0.012..0.012 rows=1 loops=1)
                           Index Cond: ((a = f.a) AND (b = f.b))
                           Heap Fetches: 1
               ->  Hash  (cost=6.00..6.00 rows=400 width=8) (actual time=0.168..0.168 rows=400 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 24kB
                     ->  Seq Scan on fkest1 f2  (cost=0.00..6.00 rows=400 width=8) (actual time=0.008..0.043 rows=400 loops=1)
         ->  Hash  (cost=6.00..6.00 rows=400 width=8) (actual time=0.156..0.156 rows=400 loops=1)
               Buckets: 1024  Batches: 1  Memory Usage: 24kB
               ->  Seq Scan on fkest1 f3  (cost=0.00..6.00 rows=400 width=8) (actual time=0.006..0.035 rows=400 loops=1)
   ->  Hash  (cost=6.00..6.00 rows=400 width=8) (actual time=0.155..0.155 rows=400 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 24kB
         ->  Seq Scan on fkest1 f4  (cost=0.00..6.00 rows=400 width=8) (actual time=0.004..0.034 rows=400 loops=1)
 Planning time: 0.864 ms
 Execution time: 0.698 ms

--
David Rowley                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7
Re: [HACKERS] [Bug fix]If recovery.conf has target_session_attrs=read-write, the standby fails to start.
Michael Paquier writes:
> Do you think that it would be better to list the letter list for each
> keyword in repl_scanner.l or have something more generic? As not that
> many commands are added in the replication protocol, I would think
> that this is more than enough for each command, say:
> [Ss][Ll][Oo][Tt]	{ return K_SLOT; }
> etc.

Yeah, that's probably good enough. It might be worth the trouble to make out-of-line definitions, as in cubescan.l's handling of NaN and Infinity for instance, but that's just cosmetic.

			regards, tom lane
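[Editor's illustration] For readers unfamiliar with the "out-of-line definitions" style Tom mentions: flex lets you name sub-patterns in the definitions section and reference them in rules, which keeps case-insensitive keywords readable without flex's locale-dependent `-i` switch. The fragment below is a hypothetical sketch, not the actual repl_scanner.l contents:

```
/* Named character classes: one per letter, locale-independent */
S	[Ss]
L	[Ll]
O	[Oo]
T	[Tt]

%%

{S}{L}{O}{T}	{ return K_SLOT; }
```

Each keyword rule then spells out its letters via the named definitions, so case-insensitivity is explicit in the scanner source rather than a global flex option.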
Re: [HACKERS] [Bug fix]If recovery.conf has target_session_attrs=read-write, the standby fails to start.
On Sat, May 20, 2017 at 5:32 AM, Tom Lane wrote:
> We'd probably be better off to implement case-insensitivity the hard way.
> There is a reason why none of our other flex scanners use this switch.

Do you think that it would be better to list the letter list for each keyword in repl_scanner.l or have something more generic? As not that many commands are added in the replication protocol, I would think that this is more than enough for each command, say:

    [Ss][Ll][Oo][Tt]	{ return K_SLOT; }

etc.

> While I'm whining ... it looks like the other flex options selected here
> were cargo-culted in rather than being thought about. Surely we don't
> run syncrep_scanner often enough, nor over so much data, that it's a
> good tradeoff to use the options for larger tables.

Indeed, good catch.

--
Michael
Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)
On Sat, May 20, 2017 at 10:43 AM, Thomas Munro wrote:
> On Fri, May 19, 2017 at 6:35 PM, Amit Langote wrote:
>> On 2017/05/19 15:16, Thomas Munro wrote:
>>> Would TransitionCaptureState be a better name for this struct?
>>
>> Yes. Although, losing the Trigger prefix might make it sound a bit
>> ambiguous though. Right above its definition, we have TriggerData. So,
>> maybe TriggerTransitionCaptureState or TriggerTransitionCaptureData or
>> TriggerTransitionData may be worth considering.
>
> Ok, here's a version using TransitionCaptureState. Those other names
> seem too long, and "TriggerTransition" is already in use so
> "TriggerTransitionData" seems off the table. Having the word
> "capture" in there seems good, since this is an object that controls
> what we capture when we modify a set of tables. I hope that's clear.

Sent too soon. Several variables should also be renamed to make clear they refer to the transition capture state in effect, instead of vague names like 'transitions'. Sorry for the version churn.

--
Thomas Munro
http://www.enterprisedb.com

transition-tuples-from-child-tables-v9.patch
Description: Binary data
Re: [HACKERS] Introducing SNI in TLS handshake for SSL connections
On 4/24/17 22:26, Florin Asavoaie wrote:
> If there's nobody against this, I can try to do the patch myself,
> doesn't look too difficult (I expect it to simply work by
> calling SSL_set_tlsext_host_name(SSL_context, PQhost(conn))) somewhere
> in initialize_SSL in fe-secure-openssl.c.

I had to look up what SNI is: https://en.wikipedia.org/wiki/Server_Name_Indication

This seems useful. If you have a patch, please add it here: https://commitfest.postgresql.org/14/

--
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] TAP tests - installcheck vs check
A couple of thoughts I've been having that relate to this:

The traditional meaning of "installcheck" in GNU packages is to test against the installed code, whereas "check" tests before installation. Our concept of testing against a running server obviously does not apply to many kinds of software, so there is no standard name for it.

I think as our test suites expand, the concept of testing against a running server is not going to be feasible for many new and interesting things. I can see some marginal appeal for what we're currently doing, but I don't think it's useful to invest much more effort into it to make it work in more cases and situations.

What I'm thinking going forward is:

make check - as is

make installcheck - test against installed code, but start new instances etc. (which you currently can't do for the main test suite)

make runningcheck - test against running server, supported for certain tests

make check and installcheck would be very similar, except make check creates a temp install and sets some path variables.

A trick of sorts I saw in Python packages that might also be worth adopting is to make the temp install use symlinks pointing into the source tree. Then as soon as you rebuild, the temp install is up to date (unless you add or remove entire files). Then the temp install is free after the first run.

Thoughts?

--
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
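[Editor's illustration] The symlink trick can be demonstrated in isolation. This is a generic sketch with made-up file names, not PostgreSQL's actual build machinery:

```shell
set -e

# Simulate a source tree and a "temp install" directory.
srcdir=$(mktemp -d)
installdir=$(mktemp -d)

# "Build" a program, then install it as a symlink rather than a copy.
echo "binary v1" > "$srcdir/pg_prog"
ln -s "$srcdir/pg_prog" "$installdir/pg_prog"

# After a rebuild, the temp install is automatically up to date:
# the symlink still points at the freshly built file, so no
# re-install step is needed (unless files are added or removed).
echo "binary v2" > "$srcdir/pg_prog"
cat "$installdir/pg_prog"
```

The final `cat` reads the rebuilt content through the symlink, which is exactly why the temp install becomes "free" after the first run.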
Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)
On Fri, May 19, 2017 at 6:35 PM, Amit Langote wrote:
> On 2017/05/19 15:16, Thomas Munro wrote:
>> Would TransitionCaptureState be a better name for this struct?
>
> Yes. Although, losing the Trigger prefix might make it sound a bit
> ambiguous though. Right above its definition, we have TriggerData. So,
> maybe TriggerTransitionCaptureState or TriggerTransitionCaptureData or
> TriggerTransitionData may be worth considering.

Ok, here's a version using TransitionCaptureState. Those other names seem too long, and "TriggerTransition" is already in use so "TriggerTransitionData" seems off the table. Having the word "capture" in there seems good, since this is an object that controls what we capture when we modify a set of tables. I hope that's clear.

--
Thomas Munro
http://www.enterprisedb.com

transition-tuples-from-child-tables-v8.patch
Description: Binary data
Re: [HACKERS] Multiple table synchronizations are processed serially
On 19/05/17 21:47, Peter Eisentraut wrote:
> On 5/19/17 01:01, Masahiko Sawada wrote:
>> Seems all four table sync workers are launched at the almost same
>> time, but three workers of them get stuck in idle transaction state
>> when creating replication slot. That is these waiting workers cannot
>> proceed its work until first connected table sync worker finishes. ps
>> command shows the followings.
>
> Creating a replication slot waits for all transactions to finish. So if
> one of those transactions is a table copy of another subscription, it
> has to wait for that.

Well, IMHO this concrete example is partially an effect of the snapshot builder fixes we did with Andres. Before those fixes the on-disk snapshot may have been used in this situation. Makes me wonder whether we want to mark the exported snapshot as containing all transactions and use it if that's the case.

--
Petr Jelinek                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Removal of plaintext password type references
Robert Haas writes:
> On Thu, May 18, 2017 at 2:37 AM, Heikki Linnakangas wrote:
>> I still don't think this worth it to change the hook function's signature
>> for this. Even though it's not a big deal for external modules to adapt,
>> it's not zero effort either. And it's not that much nicer to us.

> I favor committing the proposed patch. Otherwise, it just becomes
> pointless baggage that we carry forever.

> Anyone else want to vote? So far I count 3-1 in favor of making this change.

Actually, on looking at the final form of the patch, it's hard to think that it's not just useless API churn. The one existing hook user would have to turn around and call get_password_type() anyway, so it's not an improvement for that use-case. What's the argument that most other use-cases wouldn't need to do the same?

			regards, tom lane
Re: [HACKERS] Removal of plaintext password type references
On Thu, May 18, 2017 at 2:37 AM, Heikki Linnakangas wrote:
> I still don't think this worth it to change the hook function's signature
> for this. Even though it's not a big deal for external modules to adapt,
> it's not zero effort either. And it's not that much nicer to us.

I favor committing the proposed patch. Otherwise, it just becomes pointless baggage that we carry forever.

Anyone else want to vote? So far I count 3-1 in favor of making this change.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] bumping HASH_VERSION to 3
On Tue, May 16, 2017 at 9:25 AM, Amit Kapila wrote:
> On Tue, May 16, 2017 at 5:14 PM, Robert Haas wrote:
>> On Tue, May 16, 2017 at 7:31 AM, Amit Kapila wrote:
>>> I will send an updated patch once we agree on above points.
>>
>> Sounds good.
>
> Attached patch addresses all the comments as discussed.

Committed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] HINT message for "ALTER SUBSCRIPTION.. WITH" need to change with SET keyword
On 5/19/17 06:58, Dilip Kumar wrote:
> On Fri, May 19, 2017 at 3:41 PM, tushar wrote:
>> postgres=# drop subscription sub;
>> ERROR:  could not connect to publisher when attempting to drop the
>> replication slot "pub"
>> DETAIL:  The error was: could not connect to server: No such file or
>> directory
>>         Is the server running locally and accepting
>>         connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
>> HINT:  Use ALTER SUBSCRIPTION ... WITH (slot_name = NONE) to disassociate
>> the subscription from the slot.
>>
>> expected = "HINT: Use ALTER SUBSCRIPTION ... SET (slot_name = NONE) to
>> disassociate the subscription from the slot."
>
> Seems like syntax got changed in
> b807f59828fbc02fea612e1cbc0066c6dfa3be9b commit but missed to change
> the hint. Attached patch fixes that.

Fixed, thanks.

--
Peter Eisentraut              http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] [Bug fix]If recovery.conf has target_session_attrs=read-write, the standby fails to start.
"Higuchi, Daisuke" writes:
> By adding flex option '-i', replication command parser could be more
> flexible.
> This option is already used for syncrep_scanner.c, so it is not strange to
> add for repl_scanner.c too.

Really? That wasn't an especially bright idea IMO, because it means that the scanner's behavior will be dependent on the locale flex was run in. An example here is that Turkish locale is likely to have a different idea of what "FIRST" matches than other locales do. Indeed, if I run the flex build in a non-C locale, I get warnings like

    $ LANG=en_US /usr/bin/flex -b -CF -p -i -o'syncrep_scanner.c' syncrep_scanner.l
    syncrep_scanner.l:91: warning, the character range [\200-\377] is ambiguous in a case-insensitive scanner
    syncrep_scanner.l:91: warning, the character range [\200-\377] is ambiguous in a case-insensitive scanner

We'd probably be better off to implement case-insensitivity the hard way. There is a reason why none of our other flex scanners use this switch.

While I'm whining ... it looks like the other flex options selected here were cargo-culted in rather than being thought about. Surely we don't run syncrep_scanner often enough, nor over so much data, that it's a good tradeoff to use the options for larger tables.

			regards, tom lane
Re: [HACKERS] Re: [doc fix] PG10: wroing description on connect_timeout when multiple hosts are specified
On Tue, May 16, 2017 at 11:58 PM, Michael Paquier wrote:
> Thanks for the updated patch. This looks good to me.

Committed. I also added a slight tweak to the wording of the documentation.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Peter Eisentraut writes:
> On 5/19/17 11:22, Tom Lane wrote:
>> I certainly would rather that our version matched something that's under
>> active maintenance someplace. But it seems like there are two good
>> arguments for having a copy in our tree:

> Is pgindent going to be indented by pgindent?

If we were going to keep it in our tree, I'd plan to add an exclusion rule to keep pgindent from touching it, as we already have for assorted other files that are copied from external projects. However, it seems like "keep it in a separate repo" is winning, so it's moot.

			regards, tom lane
Re: [HACKERS] [Bug fix]If recovery.conf has target_session_attrs=read-write, the standby fails to start.
On Fri, May 19, 2017 at 12:51 AM, Michael Paquier wrote:
> On Fri, May 19, 2017 at 1:16 PM, Higuchi, Daisuke wrote:
>> The reason of this problem is that sending 'show transaction_read_only' is
>> failed.
>> 'show' must be in uppercase letters because the streaming replication
>> protocol only accepts capital letters.
>> In psql and libpq applications that do not use the streaming replication
>> protocol, this error does not occur.
>>
>> The attached patch fixes this. This patch changes 'show
>> transaction_read_only' to 'SHOW transaction_read_only'.
>
>     if (!PQsendQuery(conn,
> -                    "show transaction_read_only"))
> +                    "SHOW transaction_read_only"))
>
> Or perhaps the replication command parser could be made more flexible
> with lower-case characters for replication commands?

Maybe, but that sounds a lot more likely to inflict collateral damage of some kind than the originally-proposed fix, so I've committed that fix instead.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
[HACKERS] Precision and rounding fixes for money type
I looked a bit more carefully at cash.c in the wake of bug #14663,
https://www.postgresql.org/message-id/20170519164653.29941.19098%40wrigleys.postgresql.org

It seems to me that there are three different bugs in the multiplication and division operators:

1. As noted in the bug thread, applying rint() to the result of an integer division is useless, and it will cause precision loss if the 64-bit result exceeds 2^52 or thereabouts. We should drop it.

2. On the other hand, the cash-times-float operators really should apply rint() rather than allowing the default truncation behavior to happen when converting the float product to int64.

3. At least with my compiler (gcc 4.4.7), it seems that arithmetic between an int64 value and a float4 value is performed by casting the int64 to float4 and doing the arithmetic in float4. This results in really serious precision loss, since the result's only good to six digits or so. ISTM we'd be well advised to have the cash-and-float4 operators widen the float4 input to float8 and do the arithmetic in float8, so that they don't lose more precision than they have to. On modern machines there's unlikely to be any detectable speed difference.

The attached patch rectifies these things and adds documentation about the truncation behavior of cash division. I propose to apply all of it to HEAD. What I'm less sure about is how much of it is a candidate to back-patch. I think point 1 (precision loss in what should be an exact integer operation) is a clear bug and we should back-patch it. But the other cases are not as open-and-shut; maybe we should just change those behaviors in HEAD.

Thoughts?
			regards, tom lane

diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index 42b2bb7..a322049 100644
*** a/doc/src/sgml/datatype.sgml
--- b/doc/src/sgml/datatype.sgml
*** SELECT '52093.89'::money::numeric::float
*** 983,988 ****
--- 983,993 ----
+     Division of a money value by an integer value is performed
+     with truncation of the fractional part towards zero.  To get a rounded
+     result, divide by a floating-point value, or cast the money
+     value to numeric before dividing and back to money
+     afterwards.  (The latter is preferable to avoid risking precision loss.)
      When a money value is divided by another money value, the result is
      double precision (i.e., a pure number, not money); the currency units
      cancel each other out in the division.
diff --git a/src/backend/utils/adt/cash.c b/src/backend/utils/adt/cash.c
index 5cb086e..a170294 100644
*** a/src/backend/utils/adt/cash.c
--- b/src/backend/utils/adt/cash.c
*** cash_mul_flt8(PG_FUNCTION_ARGS)
*** 667,673 ****
  	float8		f = PG_GETARG_FLOAT8(1);
  	Cash		result;

! 	result = c * f;
  	PG_RETURN_CASH(result);
  }
--- 667,673 ----
  	float8		f = PG_GETARG_FLOAT8(1);
  	Cash		result;

! 	result = rint(c * f);
  	PG_RETURN_CASH(result);
  }
*** flt8_mul_cash(PG_FUNCTION_ARGS)
*** 682,688 ****
  	Cash		c = PG_GETARG_CASH(1);
  	Cash		result;

! 	result = f * c;
  	PG_RETURN_CASH(result);
  }
--- 682,688 ----
  	Cash		c = PG_GETARG_CASH(1);
  	Cash		result;

! 	result = rint(f * c);
  	PG_RETURN_CASH(result);
  }
*** cash_mul_flt4(PG_FUNCTION_ARGS)
*** 717,723 ****
  	float4		f = PG_GETARG_FLOAT4(1);
  	Cash		result;

! 	result = c * f;
  	PG_RETURN_CASH(result);
  }
--- 717,723 ----
  	float4		f = PG_GETARG_FLOAT4(1);
  	Cash		result;

! 	result = rint(c * (float8) f);
  	PG_RETURN_CASH(result);
  }
*** flt4_mul_cash(PG_FUNCTION_ARGS)
*** 732,738 ****
  	Cash		c = PG_GETARG_CASH(1);
  	Cash		result;

! 	result = f * c;
  	PG_RETURN_CASH(result);
  }
--- 732,738 ----
  	Cash		c = PG_GETARG_CASH(1);
  	Cash		result;

! 	result = rint((float8) f * c);
  	PG_RETURN_CASH(result);
  }
*** cash_div_flt4(PG_FUNCTION_ARGS)
*** 753,759 ****
  				(errcode(ERRCODE_DIVISION_BY_ZERO),
  				 errmsg("division by zero")));

! 	result = rint(c / f);
  	PG_RETURN_CASH(result);
  }
--- 753,759 ----
  				(errcode(ERRCODE_DIVISION_BY_ZERO),
  				 errmsg("division by zero")));

! 	result = rint(c / (float8) f);
  	PG_RETURN_CASH(result);
  }
*** cash_div_int8(PG_FUNCTION_ARGS)
*** 802,808 ****
  				(errcode(ERRCODE_DIVISION_BY_ZERO),
  				 errmsg("division by zero")));

! 	result = rint(c / i);
  	PG_RETURN_CASH(result);
  }
--- 802,808 ----
  				(errcode(ERRCODE_DIVISION_BY_ZERO),
  				 errmsg("division by zero")));

! 	result = c / i;
  	PG_RETURN_CASH(result);
  }
*** cash_div_int4(PG_FUNCTION_ARGS)
*** 854,860 ****
  				(errcode(ERRCODE_DIVISION_BY_ZERO),
  				 errmsg("division by zero")));

! 	result = rint(c / i);
  	PG_RETURN_CASH(result);
  }
--- 854,860 ----
  				(errcode(ERRCODE_DIVISION_BY_ZERO),
  				 errmsg("division by zero")));

! 	result = c / i;
  	PG_RETURN_CASH(result);
  }
Re: [HACKERS] Multiple table synchronizations are processed serially
On 5/19/17 01:01, Masahiko Sawada wrote: > Seems all four table sync workers are launched at the almost same > time, but three workers of them get stuck in idle transaction state > when creating replication slot. That is these waiting workers cannot > proceed its work until first connected table sync worker finishes. ps > command shows the followings. Creating a replication slot waits for all transactions to finish. So if one of those transactions is a table copy of another subscription, it has to wait for that. You can avoid that by creating all the slots first and then triggering the initial table copy separately. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [bug fix] PG10: libpq doesn't connect to alternative hosts when some errors occur
On Thu, May 18, 2017 at 8:11 PM, Michael Paquier wrote: > On Thu, May 18, 2017 at 11:30 PM, Robert Haas wrote: >> On Thu, May 18, 2017 at 7:06 AM, Michael Paquier >> wrote: >>> FWIW, I am of the opinion to not have an implementation based on any >>> SQLSTATE codes, as well as not doing something similar to JDBC. >>> Keeping things simple is one reason, a second is that the approach >>> taken by libpq is correct at its root. >> >> Because why? > > Because it is critical to let the user know as well *why* an error > happened. Imagine that this feature is used with multiple nodes, all > primaries. If a DB admin busted the credentials in one of them then > all the load would be redirected on the other nodes, without knowing > what is actually causing the error. Then the node where the > credentials have been changed would just run idle, and the application > would be unaware of that. The entire purpose of an application-level failover feature is to make the application unaware of failures. That's like complaining that the stove gets hot when you turn it on. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Re: [BUGS] BUG #14657: Server process segmentation fault in v10, May 10th dev snapshot
On Fri, May 19, 2017 at 6:07 AM, Ashutosh Bapat wrote: > We still have the same copy shared across multiple append paths and > set_plan_refs would change it underneath those. May not be a > problem right now but may be a problem in the future. I agree. I think it's better for the path-creation functions to copy the list, so that there is no surprising sharing of substructure. set_plan_refs() obviously expects this data to be unshared, and this seems like the best way to ensure that's true in all cases. Committed that way. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On 5/19/17 11:22, Tom Lane wrote: > I certainly would rather that our version matched something that's under > active maintenance someplace. But it seems like there are two good > arguments for having a copy in our tree: Is pgindent going to be indented by pgindent? -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On 5/19/17 13:31, Alvaro Herrera wrote: > I favor having indent in a separate repository in our Git server, for > these reasons: > 0. it's under our control (so we can change rules as we see fit) > 1. we can have Piotr as a committer there > 2. we can use the same pgindent version for all Pg branches I am also in favor of that. 3. We can use pgindent for external code. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: [HACKERS] Event triggers + table partitioning cause server crash in current master
On Wed, May 17, 2017 at 5:34 PM, Noah Misch wrote: > [Action required within three days. This is a generic notification.] > > The above-described topic is currently a PostgreSQL 10 open item. Robert, > since you committed the patch believed to have created it, you own this open > item. If some other commit is more relevant or if this does not belong as a > v10 open item, please let us know. Otherwise, please observe the policy on > open item ownership[1] and send a status update within three calendar days of > this message. Include a date for your subsequent status update. Testers may > discover new open items at any time, and I want to plan to get them all fixed > well in advance of shipping v10. Consequently, I will appreciate your efforts > toward speedy resolution. Thanks. I believe that the originally-reported crash has been fixed by 3ec76ff1f2cf52e9b900349957b42d28128b7bc7. I have pushed ac8d7e1b834e252c9aa8d5750f70a86ca74228b8 to fix the other issue which Amit spotted. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
I wrote: > What I was just looking at is the possibility of absorbing struct > tags ("xllist" in the above) as if they were typedef names. In > at least 95% of our usages, if a struct has a tag then the tag is > also the struct's typedef name. The reason this is interesting > is that it looks like (on at least Linux and macOS) the debug info > captures struct tags even when it misses the corresponding typedef. > We could certainly create a coding rule that struct tags *must* > match struct typedef names for our own code, but I'm not sure what > violations of that convention might appear in system headers. I did an experiment with seeing what would happen to the typedef list if we included struct tags. On my Linux box, that adds about 10% more names (3343 instead of 3028). A lot of them would be good to have, but there are a lot of others that maybe not so much. See attached diff output. I hesitate to suggest any rule as grotty as "take struct tags only if they begin with an upper-case letter", but that would actually work really well, looks like. 
regards, tom lane --- typedefs.std 2017-05-19 14:41:15.357406399 -0400 +++ typedefs.log 2017-05-19 14:46:11.978739384 -0400 @@ -39,17 +39,24 @@ AggSplit AggState AggStatePerAgg +AggStatePerAggData AggStatePerGroup +AggStatePerGroupData AggStatePerHash +AggStatePerHashData AggStatePerPhase +AggStatePerPhaseData AggStatePerTrans +AggStatePerTransData AggStrategy Aggref AggrefExprState AlenState Alias AllocBlock +AllocBlockData AllocChunk +AllocChunkData AllocPointer AllocSet AllocSetContext @@ -118,6 +125,7 @@ ArrayExprIterState ArrayIOData ArrayIterator +ArrayIteratorData ArrayMapState ArrayMetaState ArrayParseState @@ -153,6 +161,7 @@ BITVECP BMS_Comparison BMS_Membership +BND BN_CTX BOX BTArrayKeyInfo @@ -167,6 +176,7 @@ BTPageStat BTPageState BTParallelScanDesc +BTParallelScanDescData BTScanOpaque BTScanOpaqueData BTScanPos @@ -251,6 +261,7 @@ BufFile Buffer BufferAccessStrategy +BufferAccessStrategyData BufferAccessStrategyType BufferCachePagesContext BufferCachePagesRec @@ -263,6 +274,7 @@ BuildAccumulator BuiltinScript BulkInsertState +BulkInsertStateData CACHESIGN CAC_state CEOUC_WAIT_MODE @@ -295,6 +307,7 @@ CheckpointStatsData CheckpointerRequest CheckpointerShmemStruct +CheckpointerSlotMapping Chromosome City CkptSortItem @@ -351,12 +364,14 @@ CompressorState ConditionVariable ConditionalStack +ConditionalStackData ConfigData ConfigVariable ConnCacheEntry ConnCacheKey ConnStatusType ConnType +ConnectionOption ConnectionStateEnum ConsiderSplitContext Const @@ -424,6 +439,7 @@ CustomExecMethods CustomOutPtrType CustomPath +CustomPathMethods CustomScan CustomScanMethods CustomScanState @@ -453,6 +469,7 @@ DeclareCursorStmt DecodedBkpBlock DecodingOutputState +DecomprData DefElem DefElemAction DefaultACLInfo @@ -484,6 +501,7 @@ DomainIOData DropBehavior DropOwnedStmt +DropRelationCallbackState DropReplicationSlotCmd DropRoleStmt DropStmt @@ -499,6 +517,10 @@ DumpableObjectType DynamicFileList DynamicZoneAbbrev +ECPGgeneric_varchar +ECPGstruct_member +ECPGtype 
+ECPGtype_information_cache EC_KEY EDGE ENGINE @@ -517,6 +539,7 @@ EditableObjectType ElementsState EnableTimeoutParams +EncStat EndBlobPtrType EndBlobsPtrType EndDataPtrType @@ -607,6 +630,7 @@ FieldStore File FileFdwExecutionState +FileFdwOption FileFdwPlanState FileName FileNameMap @@ -828,7 +852,9 @@ GinPostingList GinQualCounts GinScanEntry +GinScanEntryData GinScanKey +GinScanKeyData GinScanOpaque GinScanOpaqueData GinState @@ -844,6 +870,7 @@ GistSplitUnion GistSplitVector GlobalTransaction +GlobalTransactionData GrantObjectType GrantRoleStmt GrantStmt @@ -900,8 +927,11 @@ HashJoin HashJoinState HashJoinTable +HashJoinTableData HashJoinTuple +HashJoinTupleData HashMemoryChunk +HashMemoryChunkData HashMetaPage HashMetaPageData HashPageOpaque @@ -920,6 +950,7 @@ HeadlineParsedText HeadlineWordEntry HeapScanDesc +HeapScanDescData HeapTuple HeapTupleData HeapTupleFields @@ -967,6 +998,7 @@ IndexRuntimeKeyInfo IndexScan IndexScanDesc +IndexScanDescData IndexScanState IndexStateFlagsAction IndexStmt @@ -1008,6 +1040,7 @@ IterateJsonStringValuesState JEntry JHashState +JhashState Join JoinCostWorkspace JoinExpr @@ -1132,6 +1165,7 @@ LogicalTapeSet MAGIC MBuf +MDCBufData MJEvalResult MVDependencies MVDependency @@ -1152,6 +1186,7 @@ MergeAppendState MergeJoin MergeJoinClause +MergeJoinClauseData MergeJoinState MergePath MergeScanSelCache @@ -1211,10 +1246,14 @@ NullTestType Numeric NumericAggState +NumericData NumericDigit +NumericLong +NumericShort NumericSortSupport NumericSumAccum NumericVar +ONEXIT OP OSAPerGroupState OSAPerQueryState @@ -1241,6 +1280,7 @@ OidOptions OkeysState OldSerXidControl +OldSerXidControlData OldSnapshotControlData OldToNewMapping OldToNewMappingData @@ -1303,6 +1343,7 @@
Re: [HACKERS] release note addition
On Fri, May 19, 2017 at 01:35:41PM -0400, Peter Eisentraut wrote: > A small patch for the release notes. The first hunk removes a duplicate > item. The second hunk adds another item to the incompatibility list. > > I can commit this myself but want to give Bruce a chance so I don't mess > up his workflow. You can commit it yourself, but I removed the first item earlier today in a commit. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + As you are, so once was I. As I am, so you will be. + + Ancient Roman grave inscription +
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Tom Lane wrote: > Robert Haas writes: > > Yeah, but those advantages could also be gained by putting the > > pgindent tree on git.postgresql.org in a separate repository. Having > > it in the same repository as the actual PostgreSQL code is not > > required nor, in my opinion, particularly desirable. > > It adds an extra step to what a developer has to do to get pgindent > up and running, so it doesn't seem to me like it's helping the goal > of reducing the setup overhead. I favor having indent in a separate repository in our Git server, for these reasons: 0. it's under our control (so we can change rules as we see fit) 1. we can have Piotr as a committer there 2. we can use the same pgindent version for all Pg branches I'm thinking that whenever we change the indent rules, we would re-indent supported back-branches, just as Tom's proposing we'd do now (which I endorse). We wouldn't change the rules often, but say if we leave some wonky typedef behavior alone for now, and somebody happens to fix it in the future, then the fix would apply not only to the current tree at the time but also to all older trees, which makes sense. Now, there is a process that can be followed to update a *patch* from a "pre-indent upstream PG" to a "post-indent upstream PG", to avoid manual work in rebasing the patch past pgindent. This can easily be used on branches that can be rebased, also (since they are essentially just a collection of patches). One problem that remains is that this doesn't apply easily to branches that get merged without rebase. I *think* it should be possible to come up with a process that creates a merge commit using pgindent, but I haven't tried. -- Álvaro Herrera https://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
[HACKERS] release note addition
A small patch for the release notes. The first hunk removes a duplicate item. The second hunk adds another item to the incompatibility list. I can commit this myself but want to give Bruce a chance so I don't mess up his workflow. -- Peter Eisentraut http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services Attachment: release-notes.patch
Re: [HACKERS] [POC] hash partitioning
On Fri, May 19, 2017 at 5:32 AM, amul sulwrote: > Updated patch attached. 0001-patch rebased against latest head. > 0002-patch also incorporates code comments and error message changes > as per Robert's & your suggestions. Thanks ! -if (spec->strategy != PARTITION_STRATEGY_LIST) -elog(ERROR, "invalid strategy in partition bound spec"); +Assert(spec->strategy == PARTITION_STRATEGY_LIST); Let's just drop these hunks. I realize this is a response to a review comment I made, but I take it back. If the existing code is already doing it this way, there's no real need to revise it. The patch doesn't even make it consistent anyway, since elsewhere you elog() for a similar case. Perhaps elog() is best anyway. -partitioning methods include range and list, where each partition is -assigned a range of keys and a list of keys, respectively. +partitioning methods include hash, range and list, where each partition is +assigned a modulus and remainder of keys, a range of keys and a list of +keys, respectively. I think this sentence has become too long and unwieldy, and is more unclear than helpful. I'd just write "The currently supported partitioning methods are list, range, and hash." The use of the word include is actually wrong here, because it implies that there are more not mentioned here, which is false. - expression. If no btree operator class is specified when creating a - partitioned table, the default btree operator class for the datatype will - be used. If there is none, an error will be reported. + expression. List and range partitioning uses only btree operator class. + Hash partitioning uses only hash operator class. If no operator class is + specified when creating a partitioned table, the default operator class + for the datatype will be used. If there is none, an error will be + reported. 
+ I suggest: If no operator class is specified when creating a partitioned table, the default operator class of the appropriate type (btree for list and range partitioning, hash for hash partitioning) will be used. If there is none, an error will be reported. + + Since hash partitiong operator class, provide only equality, not ordering, + collation is not relevant in hash partition key column. An error will be + reported if collation is specified. partitiong -> partitioning. Also, remove the comma after "operator class" and change "not relevant in hash partition key column" to "not relevant for hash partitioning". Also change "if collation is specified" to "if a collation is specified". + Create a hash partitioned table: + +CREATE TABLE orders ( +order_id bigint not null, +cust_id bigint not null, +status text +) PARTITION BY HASH (order_id); + Move this down so it's just above the example of creating partitions. + * For range and list partitioned tables, datums is an array of datum-tuples + * with key->partnatts datums each. + * For hash partitioned tables, it is an array of datum-tuples with 2 datums, + * modulus and remainder, corresponding to a given partition. Second line is very short; reflow as one paragraph. * In case of range partitioning, it stores one entry per distinct range * datum, which is the index of the partition for which a given datum * is an upper bound. + * In the case of hash partitioning, the number of the entries in the indexes + * array is same as the greatest modulus amongst all partitions. For a given + * partition key datum-tuple, the index of the partition which would accept that + * datum-tuple would be given by the entry pointed by remainder produced when + * hash value of the datum-tuple is divided by the greatest modulus. Insert line break before the new text as a paragraph break. +charstrategy;/* hash, list or range bounds? */ Might be clearer to just write /* hash, list, or range? */ or /* bounds for hash, list, or range? 
*/ +static uint32 compute_hash_value(PartitionKey key, Datum *values, bool *isnull); + I think there should be a blank line after this but not before it. I don't really see why hash partitioning needs to touch partition_bounds_equal() at all. Why can't the existing logic work for hash partitioning without change? +valid_bound = true; valid_modulus, maybe? - errmsg("data type %s has no default btree operator class", - format_type_be(atttype)), - errhint("You must specify a btree operator class or define a default btree operator class for the data type."))); + errmsg("data type %s has no default %s operator class", + format_type_be(atttype), am_method), + errhint("You must specify a %s operator class or define a default %s operator class for the data
[HACKERS] Allowing dash character in LTREE
I propose to allow the `-` char in the set of allowed label characters (currently a-zA-Z0-9_, per https://www.postgresql.org/docs/9.6/static/ltree.html). The reason is to make it easier to use base64-style ids (like https://github.com/dylang/shortid, ...) and UUIDs in labels. `-` is also not used as a query operator, which is a plus. Source: https://doxygen.postgresql.org/ltree_8h.html#a1cb5d5dd0c364d720854e8a250967dd1
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Andres Freund writes: > On 2017-05-19 12:21:52 -0400, Robert Haas wrote: >> Yeah, but those advantages could also be gained by putting the >> pgindent tree on git.postgresql.org in a separate repository. Having >> it in the same repository as the actual PostgreSQL code is not >> required nor, in my opinion, particularly desirable. > I'm of the contrary opinion. A lot of the regular churn due to pgindent > right now is because it's inconvenient to run. Having to clone a > separate repository, compile that project, put it into PATH (fun if > there's multiple versions), run pgindent, discover typedefs.list is out > of date, update, run, ... is pretty much a guarantee that'll continue. > If we had a make indent that computed local typedefs list, *added* new > but not removed old ones, we could get much closer to just always being > properly indented. I hadn't really thought of automating it to that extent, but yeah, that seems like an interesting prospect. > The cost of putting it somewhere below src/tools/pgindent seems fairly > minor. I think the main cost would be bloating distribution tarballs. Although we're talking about adding ~50K to tarballs that are already pushing 20MB, so realistically who's going to notice? If you want to cut the tarball size, let's reopen the discussion about keeping release notes since the dawn of time. Also, having the support in distributed tarballs is not all bad, because it would allow someone working from a tarball rather than a git pull to have pgindent support. Dunno if there are any such someones anymore, but maybe they're out there. regards, tom lane
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Heikki Linnakangas writes: > On 05/19/2017 06:48 PM, Tom Lane wrote: >> That's going to catch a lot of things that are just variables, though. >> It might be all right as long as there was manual filtering after it. > At a quick glance, there are only a couple of them. These two cases > caught my eye. In twophase.c: > static struct xllist > { > ... > } records; > IMHO it would actually be an improvement if there was a space rather > than a tab there. Agreed, but if "records" were considered a typedef name, that would likely screw up the formatting of code referencing it. Maybe less badly with this version of indent than our old one, not sure. What I was just looking at is the possibility of absorbing struct tags ("xllist" in the above) as if they were typedef names. In at least 95% of our usages, if a struct has a tag then the tag is also the struct's typedef name. The reason this is interesting is that it looks like (on at least Linux and macOS) the debug info captures struct tags even when it misses the corresponding typedef. We could certainly create a coding rule that struct tags *must* match struct typedef names for our own code, but I'm not sure what violations of that convention might appear in system headers. regards, tom lane
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Robert Haas writes: > On Fri, May 19, 2017 at 11:22 AM, Tom Lane wrote: >> I certainly would rather that our version matched something that's under >> active maintenance someplace. But it seems like there are two good >> arguments for having a copy in our tree: >> >> * easy accessibility for PG developers >> >> * at any given time we need to be using a specific "blessed" version, >> so that all developers can get equivalent results. There's pretty much >> no chance of that happening if we depend on distro-provided packages, >> even if those share a common upstream. > Yeah, but those advantages could also be gained by putting the > pgindent tree on git.postgresql.org in a separate repository. Having > it in the same repository as the actual PostgreSQL code is not > required nor, in my opinion, particularly desirable. It adds an extra step to what a developer has to do to get pgindent up and running, so it doesn't seem to me like it's helping the goal of reducing the setup overhead. regards, tom lane
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On 2017-05-19 12:21:52 -0400, Robert Haas wrote: > On Fri, May 19, 2017 at 11:22 AM, Tom Lane wrote: > > I certainly would rather that our version matched something that's under > > active maintenance someplace. But it seems like there are two good > > arguments for having a copy in our tree: > > > > * easy accessibility for PG developers > > > > * at any given time we need to be using a specific "blessed" version, > > so that all developers can get equivalent results. There's pretty much > > no chance of that happening if we depend on distro-provided packages, > > even if those share a common upstream. > > Yeah, but those advantages could also be gained by putting the > pgindent tree on git.postgresql.org in a separate repository. Having > it in the same repository as the actual PostgreSQL code is not > required nor, in my opinion, particularly desirable. I'm of the contrary opinion. A lot of the regular churn due to pgindent right now is because it's inconvenient to run. Having to clone a separate repository, compile that project, put it into PATH (fun if there's multiple versions), run pgindent, discover typedefs.list is out of date, update, run, ... is pretty much a guarantee that'll continue. If we had a make indent that computed local typedefs list, *added* new but not removed old ones, we could get much closer to just always being properly indented. The cost of putting it somewhere below src/tools/pgindent seems fairly minor. - Andres
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On Fri, May 19, 2017 at 11:22 AM, Tom Lane wrote: > I certainly would rather that our version matched something that's under > active maintenance someplace. But it seems like there are two good > arguments for having a copy in our tree: > > * easy accessibility for PG developers > > * at any given time we need to be using a specific "blessed" version, > so that all developers can get equivalent results. There's pretty much > no chance of that happening if we depend on distro-provided packages, > even if those share a common upstream. Yeah, but those advantages could also be gained by putting the pgindent tree on git.postgresql.org in a separate repository. Having it in the same repository as the actual PostgreSQL code is not required nor, in my opinion, particularly desirable. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] [ANNOUNCE] PostgreSQL 10 Beta 1 Released!
On Fri, May 19, 2017 at 04:10:09AM +, Huong Dangminh wrote: > Hi, > > > * 10 Beta Release Notes: > > https://www.postgresql.org/docs/devel/static/release-10.html > > Just a minute thing, but changing of hot_standby default value is not fully > noted in release-10.sgml. > Please find the attached patch. Patch applied. Sorry I missed this. I must have been so focused on how to add the item that I forgot the commit details. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On 05/19/2017 06:48 PM, Tom Lane wrote: Heikki Linnakangas writes: You can get a pretty good typedefs list just by looking for the pattern "} <name>;". That's going to catch a lot of things that are just variables, though. It might be all right as long as there was manual filtering after it. At a quick glance, there are only a couple of them. These two cases caught my eye. In twophase.c: static struct xllist { StateFileChunk *head; /* first data block in the chain */ StateFileChunk *tail; /* last block in chain */ uint32 num_chunks; uint32 bytes_free; /* free bytes left in tail block */ uint32 total_len; /* total data bytes in chain */ } records; And this in informix.c: static struct { long val; int maxdigits; int digits; int remaining; char sign; char *val_string; } value; IMHO it would actually be an improvement if there was a space rather than a tab there. But I'm not sure what else it would mess up to consider those typedef names. And those are awfully generic names; wouldn't hurt to rename them, anyway. - Heikki
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On Fri, May 19, 2017 at 11:22:54AM -0400, Tom Lane wrote: > We've had reasonably decent luck with tracking the tzcode/tzdata packages > as local copies, so I feel like we're not taking on anything unreasonable > if our model is that we'll occasionally (not oftener than once per year) > update our copy to recent upstream and then re-indent using that. I guess by having a copy in our tree we would overtly update our version, rather than downloading whatever happens to be the most recent version and finding changes by accident. If we could download a specific version that had everything we need, that would work too. > > What about perltidy itself..? We don't include that in our tree either. > > Not being much of a Perl guy, I don't care one way or the other about > perltidy. Somebody else can work on that if it needs work. We have agreed on a fixed version of perltidy and I added a download link to pgindent/README. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Heikki Linnakangas writes: > You can get a pretty good typedefs list just by looking for the pattern > "} <name>;". That's going to catch a lot of things that are just variables, though. It might be all right as long as there was manual filtering after it. regards, tom lane
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On 05/19/2017 06:05 PM, Stephen Frost wrote: * Tom Lane (t...@sss.pgh.pa.us) wrote: Robert Haas writes: On Thu, May 18, 2017 at 11:00 PM, Tom Lane wrote: The reason PGSSTrackLevel is "unrecognized" is that it's not in typedefs.list, which is a deficiency in our typedef-collection technology not in indent. (I believe the problem is that there are no variables declared with that typename, causing there to not be any of the kind of symbol table entries we are looking for.) This, however, doesn't sound so good. Isn't there some way this can be fixed? I'm intending to look into it, but I think it's mostly independent of whether we replace pgindent itself. The existing code has the same problem, really. One brute-force way we could deal with the problem is to have a "manual" list of names to be treated as typedefs, in addition to whatever the buildfarm produces. I see no other way than that to get, for instance, simplehash.h's SH_TYPE to be formatted as a typedef. There are also some typedefs that don't get formatted correctly because they are only used for wonky options that no existing typedef-reporting buildfarm member builds. Manual addition might be the path of least resistance there too. Now the other side of this coin is that, by definition, such typedefs are not getting used in a huge number of places. If we just had to live with it, it might not be awful. Dealing with the typedef lists in general is a bit of a pain to get right, to make sure that new just-written code gets correctly indented. Perhaps we could find a way to incorporate the buildfarm typedef lists and a manual list and a locally generated/provided set in a simpler fashion in general. You can get a pretty good typedefs list just by looking for the pattern "} <name>;".
Something like this: grep -o -h -I --perl-regexp -r "}\W(\w+);" src/ contrib/ | perl -pe 's/}\W(.*);/\1/' | sort | uniq > typedefs.list It won't cover system headers and non-struct typedefs, but it catches those simplehash typedefs and PGSSTrackLevel. Maybe we should run that and merge the result with the typedef lists we collect in the buildfarm. - Heikki
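The merge step Heikki mentions could be as simple as concatenating and de-duplicating the lists. A minimal sketch, assuming hypothetical file names (the buildfarm's actual artifact names may differ):

```shell
# Merge a locally generated typedef list with the buildfarm-collected
# list and a hand-maintained additions file, de-duplicating the result.
# All file names passed in are placeholders for illustration.
merge_typedef_lists() {
    # cat tolerates any number of input lists; sort -u de-duplicates.
    # LC_ALL=C gives a stable, locale-independent ordering.
    cat "$@" | LC_ALL=C sort -u
}
```

Running it as `merge_typedef_lists local.list buildfarm.list manual.list > typedefs.list` would keep hand-added names such as SH_TYPE even when no buildfarm member reports them.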
Re: [HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE
Chapman Flack writes: > Was my guess about the reason right? Does this PG10 announcement > also mean it will be possible to use UNIQUE constraints with some > pure-identifier, no-natural-ordering type that supports only hashing? No, nobody's done anything about allowing hash indexes to support uniqueness AFAIK. I don't have a clear picture of how much work it would be, but it would likely be more than trivial effort; there's definitely extra code in btree that supports that. (You might be right about the big picture, in that no one wanted to bother with working on that as long as hash indexes weren't crash safe. But there's not a technical connection.) regards, tom lane
[HACKERS] PG10 Crash-safe and replicable Hash Indexes and UNIQUE
Hi, The item on hash indexes reminded me of an old comment from years ago that I put in the code of the first custom PG datatype I ever built at $work: COMMENT ON OPERATOR CLASS puid_ops USING btree IS 'As puids are only identifiers, there is no obvious reason to define ordering operators or support btree indexing. But for some curious reason PostgreSQL 8.4 does not allow a hash index to support UNIQUE constraints (this may be because, per the manual, hash index "operations are not presently WAL-logged" so it could be risky to base constraints on them). Therefore, the whole set of ordering operators must be implemented to provide an operator class for the btree index method.'; Was my guess about the reason right? Does this PG10 announcement also mean it will be possible to use UNIQUE constraints with some pure-identifier, no-natural-ordering type that supports only hashing? -Chap
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Stephen Frost writes: > * Tom Lane (t...@sss.pgh.pa.us) wrote: >> Yes, moving the goalposts on ease-of-use is an important consideration >> here. What that says to me is that we ought to pull FreeBSD indent >> into our tree, and provide Makefile support that makes it easy for >> any developer to build it and put it into their PATH. (I suppose >> that means support in the MSVC scripts too, but somebody else will >> have to do that part.) > I'm not a huge fan of this, however. Do we really need to carry around > the FreeBSD indent in our tree? I had been expecting that these changes > would eventually result in a package that's available in the common > distributions (possibly from apt/yum.postgresql.org, at least until it's > in the main Debian-based and RHEL-based package systems). Are you > thinking that we'll always have to have our own modified version? I certainly would rather that our version matched something that's under active maintenance someplace. But it seems like there are two good arguments for having a copy in our tree: * easy accessibility for PG developers * at any given time we need to be using a specific "blessed" version, so that all developers can get equivalent results. There's pretty much no chance of that happening if we depend on distro-provided packages, even if those share a common upstream. We've had reasonably decent luck with tracking the tzcode/tzdata packages as local copies, so I feel like we're not taking on anything unreasonable if our model is that we'll occasionally (not oftener than once per year) update our copy to recent upstream and then re-indent using that. > What about perltidy itself..? We don't include that in our tree either. Not being much of a Perl guy, I don't care one way or the other about perltidy. Somebody else can work on that if it needs work. 
regards, tom lane
[HACKERS] Incorrect mention of pg_xlog_switch() in xlogfuncs.c
While reading some code, I noticed that the headers of functions pg_walfile_name_offset() and pg_walfile_name() have incorrectly referred to pg_xlog_switch() since the inception of the code in commit 704ddaaa. In the PG10 implementation, the actual name of the referred function is pg_switch_wal(). So either refer to the correct name in the function header or remove the other function reference from the header. Attaching patches for both ways. -- Regards, Neha

diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b3223d6..463f668 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -448,8 +448,7 @@ pg_last_wal_replay_lsn(PG_FUNCTION_ARGS)
 }
 
 /*
- * Compute an xlog file name and decimal byte offset given a WAL location,
- * such as is returned by pg_stop_backup() or pg_xlog_switch().
+ * Compute an xlog file name and decimal byte offset given a WAL location.
  *
  * Note that a location exactly at a segment boundary is taken to be in
  * the previous segment. This is usually the right thing, since the
@@ -514,8 +513,7 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 }
 
 /*
- * Compute an xlog file name given a WAL location,
- * such as is returned by pg_stop_backup() or pg_xlog_switch().
+ * Compute an xlog file name given a WAL location.
  */
 Datum
 pg_walfile_name(PG_FUNCTION_ARGS)

diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b3223d6..fb905c0 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -449,7 +449,7 @@ pg_last_wal_replay_lsn(PG_FUNCTION_ARGS)
 /*
  * Compute an xlog file name and decimal byte offset given a WAL location,
- * such as is returned by pg_stop_backup() or pg_xlog_switch().
+ * such as is returned by pg_stop_backup() or pg_switch_wal().
  *
  * Note that a location exactly at a segment boundary is taken to be in
  * the previous segment. This is usually the right thing, since the
@@ -515,7 +515,7 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 /*
  * Compute an xlog file name given a WAL location,
- * such as is returned by pg_stop_backup() or pg_xlog_switch().
+ * such as is returned by pg_stop_backup() or pg_switch_wal().
 */
 Datum
 pg_walfile_name(PG_FUNCTION_ARGS)
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Tom, all, * Tom Lane (t...@sss.pgh.pa.us) wrote: > Robert Haas writes: > > On Thu, May 18, 2017 at 11:00 PM, Tom Lane wrote: > >> The reason PGSSTrackLevel is "unrecognized" is that it's not in > >> typedefs.list, which is a deficiency in our typedef-collection > >> technology not in indent. (I believe the problem is that there > >> are no variables declared with that typename, causing there to > >> not be any of the kind of symbol table entries we are looking for.) > > > This, however, doesn't sound so good. Isn't there some way this can be > > fixed? > > I'm intending to look into it, but I think it's mostly independent of > whether we replace pgindent itself. The existing code has the same > problem, really. > > One brute-force way we could deal with the problem is to have a "manual" > list of names to be treated as typedefs, in addition to whatever the > buildfarm produces. I see no other way than that to get, for instance, > simplehash.h's SH_TYPE to be formatted as a typedef. There are also > some typedefs that don't get formatted correctly because they are only > used for wonky options that no existing typedef-reporting buildfarm member > builds. Manual addition might be the path of least resistance there too. > > Now the other side of this coin is that, by definition, such typedefs > are not getting used in a huge number of places. If we just had to > live with it, it might not be awful. Dealing with the typedef lists in general is a bit of a pain to get right, to make sure that new just-written code gets correctly indented. Perhaps we could find a way to incorporate the buildfarm typedef lists and a manual list and a locally generated/provided set in a simpler fashion in general. > >> All in all, this looks pretty darn good from here, and I'm thinking > >> we should push forward on it. > > > What does that exactly mean concretely? > > That means I plan to continue putting effort into it with the goal of > making a switchover sometime pretty darn soon. 
We do not have a very > wide window for fooling with pgindent rules, IMO --- once v11 development > starts I think we can't touch it again (until this time next year). Agreed. > > We've talked about pulling pgindent into our main repo, or posting a > > link to a tarball someplace. An intermediate plan might be to give it > > its own repo, but on git.postgresql.org, which seems like it might > > give us the best of both worlds. But I really want something that's > > going to be easy to set up and configure. It took me years to be > > brave enough to get the current pgindent set up. > > Yes, moving the goalposts on ease-of-use is an important consideration > here. What that says to me is that we ought to pull FreeBSD indent > into our tree, and provide Makefile support that makes it easy for > any developer to build it and put it into their PATH. (I suppose > that means support in the MSVC scripts too, but somebody else will > have to do that part.) I'm not a huge fan of this, however. Do we really need to carry around the FreeBSD indent in our tree? I had been expecting that these changes would eventually result in a package that's available in the common distributions (possibly from apt/yum.postgresql.org, at least until it's in the main Debian-based and RHEL-based package systems). Are you thinking that we'll always have to have our own modified version? > We should also think hard about getting rid of the entab dependency, > to eliminate the other nonstandard prerequisite program. We could > either accept that that will result in some tab-vs-space changes in > our code, or try to convert those steps in pgindent into pure Perl, > or try to convince Piotr to add an option to indent that will make > it do tabs the way we want (ie use a space not a tab if the tab > would only move one space anyway). What about perltidy itself..? We don't include that in our tree either. I do think it'd be good to if Piotr would add such an option, hopefully that's agreeable. 
> Lastly (and I've said this before, but you pushed back on it at > the time), if we're doing this then we're going all in. That > means reformatting the back branches to match too. That diff > is already big enough to be a disaster for back-patching, and > we haven't even considered whether we want to let pgindent adopt > less-inconsistent rules for comment indentation. So I think that > as soon as the dust has settled in HEAD, we back-patch the addition > of FreeBSD indent, and the changes in pgindent proper, and then > run pgindent in each supported back branch. Ugh. This would be pretty painful, but I agree that back-patching without re-indenting the back-branches would also suck, so I'm on the fence about this. Thanks! Stephen
Re: [HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current
On Fri, May 19, 2017 at 01:32:37PM +0200, Pierre-Emmanuel André wrote: > On Fri, May 19, 2017 at 12:44:27PM +0200, Pierre-Emmanuel André wrote: > > On Fri, May 19, 2017 at 12:21:26PM +0300, Heikki Linnakangas wrote: > > > On 05/19/2017 12:03 PM, Pierre-Emmanuel André wrote: > > > > Hi, > > > > > > > > I tried to build PostgreSQL 10beta1 on OpenBSD -current and i have this > > > > error: > > > > > > > > gmake[3]: Entering directory > > > > '/usr/ports/pobj/postgresql-10beta1/postgresql-10beta1/src/backend/libpq' > > > > cc -Wall -Wmissing-prototypes -Wpointer-arith > > > > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute > > > > -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe > > > > -I../../../src/include -I/usr/local/include > > > > -I/usr/local/include/libxml2 -I/usr/local/include-c -o be-fsstubs.o > > > > be-fsstubs.c > > > > cc -Wall -Wmissing-prototypes -Wpointer-arith > > > > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute > > > > -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe > > > > -I../../../src/include -I/usr/local/include > > > > -I/usr/local/include/libxml2 -I/usr/local/include-c -o be-secure.o > > > > be-secure.c > > > > cc -Wall -Wmissing-prototypes -Wpointer-arith > > > > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute > > > > -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe > > > > -I../../../src/include -I/usr/local/include > > > > -I/usr/local/include/libxml2 -I/usr/local/include-c -o auth.o auth.c > > > > auth.c: In function 'CheckBSDAuth': > > > > auth.c:2265: error: too few arguments to function 'sendAuthRequest' > > > > > > > > > > > > In auth.c, there is this call: > > > > /* Send regular password request to client, and get the response */ > > > > sendAuthRequest(port, AUTH_REQ_PASSWORD); > > > > > > > > but this function is defined like this: > > > > static void sendAuthRequest(Port *port, AuthRequest areq, char > > > > *extradata,int 
extralen); > > > > > > This was an oversight in my commit 8d3b9cce81c. It added those new > > > arguments, but didn't update that call. I'll go fix that, thanks for the > > > report! > > > > > > You're the first to complain, and the buildfarm has been happy, so it > > > seems > > > that you're the first one to try building 10beta1 with --with-bsd-auth. We > > > have one OpenBSD box in the buildfarm, "curculio", but apparently it's not > > > using the --with-bsd-auth option. > > > > > > Mikael, could you add --with-bsd-auth to curculio's configuration, please? > > > On 9.6 and above. > > > > > > > I'm the maintainer of PostgreSQL on OpenBSD ;) > > And the port enable --with-bsd-auth. > > Thanks for your answer. > > > > Regards, > > > > > > I have disabled the --with-bsd-auth and now i have these errors: > > cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement > -Wendif-labels -Wmissing-format-attribute -Wformat-security > -fno-strict-aliasing -fwrapv -O2 -pipe command.o common.o conditional.o > copy.o crosstabview.o describe.o help.o input.o large_obj.o mainloop.o > prompt.o psqlscanslash.o sql_help.o startup.o stringutils.o tab-complete.o > variables.o -L../../../src/common -lpgcommon -L../../../src/port -lpgport > -L../../../src/interfaces/libpq -lpq -L../../../src/port > -L../../../src/common -L/usr/local/lib -L/usr/local/lib -L/usr/lib > -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -Wl,-Bdynamic > -L../../../src/fe_utils -lpgfeutils -lpq -lpgcommon -lpgport -lxml2 -lssl > -lcrypto -lz -lreadline -ltermcap -lm -o psql > command.o: In function `exec_command_set': > command.c:(.text+0x4e9a): warning: warning: strcat() is almost always > misused, please use strlcat() > describe.o: In function `listTSParsers': > describe.c:(.text+0x4e06): warning: warning: sprintf() is often misused, > please use snprintf() > large_obj.o: In function `do_lo_import': > large_obj.c:(.text+0x637): warning: warning: strcpy() is almost always > misused, please 
use strlcpy() > common.o: In function `psql_get_variable': > common.c:(.text+0x114c): undefined reference to `appendShellStringNoError' > mainloop.o: In function `MainLoop': > mainloop.c:(.text+0xcd): undefined reference to `psql_scan_set_passthrough' > startup.o: In function `main': > startup.c:(.text+0x1b01): undefined reference to `psql_scan_set_passthrough' > collect2: ld returned 1 exit status > gmake[3]: *** [Makefile:34: psql] Error 1 > > > > Any ideas ? > > Nevermind, i found the pb. > -- > Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Robert Haas writes: > On Thu, May 18, 2017 at 11:00 PM, Tom Lane wrote: >> The reason PGSSTrackLevel is "unrecognized" is that it's not in >> typedefs.list, which is a deficiency in our typedef-collection >> technology not in indent. (I believe the problem is that there >> are no variables declared with that typename, causing there to >> not be any of the kind of symbol table entries we are looking for.) > This, however, doesn't sound so good. Isn't there some way this can be fixed? I'm intending to look into it, but I think it's mostly independent of whether we replace pgindent itself. The existing code has the same problem, really. One brute-force way we could deal with the problem is to have a "manual" list of names to be treated as typedefs, in addition to whatever the buildfarm produces. I see no other way than that to get, for instance, simplehash.h's SH_TYPE to be formatted as a typedef. There are also some typedefs that don't get formatted correctly because they are only used for wonky options that no existing typedef-reporting buildfarm member builds. Manual addition might be the path of least resistance there too. Now the other side of this coin is that, by definition, such typedefs are not getting used in a huge number of places. If we just had to live with it, it might not be awful. >> All in all, this looks pretty darn good from here, and I'm thinking >> we should push forward on it. > What does that exactly mean concretely? That means I plan to continue putting effort into it with the goal of making a switchover sometime pretty darn soon. We do not have a very wide window for fooling with pgindent rules, IMO --- once v11 development starts I think we can't touch it again (until this time next year). > We've talked about pulling pgindent into our main repo, or posting a > link to a tarball someplace. An intermediate plan might be to give it > its own repo, but on git.postgresql.org, which seems like it might > give us the best of both worlds. 
But I really want something that's > going to be easy to set up and configure. It took me years to be > brave enough to get the current pgindent set up. Yes, moving the goalposts on ease-of-use is an important consideration here. What that says to me is that we ought to pull FreeBSD indent into our tree, and provide Makefile support that makes it easy for any developer to build it and put it into their PATH. (I suppose that means support in the MSVC scripts too, but somebody else will have to do that part.) We should also think hard about getting rid of the entab dependency, to eliminate the other nonstandard prerequisite program. We could either accept that that will result in some tab-vs-space changes in our code, or try to convert those steps in pgindent into pure Perl, or try to convince Piotr to add an option to indent that will make it do tabs the way we want (ie use a space not a tab if the tab would only move one space anyway). Lastly (and I've said this before, but you pushed back on it at the time), if we're doing this then we're going all in. That means reformatting the back branches to match too. That diff is already big enough to be a disaster for back-patching, and we haven't even considered whether we want to let pgindent adopt less-inconsistent rules for comment indentation. So I think that as soon as the dust has settled in HEAD, we back-patch the addition of FreeBSD indent, and the changes in pgindent proper, and then run pgindent in each supported back branch. regards, tom lane
Re: [HACKERS] Partition-wise join for join between (declaratively) partitioned tables
On Thu, May 18, 2017 at 4:38 AM, Ashutosh Bapat wrote: > But in case a user has written an = operator which returns true for > two NULL values, per description in [1], that comparison operator is > flawed and > using that operator is going to result in SQL-standard-incompliant > behaviour. I have tried to preserve all the relevant portions of > discussion in this mail. Am I missing something? Yes. You're confusing friendly advice about how to write good SQL with internals documentation about how the system actually works. The documentation we have about how operator classes and index methods and so forth actually work under the hood is in https://www.postgresql.org/docs/devel/static/xindex.html -- as a developer, that's what you should be looking at. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Hash Functions
On Fri, May 19, 2017 at 2:36 AM, Jeff Davis wrote: > I could agree to something like that. Let's explore some of the challenges > there and potential solutions: > > 1. Dump/reload of hash partitioned data. > > Falling back to restore-through-the-root seems like a reasonable answer > here. Moving to a different encoding is not an edge case, but it's not > common either, so a performance penalty seems acceptable. I'm not > immediately sure how we'd implement this in pg_dump/restore, so I'd feel a > little more comfortable if I saw a sketch. Right, I think this needs some investigation. I can't whip up a sketch on short notice, but I'll see if someone else at EnterpriseDB can work on it unless somebody else wants to take a crack at it. > 2. Having a lot of hash partitions would be cumbersome > > The user would need to create and manage each partition, and try to do > global operations in a sane way. The normal case would probably involve > scripts to do things like add an index to all partitions, or a column. Many > partitions would also just pollute the namespace unless you remember to put > them in a separate schema (yes, it's easy, but most people will still > forget). Some syntax sugar would go a long way here. I agree. Adding a column already cascades to all children, and there's a proposal to make the same thing true for indexes. See discussion beginning at http://postgr.es/m/c8fe4f6b-ff46-aae0-89e3-e936a35f0...@postgrespro.ru I haven't had time to review the code posted there yet, but I would like to see something along the lines discussed there committed to v11, and hopefully also something around foreign keys. It should be possible to create an outbound foreign key on a foreign table and have that cascade to all children. Conversely, it should also be possible to create a foreign key referencing a partitioned table provided that the foreign key references the partitioning key, and that there's a unique index on those same columns on every partition. 
(Referencing a foreign key that is not the partitioning key will have to wait for global indexes, I think.) These things are useful not just for hash partitioning, but also for list and range partitioning, and we'll save a lot of work if we can use the same infrastructure for both cases. > 3. The user would need to specify details they really don't care about for > each partition. > > Things like "modulus 16, remainder 0", "modulus 16, remainder 1" are tedious > boilerplate. And if the user makes a mistake, then 1/16 of inserts start > failing. Probably would be caught during testing, but not exactly a good > user experience. I'm not thrilled about this, considering that all the user > really wants is 16 partitions, but it's not the end of the world. As I said on the hash partitioning thread, I think the way to handle this is to get the basic feature in first, then add some convenience syntax to auto-create the partitions. That shouldn't be very difficult to implement; I just didn't want to complicate things more than necessary for the first version. The same issue arose when discussing list and range partitioning: Oracle has syntax like ours, but with the ability (requirement?) to create the partitions in the initial CREATE TABLE statement. However, I wanted to make sure we had the "inconvenient" syntax fully working and fully tested before we added that, because I believe pg_dump needs to dump everything out the long way. > 4. Detach is a foot-gun > > If you detach a partition, random inserts will start failing. Not thrilled > about this, but a hapless user would accept most of the blame if they > stumble over it. Another way of saying this is with hash partitioning you > really need the whole set for the table to be online at all. But we can't > really enforce that, because it would limit some of the flexibility that you > have in mind. Yes, I agree with all of that. I don't think it's really going to be a problem in practice. 
The only reason to detach a hash partition is if you want to split it, and we may eventually have convenience syntax to do that in an automated (i.e. less error-prone) way. If somebody does it manually and without a plan for putting back a replacement partition, they may be sad, but if somebody puts a CHECK (false) constraint on their table, they may be sad about that, too. It's more important to allow for flexibility than to prohibit every stupid thing somebody might try to do. Also, documentation helps. We've got a chapter on partitioning and it can be expanded to discuss these kinds of issues. > Stepping back, your approach might be closer to the general postgres > philosophy of allowing the user to assemble from spare parts first, then a > few releases later we offer some pre-built subassemblies, and a few releases > later we make the typical cases work out of the box. I'm fine with it as > long as we don't paint ourselves into a corner. That's basically my thinking here. Right
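The "modulus 16, remainder 0" boilerplate in point 3 above is exactly the kind of thing a throwaway script can generate until convenience syntax exists. A rough sketch; the table name "events" is a made-up example, and the FOR VALUES WITH (MODULUS ..., REMAINDER ...) clause follows the syntax being proposed in the hash partitioning thread:

```shell
# Emit the repetitive CREATE TABLE statements for N hash partitions
# of a given parent table, one remainder per partition.
make_hash_partitions() {
    parent=$1
    n=$2
    i=0
    while [ "$i" -lt "$n" ]; do
        printf 'CREATE TABLE %s_p%d PARTITION OF %s\n    FOR VALUES WITH (MODULUS %d, REMAINDER %d);\n' \
            "$parent" "$i" "$parent" "$n" "$i"
        i=$((i + 1))
    done
}
```

Piping `make_hash_partitions events 16` into psql would create all sixteen partitions with no chance of a typo'd remainder, which is the failure mode Jeff worries about.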
Re: [HACKERS] Get stuck when dropping a subscription during synchronizing table
On 18/05/17 16:16, Robert Haas wrote: > On Wed, May 17, 2017 at 6:58 AM, Masahiko Sawada wrote: >> I think the above changes can solve this issue but It seems to me that >> holding AccessExclusiveLock on pg_subscription by DROP SUBSCRIPTION >> until commit could lead another deadlock problem in the future. So I'd >> to contrive ways to reduce lock level somehow if possible. For >> example, if we change the apply launcher so that it gets the >> subscription list only when pg_subscription gets invalid, apply >> launcher cannot try to launch the apply worker being stopped. We >> invalidate pg_subscription at commit of DROP SUBSCRIPTION and the >> apply launcher can get new subscription list which doesn't include the >> entry we removed. That way we can reduce lock level to >> ShareUpdateExclusiveLock and solve this issue. >> Also in your patch, we need to change DROP SUBSCRIPTION as well to >> resolve another case I encountered, where DROP SUBSCRIPTION waits for >> apply worker while holding a tuple lock on pg_subscription_rel and the >> apply worker waits for same tuple on pg_subscription_rel in >> SetSubscriptionRelState(). > > I don't really understand the issue being discussed here in any > detail, but as a general point I'd say that it might be more > productive to make the locks finer-grained rather than struggling to > reduce the lock level. For example, instead of locking all of > pg_subscription, use LockSharedObject() to lock the individual > subscription, still with AccessExclusiveLock. That means that other > accesses to that subscription also need to take a lock so that you > actually get a conflict when there should be one, but that should be > doable. I expect that trying to manage locking conflicts using only > catalog-wide locks is a doomed strategy. We do LockSharedObject() but it's rather useless the way it's done now as no other access locks it. 
We can't block all other accesses however, the workers need to be able to access the catalog during clean shutdown in some situations. What we need is to block starting of new workers for that subscription so only those code paths would need to block. So I think we might want to do both finer-grained locking and decreasing lock level. -- Petr Jelinek http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [BUGS] Concurrent ALTER SEQUENCE RESTART Regression
On Thu, May 18, 2017 at 4:54 PM, Andres Freund wrote: > There's still weird behaviour, unfortunately. If you do an ALTER > SEQUENCE changing minval/maxval w/ restart in a transaction, and abort, > you'll a) quite possibly not be able to use the sequence anymore, > because it may of bounds b) DDL still isn't transactional. Your emails would be a bit easier to understand if you included a few more words. I'm guessing "may of bounds" is supposed to say "may be out of bounds"? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] [POC] Faster processing at Gather node
On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih wrote: > While analysing the performance of TPC-H queries for the newly developed > parallel-operators, viz, parallel index, bitmap heap scan, etc. we noticed > that the time taken by gather node is significant. On investigation, as per > the current method it copies each tuple to the shared queue and notifies the > receiver. Since, this copying is done in shared queue, a lot of locking and > latching overhead is there. > > So, in this POC patch I tried to copy all the tuples in a local queue thus > avoiding all the locks and latches. Once, the local queue is filled as per > it's capacity, tuples are transferred to the shared queue. Once, all the > tuples are transferred the receiver is sent the notification about the same. What if, instead of doing this, we switched the shm_mq stuff to use atomics? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
On Thu, May 18, 2017 at 11:00 PM, Tom Lane wrote: > * Improvements in formatting around sizeof and related constructs, > for example: > > * Likewise, operators after casts work better than before: > > * Sane formatting of function typedefs, for example: > > * Non-typedef struct pointers are now formatted consistently, for example: > > * Better handling of pointers with const/volatile qualifiers, for example: > > * Better handling of PG_USED_FOR_ASSERTS_ONLY declarations, for example > > * Corner cases where no space was left before a comment are fixed: Those all sound like good things. > Another set of changes is slightly different handling of unrecognized > typedef names: > > @@ -250,7 +250,7 @@ typedef enum > PGSS_TRACK_NONE,/* track no statements */ > PGSS_TRACK_TOP, /* only top level statements */ > PGSS_TRACK_ALL /* all statements, including nested ones */ > -} PGSSTrackLevel; > +} PGSSTrackLevel; > > The reason PGSSTrackLevel is "unrecognized" is that it's not in > typedefs.list, which is a deficiency in our typedef-collection > technology not in indent. (I believe the problem is that there > are no variables declared with that typename, causing there to > not be any of the kind of symbol table entries we are looking for.) > The handling of such names was already slightly wonky, though; > note that the previous version was already differently indented > from what would happen if PGSSTrackLevel were known. This, however, doesn't sound so good. Isn't there some way this can be fixed? > All in all, this looks pretty darn good from here, and I'm thinking > we should push forward on it. What does that exactly mean concretely? We've talked about pulling pgindent into our main repo, or posting a link to a tarball someplace. An intermediate plan might be to give it its own repo, but on git.postgresql.org, which seems like it might give us the best of both worlds. But I really want something that's going to be easy to set up and configure. 
It took me years to be brave enough to get the current pgindent set up. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [bug fix] PG10: libpq doesn't connect to alternative hosts when some errors occur
On Thu, May 18, 2017 10:24 pm, Tsunakawa, Takayuki wrote: > From: pgsql-hackers-ow...@postgresql.org >> [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael >> Paquier >> On Thu, May 18, 2017 at 11:30 PM, Robert Haas >> wrote: >> > Because why? >> >> Because it is critical to let the user know as well *why* an error >> happened. >> Imagine that this feature is used with multiple nodes, all primaries. >> If >> a DB admin busted the credentials in one of them then all the load >> would >> be redirected on the other nodes, without knowing what is actually >> causing >> the error. Then the node where the credentials have been changed would >> just >> run idle, and the application would be unaware of that. > > In that case, the DBA can know the authentication errors in the server log > of the idle instance. > > I'm sorry to repeat myself, but libpq connection failover is the feature > for HA. So I believe what to prioritize is HA. I'm in agreement here; the feature sounds very useful for HA, but HA means it needs to work as reliably as possible, not just "often enough" :) If one goes to the length of having multiple instances, there is surely also monitoring in place to see if they are healthy or under load/stress. The beauty of having libpq connect to multiple hosts until one works is that you can handle temporary unavailability (e.g. one instance is restarted for patching) and general failure (one instance goes down due to whatever error) in one place, without having to implement this logic in every app (database user connector). Imagine, for instance, that you have 3 hosts A, B and C. There are two scenarios here: everyone tries "A,B,C", or everyone tries random permutations like "A,C,B" or "B,C,A". In the first scenario, almost all connections would go to A until it no longer accepts connections, then they spill over to B. In the second one, each host gets 1/3 of all connections equally.
Now imagine that B is down, either for a brief period or permanently. If libpq stops at the first error when it tries B, then the scenarios play out like this: 1: Almost all connections run on A, but a random subset breaks when spillover to B does not happen. Host C is idle. 2: 2/3 of all connections just work, 1/3 just break. Both A and C have a higher load than usual. If libpq skips B and continues, then we have instead: 1: Almost all connections run on A, but a random subset spills over to C after skipping B. 2: All connections run on A or C; B is always skipped if it appears before A or C. The admin would see on the monitoring that B is offline (briefly or permanently) and needs to correct it. From the user's perspective, the second variant is smooth; the first breaks randomly. A "database user" would not really want to know that B is down or why; it would just expect to get a working DB connection. That's my 0.02 € anyway. Tels -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
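For reference, the multi-host failover being debated here is driven by a libpq connection string of this shape (host names are placeholders):

```
host=A,B,C dbname=app
```

libpq walks the host list left to right until a connection succeeds; the open question in this thread is whether an error such as a failed authentication on B should abort the whole attempt or fall through to C.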
[HACKERS] [POC] Faster processing at Gather node
Hello everybody, While analysing the performance of TPC-H queries for the newly developed parallel operators (viz. parallel index scan, parallel bitmap heap scan, etc.), we noticed that the time taken by the Gather node is significant. On investigation: with the current method, each tuple is copied to the shared queue and the receiver is notified. Since this copying is done in the shared queue, there is a lot of locking and latching overhead. So, in this POC patch I tried to copy all the tuples into a local queue first, thus avoiding all the locks and latches. Once the local queue is filled to its capacity, the tuples are transferred to the shared queue. Once all the tuples are transferred, the receiver is notified. With this patch I could see a significant improvement in performance for simple queries, head: explain analyse select * from t where i < 3000; QUERY PLAN - Gather (cost=0.00..83225.55 rows=29676454 width=19) (actual time=1.379..35871.235 rows=2999 loops=1) Workers Planned: 64 Workers Launched: 64 -> Parallel Seq Scan on t (cost=0.00..83225.55 rows=463695 width=19) (actual time=0.125..1415.521 rows=461538 loops=65) Filter: (i < 3000) Rows Removed by Filter: 1076923 Planning time: 0.180 ms Execution time: 38503.478 ms (8 rows) patch: explain analyse select * from t where i < 3000; QUERY PLAN Gather (cost=0.00..83225.55 rows=29676454 width=19) (actual time=0.980..24499.427 rows=2999 loops=1) Workers Planned: 64 Workers Launched: 64 -> Parallel Seq Scan on t (cost=0.00..83225.55 rows=463695 width=19) (actual time=0.088..968.406 rows=461538 loops=65) Filter: (i < 3000) Rows Removed by Filter: 1076923 Planning time: 0.158 ms Execution time: 27331.849 ms (8 rows) head: explain analyse select * from t where i < 4000; QUERY PLAN - Gather (cost=0.00..83225.55 rows=39501511 width=19) (actual time=0.890..38438.753 rows=3999 loops=1) Workers Planned: 64 Workers Launched: 64 -> Parallel Seq Scan on t (cost=0.00..83225.55 rows=617211 width=19) (actual time=0.074..1235.180
rows=615385 loops=65) Filter: (i < 4000) Rows Removed by Filter: 923077 Planning time: 0.113 ms Execution time: 41609.855 ms (8 rows) patch: explain analyse select * from t where i < 4000; QUERY PLAN Gather (cost=0.00..83225.55 rows=39501511 width=19) (actual time=1.085..31806.671 rows=3999 loops=1) Workers Planned: 64 Workers Launched: 64 -> Parallel Seq Scan on t (cost=0.00..83225.55 rows=617211 width=19) (actual time=0.083..954.342 rows=615385 loops=65) Filter: (i < 4000) Rows Removed by Filter: 923077 Planning time: 0.151 ms Execution time: 35341.429 ms (8 rows) head: explain analyse select * from t where i < 4500; QUERY PLAN Gather (cost=0.00..102756.80 rows=44584013 width=19) (actual time=0.563..49156.252 rows=4499 loops=1) Workers Planned: 32 Workers Launched: 32 -> Parallel Seq Scan on t (cost=0.00..102756.80 rows=1393250 width=19) (actual time=0.069..1905.436 rows=1363636 loops=33) Filter: (i < 4500) Rows Removed by Filter: 167 Planning time: 0.106 ms Execution time: 52722.476 ms (8 rows) patch: explain analyse select * from t where i < 4500; QUERY PLAN Gather (cost=0.00..102756.80 rows=44584013 width=19) (actual time=0.545..37501.200 rows=4499 loops=1) Workers Planned: 32 Workers Launched: 32 -> Parallel Seq Scan on t (cost=0.00..102756.80 rows=1393250 width=19) (actual time=0.068..2165.430 rows=1363636 loops=33) Filter: (i < 4500) Rows Removed by Filter: 167 Planning time: 0.087 ms Execution time: 41458.969 ms (8 rows) The improvement in performance is most when the selectivity is around 20-30%, in which case currently parallelism is not selected. I am
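The local-queue idea described above can be sketched with a toy model (this is illustrative C of my own, not the patch's actual code): the win comes from amortising the shared-queue lock over a whole batch of tuples instead of paying for it once per tuple.

```c
#include <assert.h>
#include <string.h>

#define SHARED_CAP 4096

/* Toy stand-in for the shared tuple queue: counts lock acquisitions. */
typedef struct SharedQueue
{
    int data[SHARED_CAP];
    int len;
    int lock_acquisitions;      /* proxy for locking/latching overhead */
} SharedQueue;

static void
sq_push_locked(SharedQueue *q, const int *tuples, int n)
{
    q->lock_acquisitions++;     /* one lock round-trip per call */
    memcpy(q->data + q->len, tuples, n * sizeof(int));
    q->len += n;
}

/* Current behaviour: every tuple locks the shared queue. */
static int
send_per_tuple(SharedQueue *q, const int *tuples, int n)
{
    for (int i = 0; i < n; i++)
        sq_push_locked(q, &tuples[i], 1);
    return q->lock_acquisitions;
}

/* POC behaviour: fill a local queue, flush to the shared queue when full. */
static int
send_batched(SharedQueue *q, const int *tuples, int n, int batch)
{
    int local[SHARED_CAP];
    int nlocal = 0;

    for (int i = 0; i < n; i++)
    {
        local[nlocal++] = tuples[i];
        if (nlocal == batch)
        {
            sq_push_locked(q, local, nlocal);
            nlocal = 0;
        }
    }
    if (nlocal > 0)
        sq_push_locked(q, local, nlocal);   /* final partial flush */
    return q->lock_acquisitions;
}

/* Send n tuples with the given batch size (1 = per-tuple); report locks. */
static int
demo(int n, int batch)
{
    SharedQueue q;
    int tuples[SHARED_CAP];

    memset(&q, 0, sizeof(q));
    for (int i = 0; i < n; i++)
        tuples[i] = i;
    return batch <= 1 ? send_per_tuple(&q, tuples, n)
                      : send_batched(&q, tuples, n, batch);
}
```

With n = 1000 tuples, the per-tuple path takes the lock 1000 times, while a batch of 100 takes it only 10 times; the actual patch's numbers depend on the queue sizes involved, but the shape of the saving is the same.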
Re: [HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current
On Fri, May 19, 2017 at 11:21 AM, Heikki Linnakangas wrote: > On 05/19/2017 12:03 PM, Pierre-Emmanuel André wrote: > > Mikael, could you add --with-bsd-auth to curculio's configuration, please? > On 9.6 and above. > > - Heikki > > Hi Heikki, I've added --with-bsd-auth to 9.6 and HEAD now. Let's see what it says the next time it runs. /Mikael
Re: [HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current
On Fri, May 19, 2017 at 12:44:27PM +0200, Pierre-Emmanuel André wrote: > On Fri, May 19, 2017 at 12:21:26PM +0300, Heikki Linnakangas wrote: > > On 05/19/2017 12:03 PM, Pierre-Emmanuel André wrote: > > > Hi, > > > > > > I tried to build PostgreSQL 10beta1 on OpenBSD -current and i have this > > > error: > > > > > > gmake[3]: Entering directory > > > '/usr/ports/pobj/postgresql-10beta1/postgresql-10beta1/src/backend/libpq' > > > cc -Wall -Wmissing-prototypes -Wpointer-arith > > > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute > > > -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe > > > -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 > > > -I/usr/local/include-c -o be-fsstubs.o be-fsstubs.c > > > cc -Wall -Wmissing-prototypes -Wpointer-arith > > > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute > > > -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe > > > -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 > > > -I/usr/local/include-c -o be-secure.o be-secure.c > > > cc -Wall -Wmissing-prototypes -Wpointer-arith > > > -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute > > > -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe > > > -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 > > > -I/usr/local/include-c -o auth.o auth.c > > > auth.c: In function 'CheckBSDAuth': > > > auth.c:2265: error: too few arguments to function 'sendAuthRequest' > > > > > > > > > In auth.c, there is this call: > > > /* Send regular password request to client, and get the response */ > > > sendAuthRequest(port, AUTH_REQ_PASSWORD); > > > > > > but this function is defined like this: > > > static void sendAuthRequest(Port *port, AuthRequest areq, char > > > *extradata,int extralen); > > > > This was an oversight in my commit 8d3b9cce81c. It added those new > > arguments, but didn't update that call. 
I'll go fix that, thanks for the > > report! > > > > You're the first to complain, and the buildfarm has been happy, so it seems > > that you're the first one to try building 10beta1 with --with-bsd-auth. We > > have one OpenBSD box in the buildfarm, "curculio", but apparently it's not > > using the --with-bsd-auth option. > > > > Mikael, could you add --with-bsd-auth to curculio's configuration, please? > > On 9.6 and above. > > > > I'm the maintainer of PostgreSQL on OpenBSD ;) > And the port enable --with-bsd-auth. > Thanks for your answer. > > Regards, > > I have disabled the --with-bsd-auth and now i have these errors: cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe command.o common.o conditional.o copy.o crosstabview.o describe.o help.o input.o large_obj.o mainloop.o prompt.o psqlscanslash.o sql_help.o startup.o stringutils.o tab-complete.o variables.o -L../../../src/common -lpgcommon -L../../../src/port -lpgport -L../../../src/interfaces/libpq -lpq -L../../../src/port -L../../../src/common -L/usr/local/lib -L/usr/local/lib -L/usr/lib -L/usr/local/lib -L/usr/local/lib -L/usr/local/lib -Wl,-Bdynamic -L../../../src/fe_utils -lpgfeutils -lpq -lpgcommon -lpgport -lxml2 -lssl -lcrypto -lz -lreadline -ltermcap -lm -o psql command.o: In function `exec_command_set': command.c:(.text+0x4e9a): warning: warning: strcat() is almost always misused, please use strlcat() describe.o: In function `listTSParsers': describe.c:(.text+0x4e06): warning: warning: sprintf() is often misused, please use snprintf() large_obj.o: In function `do_lo_import': large_obj.c:(.text+0x637): warning: warning: strcpy() is almost always misused, please use strlcpy() common.o: In function `psql_get_variable': common.c:(.text+0x114c): undefined reference to `appendShellStringNoError' mainloop.o: In function `MainLoop': mainloop.c:(.text+0xcd): undefined reference to 
`psql_scan_set_passthrough' startup.o: In function `main': startup.c:(.text+0x1b01): undefined reference to `psql_scan_set_passthrough' collect2: ld returned 1 exit status gmake[3]: *** [Makefile:34: psql] Error 1 Any ideas ? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Allowing dash character in LTREE
I propose to add the `-` character to the set of allowed label characters. The reason is to make it easier to use base64-style ids (like https://github.com/dylang/shortid, ...) and UUIDs in labels. `-` is also not used as a query operator, which is a plus. source: https://doxygen.postgresql.org/ltree_8h.html#a1cb5d5dd0c364d720854e8a250967dd1
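To illustrate the proposal (label names below are made up; today ltree labels allow only alphanumerics and underscores, so the hyphenated cases error out):

```sql
CREATE EXTENSION IF NOT EXISTS ltree;

SELECT 'Top.Science.Astronomy'::ltree;   -- works today
SELECT 'Top.PPBqWW9'::ltree;             -- shortid-style label, works today
-- SELECT 'Top.PPBq-W9'::ltree;          -- fails today: '-' not in the label charset
-- SELECT 'ids.550e8400-e29b-41d4-a716-446655440000'::ltree;  -- UUID label, fails today
```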
Re: [HACKERS] Preliminary results for proposed new pgindent implementation
Tom, all, * Tom Lane (t...@sss.pgh.pa.us) wrote: > All in all, this looks pretty darn good from here, and I'm thinking > we should push forward on it. +1. Thanks! Stephen signature.asc Description: Digital signature
Re: [HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current
On Fri, May 19, 2017 at 12:21:26PM +0300, Heikki Linnakangas wrote: > On 05/19/2017 12:03 PM, Pierre-Emmanuel André wrote: > > Hi, > > > > I tried to build PostgreSQL 10beta1 on OpenBSD -current and i have this > > error: > > > > gmake[3]: Entering directory > > '/usr/ports/pobj/postgresql-10beta1/postgresql-10beta1/src/backend/libpq' > > cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement > > -Wendif-labels -Wmissing-format-attribute -Wformat-security > > -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include > > -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include > > -c -o be-fsstubs.o be-fsstubs.c > > cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement > > -Wendif-labels -Wmissing-format-attribute -Wformat-security > > -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include > > -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include > > -c -o be-secure.o be-secure.c > > cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement > > -Wendif-labels -Wmissing-format-attribute -Wformat-security > > -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include > > -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include > > -c -o auth.o auth.c > > auth.c: In function 'CheckBSDAuth': > > auth.c:2265: error: too few arguments to function 'sendAuthRequest' > > > > > > In auth.c, there is this call: > > /* Send regular password request to client, and get the response */ > > sendAuthRequest(port, AUTH_REQ_PASSWORD); > > > > but this function is defined like this: > > static void sendAuthRequest(Port *port, AuthRequest areq, char > > *extradata,int extralen); > > This was an oversight in my commit 8d3b9cce81c. It added those new > arguments, but didn't update that call. I'll go fix that, thanks for the > report! 
> > You're the first to complain, and the buildfarm has been happy, so it seems > that you're the first one to try building 10beta1 with --with-bsd-auth. We > have one OpenBSD box in the buildfarm, "curculio", but apparently it's not > using the --with-bsd-auth option. > > Mikael, could you add --with-bsd-auth to curculio's configuration, please? > On 9.6 and above. > I'm the maintainer of PostgreSQL on OpenBSD ;) And the port enable --with-bsd-auth. Thanks for your answer. Regards, > - Heikki > -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] HINT message for "ALTER SUBSCRIPTION.. WITH" need to change with SET keyword
On Fri, May 19, 2017 at 3:41 PM, tushar wrote: > postgres=# drop subscription sub; > ERROR: could not connect to publisher when attempting to drop the > replication slot "pub" > DETAIL: The error was: could not connect to server: No such file or > directory > Is the server running locally and accepting > connections on Unix domain socket "/tmp/.s.PGSQL.5432"? > HINT: Use ALTER SUBSCRIPTION ... WITH (slot_name = NONE) to disassociate > the subscription from the slot. > > expected = "HINT: Use ALTER SUBSCRIPTION ... SET (slot_name = NONE) to > disassociate the subscription from the slot." Seems like the syntax got changed in commit b807f59828fbc02fea612e1cbc0066c6dfa3be9b but the hint was not updated. The attached patch fixes that. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com fix_subscription_hint.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] HINT message for "ALTER SUBSCRIPTION.. WITH" need to change with SET keyword
Hi, There is a small issue in the HINT message we provide when dropping a subscription: we say WITH (slot_name), but it needs to be SET (slot_name). postgres=# drop subscription sub; ERROR: could not connect to publisher when attempting to drop the replication slot "pub" DETAIL: The error was: could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"? HINT: Use ALTER SUBSCRIPTION ... *WITH* (slot_name = NONE) to disassociate the subscription from the slot. expected = "HINT: Use ALTER SUBSCRIPTION ... *SET* (slot_name = NONE) to disassociate the subscription from the slot." -- regards, tushar EnterpriseDB https://www.enterprisedb.com/ The Enterprise PostgreSQL Company
Re: [HACKERS] Re: [BUGS] BUG #14657: Server process segmentation fault in v10, May 10th dev snapshot
On Thu, May 18, 2017 at 7:23 AM, Amit Langotewrote: > On 2017/05/18 10:49, Amit Langote wrote: >> On 2017/05/18 2:14, Dilip Kumar wrote: >>> On Wed, May 17, 2017 at 7:41 PM, wrote: (gdb) bt #0 0x0061ab1b in list_nth () #1 0x005e4081 in ExecLockNonLeafAppendTables () #2 0x005f4d52 in ExecInitMergeAppend () #3 0x005e0365 in ExecInitNode () #4 0x005f35a7 in ExecInitLimit () #5 0x005e00f3 in ExecInitNode () #6 0x005dd207 in standard_ExecutorStart () #7 0x006f96d2 in PortalStart () #8 0x006f5c7f in exec_simple_query () #9 0x006f6fac in PostgresMain () #10 0x00475cdc in ServerLoop () #11 0x00692ffa in PostmasterMain () #12 0x00476600 in main () >> >> Thanks for the test case Sveinn and thanks Dilip for analyzing. >> >>> Seems like the issue is that the plans under multiple subroots are >>> pointing to the same partitioned_rels. >> >> That's correct. >> >>> If I am not getting it wrong "set_plan_refs(PlannerInfo *root, Plan >>> *plan, int rtoffset)" the rtoffset is specific to the subroot. Now, >>> problem is that set_plan_refs called for different subroot is updating >>> the same partition_rel info and make this value completely wrong which >>> will ultimately make ExecLockNonLeafAppendTables to access the out of >>> bound "rte" index. >> >> Yes. >> >>> set_plan_refs >>> { >>> [clipped] >>> case T_MergeAppend: >>> { >>> [clipped] >>> >>> foreach(l, splan->partitioned_rels) >>> { >>> lfirst_int(l) += rtoffset; >>> >>> >>> I think the solution should be that create_merge_append_path make the >>> copy of partitioned_rels list? >> >> Yes, partitioned_rels should be copied. >> >>> Attached patch fixes the problem but I am not completely sure about the fix. >> >> Thanks for creating the patch, although I think a better fix would be to >> make get_partitioned_child_rels() do the list_copy. That way, any other >> users of partitioned_rels will not suffer the same issue. Attached patch >> implements that, along with a regression test. >> >> Added to the open items. 
> > Oops, forgot to cc -hackers. Patch attached again. Maybe we should add a comment as to why the copy is needed. We still have the same copy shared across multiple append paths, and set_plan_refs would change it underneath those. It may not be a problem right now, but it may become one in the future. Another option, which consumes a bit less memory, is to make a copy at planning time if the path gets selected as the cheapest path. -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
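The aliasing hazard being discussed can be boiled down to a minimal self-contained sketch (the names here are mine, not the planner's): two plans sharing one list means two rtoffset adjustments land on the same memory, while handing each plan its own copy keeps them independent.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for the planner's partitioned_rels integer list. */
typedef struct IntList
{
    int n;
    int *items;
} IntList;

static IntList *
make_list(const int *src, int n)
{
    IntList *l = malloc(sizeof(IntList));

    l->n = n;
    l->items = malloc(n * sizeof(int));
    memcpy(l->items, src, n * sizeof(int));
    return l;
}

/* The proposed fix: hand out a fresh copy instead of the shared list. */
static IntList *
copy_list(const IntList *src)
{
    return make_list(src->items, src->n);
}

/* Stand-in for set_plan_refs() adding its rtoffset to each member in place. */
static void
add_rtoffset(IntList *l, int rtoffset)
{
    for (int i = 0; i < l->n; i++)
        l->items[i] += rtoffset;
}

/*
 * Two "subroots" each apply their own rtoffset to what they believe is a
 * private list.  Returns the first member as seen by the second subroot.
 */
static int
demo(int do_copy)
{
    int vals[] = {1, 2, 3};
    IntList *master = make_list(vals, 3);
    IntList *plan1 = do_copy ? copy_list(master) : master;
    IntList *plan2 = do_copy ? copy_list(master) : master;

    add_rtoffset(plan1, 10);    /* subroot 1, rtoffset = 10 */
    add_rtoffset(plan2, 20);    /* subroot 2, rtoffset = 20 */
    return plan2->items[0];
}
```

With copying, subroot 2 sees 1 + 20 = 21 as expected; with sharing, it sees 1 + 10 + 20 = 31 — a bogus range-table index, which is exactly the kind of out-of-bounds access behind the reported segfault.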
Re: [HACKERS] Pulling up more complicated subqueries
On Wed, May 17, 2017 at 8:38 PM, Heikki Linnakangas wrote: > > As a final note, I found an interesting paper called "Unnesting Arbitrary > Queries", by Thomas Neumann and Alfons Kemper > (http://www.btw-2015.de/res/proceedings/Hauptband/Wiss/Neumann-Unnesting_Arbitrary_Querie.pdf). > It describes a series of transformations that can be applied to de-correlate > (de-lateralize?) any lateral subquery join. The trick is to build a > temporary relation that contains all the distinct outer values of the join, > and pass that relation down to the subquery. So instead of "executing" the > subquery for every outer row, passing the outer values as parameters (using > PostgreSQL terminology), you collect a list of all the outer values first, > and then execute the subquery only once. In the subquery, you perform a join > with the temporary relation. And then the paper describes a bunch of rules > and optimizations, that can optimize away the temporary relation in many > cases. That might be one way to implement the de-lateralization steps in > PostgreSQL. > IIUC what you describe here: in a query involving a foreign scan, we could pass down a list of parameters to the foreign server instead of running the foreign scan once per parameter in the list. That would improve the performance of such queries by a lot. -- Best Wishes, Ashutosh Bapat EnterpriseDB Corporation The Postgres Database Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
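A concrete (hypothetical) example of the transformation the paper describes — table and column names are made up. The correlated form re-evaluates the subquery per outer row; the rewritten form materialises the distinct outer values once and evaluates the subquery as a single join:

```sql
-- correlated: the subquery is parameterised by each row of orders o
SELECT o.id
FROM   orders o
WHERE  o.total > (SELECT avg(i.total)
                  FROM   orders i
                  WHERE  i.customer_id = o.customer_id);

-- de-correlated sketch: build the set of outer values once,
-- then join the aggregated subquery back to the outer query
SELECT o.id
FROM   orders o
JOIN  (SELECT customer_id, avg(total) AS avg_total
       FROM   orders
       GROUP  BY customer_id) m
  ON   m.customer_id = o.customer_id
WHERE  o.total > m.avg_total;
```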
Re: [HACKERS] [POC] hash partitioning
On Wed, May 17, 2017 at 6:54 PM, Ashutosh Bapatwrote: [...] > > Comments on the tests > +#ifdef USE_ASSERT_CHECKING > +{ > +/* > + * Hash partition bound stores modulus and remainder at > + * b1->datums[i][0] and b1->datums[i][1] position respectively. > + */ > +for (i = 0; i < b1->ndatums; i++) > +Assert((b1->datums[i][0] == b2->datums[i][0] && > +b1->datums[i][1] == b2->datums[i][1])); > +} > +#endif > Why do we need extra {} here? > Okay, removed in the attached version. > Comments on testcases > +CREATE TABLE hpart_1 PARTITION OF hash_parted FOR VALUES WITH > (modulus 8, remainder 0); > +CREATE TABLE fail_part (LIKE hpart_1 INCLUDING CONSTRAINTS); > +ALTER TABLE hash_parted ATTACH PARTITION fail_part FOR VALUES WITH > (modulus 4, remainder 0); > Probably you should also test the other-way round case i.e. create modulus 4, > remainder 0 partition and then try to add partitions with modulus 8, remainder > 4 and modulus 8, remainder 0. That should fail. > Fixed. > Why to create two tables hash_parted and hash_parted2, you should be able to > test with only a single table. > Fixed. > +INSERT INTO hpart_2 VALUES (3, 'a'); > +DELETE FROM hpart_2; > +INSERT INTO hpart_5_a (a, b) VALUES (6, 'a'); > This is slightly tricky. On different platforms the row may map to different > partitions depending upon how the values are hashed. So, this test may not be > portable on all the platforms. Probably you should add such testcases with a > custom hash operator class which is identity function as suggested by Robert. > This also applies to the tests in insert.sql and update.sql for partitioned > table without custom opclass. > Yes, you are correct. Fixed in the attached version. > +-- delete the faulting row and also add a constraint to skip the scan > +ALTER TABLE hpart_5 ADD CONSTRAINT hcheck_a CHECK (a IN (5)), ALTER a > SET NOT NULL; > The constraint is not same as the implicit constraint added for that > partition. 
> I am not sure whether it's really going to avoid the scan. Did you verify it? > If yes, then how? > I haven't tested that, may be I've copied blindly, sorry about that. I don't think this test is needed again for hash partitioning, so removed. > +ALTER TABLE hash_parted2 ATTACH PARTITION fail_part FOR VALUES WITH > (modulus 3, remainder 2); > +ERROR: every hash partition modulus must be a factor of the next > larger modulus > We should add this test with at least two partitions in there so that we can > check lower and upper modulus. Also, testing with some interesting > bounds discussed earlier > in this mail e.g. adding modulus 15 when 5, 10, 60 exist will be better than > testing with 3, 4 and 8. > Similar test do exists in create_table.sql file. > +ERROR: cannot use collation for hash partition key column "a" > This seems to indicate that we can not specify collation for hash partition > key > column, which isn't true. Column a here can have its collation. What's not > allowed is specifying collation in PARTITION BY clause. > May be reword the error as "cannot use collation for hash partitioning". or > plain "cannot use collation in PARTITION BY clause for hash partitioning". > > +ERROR: invalid bound specification for a list partition > +LINE 1: CREATE TABLE fail_part PARTITION OF list_parted FOR VALUES W... > +^ > Should the location for this error be that of WITH clause like in case of > range > and list partitioned table. > Fixed. > +select tableoid::regclass as part, a from hash_parted order by part; > May be add a % 4 to show clearly that the data really goes to the partitioning > with that remainder. > Fixed. Updated patch attached. 0001-patch rebased against latest head. 0002-patch also incorporates code comments and error message changes as per Robert's & your suggestions. Thanks ! 
Regards, Amul 0001-Cleanup_v3.patch Description: Binary data 0002-hash-partitioning_another_design-v10.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
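The "factor of the next larger modulus" rule discussed in the review above plays out like this (table names hypothetical, syntax as in the proposed patch):

```sql
CREATE TABLE h (a int, b text) PARTITION BY HASH (a);
CREATE TABLE h_m5  PARTITION OF h FOR VALUES WITH (MODULUS 5,  REMAINDER 0);
CREATE TABLE h_m10 PARTITION OF h FOR VALUES WITH (MODULUS 10, REMAINDER 1);
CREATE TABLE h_m60 PARTITION OF h FOR VALUES WITH (MODULUS 60, REMAINDER 2);
-- 15 divides 60, but the existing modulus 10 does not divide 15,
-- so this should be rejected:
CREATE TABLE h_m15 PARTITION OF h FOR VALUES WITH (MODULUS 15, REMAINDER 3);
-- ERROR: every hash partition modulus must be a factor of the next larger modulus
```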
Re: [HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current
On 05/19/2017 12:03 PM, Pierre-Emmanuel André wrote: Hi, I tried to build PostgreSQL 10beta1 on OpenBSD -current and i have this error: gmake[3]: Entering directory '/usr/ports/pobj/postgresql-10beta1/postgresql-10beta1/src/backend/libpq' cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include-c -o be-fsstubs.o be-fsstubs.c cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include-c -o be-secure.o be-secure.c cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include-c -o auth.o auth.c auth.c: In function 'CheckBSDAuth': auth.c:2265: error: too few arguments to function 'sendAuthRequest' In auth.c, there is this call: /* Send regular password request to client, and get the response */ sendAuthRequest(port, AUTH_REQ_PASSWORD); but this function is defined like this: static void sendAuthRequest(Port *port, AuthRequest areq, char *extradata,int extralen); This was an oversight in my commit 8d3b9cce81c. It added those new arguments, but didn't update that call. I'll go fix that, thanks for the report! You're the first to complain, and the buildfarm has been happy, so it seems that you're the first one to try building 10beta1 with --with-bsd-auth. We have one OpenBSD box in the buildfarm, "curculio", but apparently it's not using the --with-bsd-auth option. Mikael, could you add --with-bsd-auth to curculio's configuration, please? On 9.6 and above. 
- Heikki -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] PostgreSQL 10beta1 - compilation fails on OpenBSD -current
Hi, I tried to build PostgreSQL 10beta1 on OpenBSD -current and I have this error: gmake[3]: Entering directory '/usr/ports/pobj/postgresql-10beta1/postgresql-10beta1/src/backend/libpq' cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -c -o be-fsstubs.o be-fsstubs.c cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -c -o be-secure.o be-secure.c cc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pipe -I../../../src/include -I/usr/local/include -I/usr/local/include/libxml2 -I/usr/local/include -c -o auth.o auth.c auth.c: In function 'CheckBSDAuth': auth.c:2265: error: too few arguments to function 'sendAuthRequest' In auth.c, there is this call: /* Send regular password request to client, and get the response */ sendAuthRequest(port, AUTH_REQ_PASSWORD); but this function is defined like this: static void sendAuthRequest(Port *port, AuthRequest areq, char *extradata, int extralen); Regards, -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
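For the record, given the four-argument prototype quoted in the report, the fix is presumably along these lines (passing NULL/0 for the unused extra-data pair, mirroring the other callers):

```diff
 /* Send regular password request to client, and get the response */
-sendAuthRequest(port, AUTH_REQ_PASSWORD);
+sendAuthRequest(port, AUTH_REQ_PASSWORD, NULL, 0);
```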
[HACKERS] Redundant check of em_is_child
In match_eclasses_to_foreign_key_col(), there is this: if (em->em_is_child) continue; /* ignore children here */ ISTM, it might as well be: Assert(!em->em_is_child);/* no children yet */ That's because, I think it's still too early in query_planner() to be expecting any child EC members. Thanks, Amit diff --git a/src/backend/optimizer/path/equivclass.c b/src/backend/optimizer/path/equivclass.c index 67bd760fb4..735bd7fdc6 100644 --- a/src/backend/optimizer/path/equivclass.c +++ b/src/backend/optimizer/path/equivclass.c @@ -2017,8 +2017,7 @@ match_eclasses_to_foreign_key_col(PlannerInfo *root, EquivalenceMember *em = (EquivalenceMember *) lfirst(lc2); Var *var; - if (em->em_is_child) -continue; /* ignore children here */ + Assert(!em->em_is_child); /* no children yet */ /* EM must be a Var, possibly with RelabelType */ var = (Var *) em->em_expr; -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [Bug fix]If recovery.conf has target_session_attrs=read-write, the standby fails to start.
From: Michael Paquier [mailto:michael.paqu...@gmail.com] > if (!PQsendQuery(conn, >- "show transaction_read_only")) >+ "SHOW transaction_read_only")) >Or perhaps the replication command parser could be made more flexible with >lower-case characters for replication commands? By adding the flex option '-i', the replication command parser can be made case-insensitive. # repl_scanner is compiled as part of repl_gram repl_gram.o: repl_scanner.c +repl_scanner.c: FLEXFLAGS = -CF -p -i This option is already used for syncrep_scanner.c, so it is not strange to add it for repl_scanner.c too. The attached patch also fixes this problem. Regards, Daisuke, Higuchi flexible_replication_command_parser_v1.patch Description: flexible_replication_command_parser_v1.patch -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Hash Functions
On Thursday, May 18, 2017, Robert Haaswrote: > My experience with this area has led > me to give up on the idea of complete uniformity as impractical, and > instead look at it from the perspective of "what do we absolutely have > to ban in order for this to be sane?". I could agree to something like that. Let's explore some of the challenges there and potential solutions: 1. Dump/reload of hash partitioned data. Falling back to restore-through-the-root seems like a reasonable answer here. Moving to a different encoding is not an edge case, but it's not common either, so a performance penalty seems acceptable. I'm not immediately sure how we'd implement this in pg_dump/restore, so I'd feel a little more comfortable if I saw a sketch. 2. Having a lot of hash partitions would be cumbersome The user would need to create and manage each partition, and try to do global operations in a sane way. The normal case would probably involve scripts to do things like add an index to all partitions, or a column. Many partitions would also just pollute the namespace unless you remember to put them in a separate schema (yes, it's easy, but most people will still forget). Some syntax sugar would go a long way here. 3. The user would need to specify details they really don't care about for each partition. Things like "modulus 16, remainder 0", "modulus 16, remainder 1" are tedious boilerplate. And if the user makes a mistake, then 1/16 of inserts start failing. Probably would be caught during testing, but not exactly a good user experience. I'm not thrilled about this, considering that all the user really wants is 16 partitions, but it's not the end of the world. 4. Detach is a foot-gun If you detach a partition, random inserts will start failing. Not thrilled about this, but a hapless user would accept most of the blame if they stumble over it. Another way of saying this is with hash partitioning you really need the whole set for the table to be online at all. 
But we can't really enforce that, because it would limit some of the
flexibility that you have in mind.

Stepping back, your approach might be closer to the general postgres
philosophy of allowing the user to assemble from spare parts first, then a
few releases later we offer some pre-built subassemblies, and a few
releases later we make the typical cases work out of the box. I'm fine
with it as long as we don't paint ourselves into a corner.

Of course we still have work to do on the hash functions. We should solve
at least the most glaring portability problems, and try to harmonize the
hash opfamilies. If you agree, I can put together a patch or two.

Regards,
    Jeff Davis
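The encoding concern from point 1 (and the portability problems mentioned just above) comes down to this: a byte-oriented hash of the same logical string differs per encoding, so a per-partition restore into a differently encoded database can place rows in the wrong partition. A hedged sketch, using `hashlib.sha256` purely as a stand-in (PostgreSQL's hash opclasses are entirely different functions):

```python
# Illustrative sketch: why restoring hash-partitioned text into a database
# with a different encoding needs to go through the root.  sha256 is a
# stand-in hash; the bucket() helper and modulus are assumptions, not
# PostgreSQL's actual hashing.

import hashlib

def bucket(s, encoding, modulus=16):
    digest = hashlib.sha256(s.encode(encoding)).digest()
    return int.from_bytes(digest[:8], "little") % modulus

s = "café"
# The byte representations differ, so nothing guarantees the two buckets
# match; a direct COPY into the "same" partition may therefore violate
# its partition constraint after re-encoding.
assert s.encode("utf-8") != s.encode("latin-1")
b_utf8 = bucket(s, "utf-8")
b_latin1 = bucket(s, "latin-1")
assert 0 <= b_utf8 < 16 and 0 <= b_latin1 < 16
```

Routing each restored row through the root recomputes the hash in the target database's encoding, which sidesteps the mismatch at the cost of per-row routing.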
Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)
On 2017/05/19 15:16, Thomas Munro wrote:
> On Fri, May 19, 2017 at 5:51 PM, Amit Langote wrote:
>> I saw in the latest patch that now ExecSetupTriggerTransitionState() looks
>> at mtstate->mt_partition_dispatch_info when setting up the transition
>> conversion map.  In the case where it's non-NULL, you may have realized
>> that mt_transition_tupconv_map will be an exact copy of
>> mt_partition_tupconv_maps that's already built.  Would it perhaps be a
>> good idea to either share the same or make a copy using memcpy() instead
>> of doing the convert_tuples_by_name() calls again?
>
> Isn't it the opposite?  mt_partition_tupconv_maps holds maps that
> convert the parent format to the partition format.
> mt_transition_tupconv_maps holds maps that convert the partition
> format to the parent format (= transition tuplestore format).

You're right, never mind.

>>> On Thu, May 18, 2017 at 10:16 PM, Amit Langote wrote:
>>>> +typedef struct TriggerTransitionState
>>>> +{
>>>> ...
>>>> +    bool        ttf_delete_old_table;
>>>>
>>>> Just curious: why ttf_?  TriggerTransition field?
>>>
>>> Oops.  Changed to "tts_".  I had renamed this struct but not the members.
>>
>> Ah.  BTW, maybe it's not a problem, but the existing TupleTableSlot's
>> member names are prefixed with tts_, too. :)
>
> Would TransitionCaptureState be a better name for this struct?

Yes, although losing the Trigger prefix might make it sound a bit
ambiguous.  Right above its definition, we have TriggerData.  So maybe
TriggerTransitionCaptureState, TriggerTransitionCaptureData, or
TriggerTransitionData may be worth considering.

Thanks,
Amit

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] transition table behavior with inheritance appears broken (was: Declarative partitioning - another take)
On Fri, May 19, 2017 at 5:51 PM, Amit Langote wrote:
> I saw in the latest patch that now ExecSetupTriggerTransitionState() looks
> at mtstate->mt_partition_dispatch_info when setting up the transition
> conversion map.  In the case where it's non-NULL, you may have realized
> that mt_transition_tupconv_map will be an exact copy of
> mt_partition_tupconv_maps that's already built.  Would it perhaps be a
> good idea to either share the same or make a copy using memcpy() instead
> of doing the convert_tuples_by_name() calls again?

Isn't it the opposite?  mt_partition_tupconv_maps holds maps that
convert the parent format to the partition format.
mt_transition_tupconv_maps holds maps that convert the partition
format to the parent format (= transition tuplestore format).

>> On Thu, May 18, 2017 at 10:16 PM, Amit Langote wrote:
>>> +typedef struct TriggerTransitionState
>>> +{
>>> ...
>>> +    bool        ttf_delete_old_table;
>>>
>>> Just curious: why ttf_?  TriggerTransition field?
>>
>> Oops.  Changed to "tts_".  I had renamed this struct but not the members.
>
> Ah.  BTW, maybe it's not a problem, but the existing TupleTableSlot's
> member names are prefixed with tts_, too. :)

Would TransitionCaptureState be a better name for this struct?

--
Thomas Munro
http://www.enterprisedb.com
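The direction distinction above — parent-to-partition maps versus partition-to-parent maps — can be illustrated with a toy name-based conversion map. This is plain Python; `build_map` and `convert` are hypothetical stand-ins for what `convert_tuples_by_name()` does, not its API:

```python
# Hypothetical sketch (not the PostgreSQL tupconvert API): a name-based
# tuple conversion map, showing why a parent->partition map and a
# partition->parent map are distinct objects rather than shared copies.

def build_map(in_cols, out_cols):
    """For each output column, the index of the same-named input column."""
    pos = {name: i for i, name in enumerate(in_cols)}
    return [pos[name] for name in out_cols]

def convert(tup, attr_map):
    """Reorder a tuple's values according to the conversion map."""
    return tuple(tup[i] for i in attr_map)

parent = ("a", "b", "c")
partition = ("b", "c", "a")     # same columns, different physical order

to_partition = build_map(parent, partition)  # parent format -> partition
to_parent = build_map(partition, parent)     # partition format -> parent

row = (1, 2, 3)                              # in parent order (a, b, c)
assert convert(row, to_partition) == (2, 3, 1)
# The two maps are inverses, not copies of each other:
assert convert(convert(row, to_partition), to_parent) == row
```

Routed inserts need the first direction; transition tuplestores, which hold rows in the parent's format, need the second.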
Re: [HACKERS] Race conditions with WAL sender PID lookups
On Fri, May 19, 2017 at 11:05 AM, Michael Paquier wrote:
> On Thu, May 18, 2017 at 1:43 PM, Thomas Munro wrote:
>> On Thu, May 11, 2017 at 1:48 PM, Michael Paquier wrote:
>>> I had my eyes on the WAL sender code this morning, and I have noticed
>>> that walsender.c is not completely consistent with the PID lookups it
>>> does.  In two code paths, the PID value is checked without holding the
>>> WAL sender spin lock (WalSndRqstFileReload and
>>> pg_stat_get_wal_senders), which looks like a very bad idea, contrary
>>> to what the new WalSndWaitStopping() does and what InitWalSenderSlot()
>>> has been doing for ages.
>>
>> There is also code that accesses shared walsender state without
>> spinlocks over in syncrep.c.  I think that file could use a few words
>> of explanation for why it's OK to access pid, state and flush without
>> synchronisation.
>
> Yes, that is read during the quorum and priority sync evaluation.
> Except for sync_standby_priority, all the other variables should be
> protected using the spin lock of the WAL sender.  walsender_private.h
> is clear regarding that.  So the current coding is inconsistent even
> there.

Attached is an updated patch.  Also, as Horiguchi-san pointed out earlier,
walreceiver seems to need a similar fix.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
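The pattern being enforced here — never read a WAL sender slot's pid except while holding that slot's spinlock — can be sketched with a minimal Python analogue. `threading.Lock` stands in for the slot's spinlock, and the class and field names are mine, not walsender_private.h's:

```python
# Minimal analogue (threading.Lock standing in for a PostgreSQL spinlock)
# of the rule the patch enforces: snapshot a WAL sender slot's pid only
# while holding the slot's per-slot lock, so a concurrent exit or reuse
# of the slot can never be observed mid-update.

import threading

class WalSndSlot:
    def __init__(self):
        self.mutex = threading.Lock()  # stands in for the slot spinlock
        self.pid = 0                   # 0 means "slot unused"

    def read_pid(self):
        # Correct: take the lock, copy the value, release, then act on
        # the local copy.  Reading self.pid without the lock is the bug
        # WalSndRqstFileReload and pg_stat_get_wal_senders had.
        with self.mutex:
            return self.pid

    def set_pid(self, pid):
        with self.mutex:
            self.pid = pid

slot = WalSndSlot()
slot.set_pid(4242)          # walsender claims the slot
assert slot.read_pid() == 4242
slot.set_pid(0)             # walsender exits, slot released
assert slot.read_pid() == 0
```

The callers then act on the copied pid rather than re-reading the shared field, since the slot may be reused the moment the lock is released.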