Re: [HACKERS] pg_basebackup vs. Windows and tablespaces
On 12/20/2014 05:59 AM, Amit Kapila wrote:
> On Wed, Dec 17, 2014 at 11:32 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> On Tue, Dec 16, 2014 at 10:11 PM, Heikki Linnakangas <hlinnakan...@vmware.com> wrote:
>>> On 12/16/2014 06:30 PM, Andrew Dunstan wrote:
>>>> I'm not clear why human readability is the major criterion here.
>>> As for that, it will be quite difficult for a human to distinguish a name with a space at the end from one without. I really think a simple encoding scheme would be much the best.
>> Yeah, that could work, but we need the special encoding mainly for newline; the others would work with the current patch. However, it might be worthwhile to do it for all kinds of spaces. Currently it just reads the line up to the newline using fscanf, but if we use special encoding, we might need to read the file character by character and check for a newline without a backslash (or other special encoding character); do you have something like that in mind? Another thing is that we need to take care to encode/decode the link path for tar format, as plain format might already be working.
> The attached patch handles the newline and other characters that are allowed in a tablespace path. As we need an escape character only for newline, I have added it only for newline. So after the patch, the tablespace_map file will look like below for different kinds of paths; as you can see, for tablespace id 16393, which contains a newline, there is an additional escape sequence \ before each newline, whereas other paths containing spaces work as is.
> 16391 /home/akapila/mywork/workspace_pg/master/tbs1
> 16393 /home/akapila/mywork/workspace_pg/master/tbs\ a\ b\
> 16392 /home/akapila/mywork/workspace_pg/master/tbs 2
>
> So with this, I have handled all review comments raised for this patch and it is ready for review. As the status of this patch was changed from Ready for Committer to Waiting on Author, ideally it should go back to Ready for Committer; however, as I am not very sure about this point, I will change it to Needs Review (correct me if I am wrong).
>
> Summary of latest changes:
> 1. Changed the file name from symlink_label to tablespace_map, and changed the same everywhere in comments and variable names.
> 2. This feature will be supported on both Windows and Linux; the tablespace_map file will be generated on both to restore tablespace links during archive recovery.
> 3. Handling for special characters in the tablespace path name.
> 4. Documentation updates.

This generally looks good, but I have a couple of questions before I commit it.

First, why is the new option for the BASE_BACKUP command of the Streaming Replication protocol TAR? It seems rather misleading. Shouldn't it be something like TABLESPACEMAP? I realize we ask for it when pg_basebackup is operating in tar format mode, but the backend has no notion of that, does it? The only thing this does is trigger the sending of the tablespace map, so surely that's what the protocol option should suggest.

Second, these lines in xlog.c seem wrong:

    else if ((ch == '\n' || ch == '\r') && prev_ch == '\\')
        str[i-1] = '\n';

It looks to me like we should be putting ch in the string, not arbitrarily transforming \r into \n.

cheers

andrew

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Auditing extension for PostgreSQL (Take 2)
On Thu, May 7, 2015 at 03:41:13PM -0400, Stephen Frost wrote:

Bruce,

What is our history of doing things in contrib because we are not sure what we want, then moving it into core? My general recollection is that there is usually something in the contrib version we don't want to add to core, and people are locked into the contrib API, so we are left supporting it, e.g. xml2, though you could argue that auditing doesn't have application lock-in and xml2 was tied to an external library feature.

That's exactly the argument that I'd make there. My recollection is that we did move pieces of hstore and have moved pieces of other contrib modules into core; perhaps we've not yet had a case where we've completely pulled one in, but given the relatively low level of dependency associated with pgAudit, I'm certainly hopeful that we'll be able to here. Lack of history which could be pointed to (that's exactly what I'm suggesting here) doesn't seem like a reason not to move forward, though; the concept of having a capability initially in contrib and then bringing it into core has certainly been discussed a number of times on other threads and generally makes sense, at least to me, especially when there's little API associated with the extension.

OK, I am just asking so we remember this can go badly, not that it will. I guess the over-arching question is whether we have to put this into contrib so we can get feedback and change the API, or whether using it from PGXN or incrementally adding it to core is the right approach.

I'm surprised to hear this question of whether we have to do X, Y, or Z. pgAudit brings a fantastic capability to PostgreSQL which users have been asking to have for many years, and it is a feature we should be itching to have included.
That we can then take it and incrementally add it to core, to leverage things which are only available in core (as discussed last summer, including grammar and relation metadata), looks to me like a great direction to go in, and one which we could use over and over to bring new features and capabilities to PG. Lack of auditing is one of the capabilities that users coming from other large RDBMSs see as preventing their ability to migrate to PostgreSQL. Other databases (open and closed source) have it and have had it for years, and it's a serious shortcoming of ours that makes users either stick with their existing vendor or look to other closed-source or even open-source solutions.

Yes, more extensive auditing is definitely needed.

Additionally, as discussed last summer, we can provide a migration path (which does not need to be automated or even feature compatible) from pgAudit to an in-core solution and then sunset pgAudit.

Uh, that usually ends badly too.

I'm confused by this, as it was the result of our discussion and your suggestion from last summer: 20140730192136.gm2...@momjian.us I certainly hope that hasn't substantially changed, as that entire discussion is why we're even able to have this discussion about including pgAudit now. I was very much on board with trying to work on an in-core solution until that thread convinced me that the upgrade concerns which I was worried about wouldn't be an issue for inclusion of an extension to provide the capability.

I had forgotten about that. Yes, pg_upgrade can easily do this.

Building an in-core solution, in my estimation at least, is going to require at least a couple of release cycles, and having feedback from users of pgAudit will be very valuable to building a good solution, but I don't believe we'll get that feedback without including it.

See above --- is it "make users jump through the hoops and only then they will use it and give us feedback"?
How motivated can they be if they can't use the PGXN version? Why wouldn't we want to include this capability in PG?

I also addressed the "why not PGXN" question above. It is not a lack of motivation but the entire intent and design of the PGXN system which precludes most large organizations from using it, particularly for sensitive requirements such as auditing.

So they trust Postgres, but not the PGXN authors --- I guess legally that makes sense. The bottom line is that for the _years_ we ship pg_audit in /contrib, we will have some logging stuff in postgresql.conf and some in contrib/pg_audit, and that distinction is going to look quite odd. To the extent you incrementally add to core, you will have duplicate functionality in both places.

That's entirely correct, of course, but I'm not seeing it as an issue. I'm certainly prepared to support shipping pgAudit in contrib, as are others based on how this feature has been developed, for the years that we'll have 9.5, 9.6 (or 10.0, etc) supported, and that's also another reason why users will use it when they wouldn't use something on PGXN. Further, I look forward
Re: [HACKERS] initdb -S and tablespaces
On 2015-05-08 22:08:31 -0400, Robert Haas wrote:
> That seems a bit better. I think it's really important, if we're going to start to try to make fsync=off anything other than a toy,

I think it's long past that. I've seen many, many people use it during initial data loading.

> that we document really clearly the circumstances in which it is or is not safe:

Yea, we really should have done that a long time ago.

> - If you crash while fsync=off, your cluster may be corrupted.

HW crash, right?

> - If you crash while fsync=on, but it was off at the last checkpoint, your cluster may be corrupted.
> - If you turn fsync=off, do stuff, turn fsync=on, and checkpoint successfully, a subsequent crash should not corrupt anything.

Yep.

> Of course, even the last one isn't totally bullet-proof. Suppose one backend fails to absorb the new setting for some reason...

I've a hard time worrying much about that one...

Greetings,

Andres Freund
Re: [HACKERS] Manipulating complex types as non-contiguous structures in-memory
Hi,

> The attached updated patch reduces both of those do-loop tests to about 60 msec on my machine. It contains two improvements over the 1.1 patch:

Looking at this. First reading the patch to understand the details.

* The VARTAG_IS_EXPANDED(tag) trick in VARTAG_SIZE is unlikely to be beneficial: before, the compiler could implement the whole thing as a computed goto or lookup table; afterwards it cannot.

* It'd be nice if the get_flat_size comment in expandeddatum.h could specify whether the header size is included. That varies enough around toast that it seems worthwhile.

* You were rather bothered by the potential of multiple evaluations for the ilist stuff. And now the AARR macros are full of them...

* I find the ARRAY_ITER_VARS/ARRAY_ITER_NEXT macros rather ugly. I don't buy the argument that turning them into functions will be slower. I'd bet the contrary on common platforms.

* Not a fan of the EH_ prefix in array_expanded.c and EOH_ elsewhere. Just looks ugly to me. Whatever.

* The list of hardwired safe ops in exec_check_rw_parameter is somewhat sad. Don't have a better idea though.

* "Also, a C function that is modifying a read-write expanded value in-place should take care to leave the value in a sane state if it fails partway through." - that's a pretty hefty requirement imo. I wonder if it'd not be possible to convert RW to RO if a value originates from outside an exception block. IIRC that'd be useful for a bunch of other error cases we currently basically shrug away (something around toast and aborted xacts comes to mind).

* The forced RW-RO conversion in subquery scans is a bit sad, but it seems like something left for later.

These are more judgement calls than anything else...

Somewhere in the thread you comment on the fact that it's a bit sad that plpgsql is the sole beneficiary of this (unless some function forces expansion internally). I'd be ok to leave it at that for now.
It'd be quite cool to get some feedback from postgis folks about the suitability of this for their cases.

I've not really looked into performance improvements around this, choosing instead to look into somewhat reasonable cases where it'll regress. ISTM that the worst case for the new situation is large arrays that exist as plpgsql variables but are only ever passed on. Say e.g. a function that accepts an array among other parameters and passes it on to another function. As a rather extreme case of this:

CREATE OR REPLACE FUNCTION plpgsql_array_length(p_a anyarray)
RETURNS int LANGUAGE plpgsql AS
$$
BEGIN
    RETURN array_length(p_a, 1);
END;
$$;

SELECT plpgsql_array_length(b.arr)
FROM (SELECT array_agg(d) FROM generate_series(1, 1) d) b(arr),
     generate_series(1, 10) repeat;

With \o /dev/null redirecting the output, an assert build goes from 325.511 ms to 655.733 ms, an optimized build from 94.648 ms to 287.574 ms.

Now this is a fairly extreme example, and I don't think it'll get much worse than that. But I do think there's a bunch of cases where values exist in plpgsql that won't actually be accessed. Say, e.g., return values from queries that are then conditionally returned and such. I'm not sure it's possible to do anything about that. Expanding only in cases where it'd be beneficial is going to be hard.

Greetings,

Andres Freund
Re: [HACKERS] ALTER SYSTEM and ParseConfigFile()
* Stephen Frost (sfr...@snowman.net) wrote:
> I have a feeling that much of the GUC machinery could be reworked; as Jim mentioned up-thread, it might be possible to use memory contexts too, which would make a lot of it much cleaner, I believe.

Err, on another thread, not on this one. :) And Jim Nasby, to be more specific.

Thanks!

Stephen
Re: [HACKERS] initdb -S and tablespaces
Andres Freund <and...@anarazel.de> writes:
> On 2015-05-08 22:08:31 -0400, Robert Haas wrote:
>> Of course, even the last one isn't totally bullet-proof. Suppose one backend fails to absorb the new setting for some reason...
> I've a hard time worrying much about that one...

You should. At the very least, whatever recipe we write for changing fsync safely has to include a clause like "wait for all postmaster children to have absorbed the new fsync setting". The facts that (a) this could be a long time and (b) there's no easy way to be entirely certain about when it's done don't make it something you should ignore.

I wonder whether we should change fsync to be PGC_POSTMASTER and then document the safe procedure as requiring a postmaster restart.

regards, tom lane
Re: [HACKERS] ALTER SYSTEM and ParseConfigFile()
Stephen Frost wrote:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> I think the code is OK, but yeah, this comment should be changed to reflect the idea that the function will append entries to an existing list of name/value pairs (and thus, that head_p/tail_p are not output but in/out parameters).
> I've pushed a fix for the comment to address this.

This open-coded list thingy is pretty odd. I wonder if it'd be nicer to replace it with ilist. (Not for 9.5, of course.)

--
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [HACKERS] ALTER SYSTEM and ParseConfigFile()
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> Stephen Frost wrote:
>> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>>> I think the code is OK, but yeah, this comment should be changed to reflect the idea that the function will append entries to an existing list of name/value pairs (and thus, that head_p/tail_p are not output but in/out parameters).
>> I've pushed a fix for the comment to address this.
> This open-coded list thingy is pretty odd. I wonder if it'd be nicer to replace it with ilist. (Not for 9.5, of course.)

I have a feeling that much of the GUC machinery could be reworked; as Jim mentioned up-thread, it might be possible to use memory contexts too, which would make a lot of it much cleaner, I believe.

Agreed that it's not for 9.5 though.

Thanks!

Stephen
Re: [HACKERS] Default Roles (was: Additional role attributes)
All,

* Stephen Frost (sfr...@snowman.net) wrote:
> Starting a new thread, as suggested by Robert, for consideration of adding default roles for sets of administrative functions, thereby removing the need for superuser-level roles in many use-cases. [...] This is part 2 and really the guts of the changes proposed. Part 1 was the patch sent earlier today to change pg_stat_get_activity() to use a tuplestore, and this patch depends on that one. I'll rebase and resend after that's gone in. I did notice that Andres just pushed upsert though, and it wouldn't surprise me if there are now merge conflicts. Will address all of that tomorrow, in any case.

Here's a rebase with a few additional items as follows.

Andres suggested that we drop the REPLICATION role attribute and just use membership in pg_replication instead. That's turned out quite fantastically, as we can now handle upgrades without breaking anything: CREATE ROLE and ALTER ROLE still accept the attribute but simply grant pg_replication to the role instead, and postinit.c has been changed to check role membership, similar to other pg_hba role membership checks, when a replication connection comes in. Hats off to Andres for his suggestion.

I've added two more default roles, since it was pointed out to me that I hadn't exactly mimicked the role attributes originally proposed. These are pg_rotate_logfile and pg_signal_backend. This also removes any direct object grants to pg_admin; it now means only all of the other roles combined, without anything additional.

Documentation and regression tests updated.

Comments and suggestions are most welcome, as always.

Thanks!
Stephen

From 381a3e619b8450a0f3a5225d70098fcb8291fd52 Mon Sep 17 00:00:00 2001
From: Stephen Frost sfr...@snowman.net
Date: Thu, 7 May 2015 23:35:03 -0400
Subject: [PATCH] Create default roles for administrative functions

To reduce the number of users on a system who are superusers, create a set of roles by default during initdb which are granted rights to certain functions and views that allow non-superusers to perform specific administrative tasks and have access to privileged information.

The prefix pg_ is reserved for default system roles, similar to schemas and tablespaces. pg_upgrade is modified to check for any roles which start with pg_ and complain if they exist. pg_dumpall is modified to not dump out roles starting with pg_ on 9.5-and-above systems. CreateRole is modified to refuse creation of roles which start with pg_, similar to CreateSchema.

Roles created are: pg_backup, pg_monitor, pg_replay, pg_replication, pg_rotate_logfile, pg_signal_backend and pg_admin.

Behavior of existing system views is unchanged. Views and functions are added for pg_stat_activity_all and pg_stat_replication_all, to provide unfiltered results for users granted the pg_monitor role. is_superuser() checks are removed and EXECUTE revoked from public and instead granted to the appropriate role for appropriate administrative functions.

Role attribute REPLICATION is superseded by the pg_replication role and therefore removed. CREATE ROLE and ALTER ROLE will still accept the option and transform it into a GRANT pg_replication TO role; to facilitate upgrades from older versions.
---
 contrib/test_decoding/expected/permissions.out |   8 +-
 doc/src/sgml/catalogs.sgml                     |  32 +---
 doc/src/sgml/ref/alter_role.sgml               |   5 +-
 doc/src/sgml/ref/create_role.sgml              |  16 --
 doc/src/sgml/ref/createuser.sgml               |  22 ---
 doc/src/sgml/ref/pg_basebackup.sgml            |   4 +-
 doc/src/sgml/ref/pg_receivexlog.sgml           |   4 +-
 doc/src/sgml/user-manag.sgml                   |  87 ++
 src/backend/access/transam/xlogfuncs.c         |  30
 src/backend/catalog/catalog.c                  |   5 +-
 src/backend/catalog/system_views.sql           | 134 ++-
 src/backend/commands/user.c                    |  61 +--
 src/backend/replication/logical/logicalfuncs.c |  11 --
 src/backend/replication/slotfuncs.c            |  15 --
 src/backend/replication/walsender.c            |  92 --
 src/backend/utils/adt/misc.c                   |  64 +--
 src/backend/utils/adt/pgstatfuncs.c            | 103 +++-
 src/backend/utils/init/miscinit.c              |  18 --
 src/backend/utils/init/postinit.c              |   2 +-
 src/bin/pg_dump/pg_dumpall.c                   |  17 +-
 src/bin/pg_upgrade/check.c                     |  40 -
 src/bin/psql/describe.c                        |   2 +-
 src/bin/psql/tab-complete.c                    |  16 +-
 src/bin/scripts/createuser.c                   |  15 --
 src/include/catalog/pg_authid.h                |  31 +++-
 src/include/catalog/pg_proc.h                  |   6 +
 src/include/miscadmin.h                        |   1 -
 src/include/replication/walsender.h            |   1 +
 src/include/utils/builtins.h                   |   1 +
Re: [HACKERS] a fast bloat measurement tool (was Re: Measuring relation free space)
At 2015-05-09 02:20:51 +0200, and...@anarazel.de wrote:
>> + * Abhijit Menon-Sen a...@2ndquadrant.com
>> + * Portions Copyright (c) 2001, 2002 Tatsuo Ishii (from pgstattuple)
> I think for new extensions we don't really add authors and such anymore.

OK, I'll get rid of the boilerplate.

> Hm. I do wonder if this should really be called 'statbloat'. Perhaps it'd more appropriately be called pg_estimate_bloat or somesuch?

Since we've moved it into pgstattuple, perhaps pgstattuple_approximate() or pgstattuple_estimated() would be a better name. I don't really care; I'll change it to whatever people like.

> Also, is free_space really exact? The fsm isn't byte exact.

You're right, I'll fix that.

> Why go through C strings? I personally think we really shouldn't add more callers to BuildTupleFromCStrings, it's such an absurd interface.

I just copied this more or less blindly from pgstattuple. I'll fix it and submit a separate patch to fix the equivalent code in pgstattuple.

> I think it'd actually be fine to just say that the relation has to be a table or matview.

Good point. Agreed.

> I don't believe that rationale is really true these days. I'd much rather just rip this out here than copy the rather complex logic.

Yes, thanks, I very much agree that this isn't really worth copying. I'll drop it in my next submission.

> I haven't checked, but I'm not sure that it's safe/meaningful to call PageGetHeapFreeSpace() on a new page.

OK, I'll check and fix if necessary.

Thanks for the feedback. I'll submit a revised patch shortly.

-- Abhijit
Re: [HACKERS] Async execution of postgres_fdw.
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost <sfr...@snowman.net> writes:
>> I'm all for improving performance of postgres_fdw and would like to see us support sending queries off to be worked asynchronously, but starting execution on the remote server during ExecInitNode is against the documented FDW API spec. I discussed exactly this issue over a year ago here: http://www.postgresql.org/message-id/20131104032604.gb2...@tamriel.snowman.net Sadly, there weren't any direct responses to that email, but I do recall having a discussion on another thread (or in person?) with Tom where we ended up agreeing that we can't simply remove that requirement from the docs or the API.
> Yeah. There are at least a couple of reasons why not:

Thanks for the reminders of those.

> Also, so far as a quick review of the actual patch goes, I would really like to see this lose the PFC wrapper layer, which accounts for 95% of the code churn in the patch and doesn't seem to add any actual value. What it does add is unchecked malloc failure conditions.

Agreed, the wrapper isn't doing anything particularly useful; I had started out thinking that would be my first comment until it became clear where all the performance improvement was coming from.

I've gone ahead and marked this as Rejected. The concept of async execution of postgres_fdw is certainly still open and a worthwhile goal, but this implementation isn't the way to achieve that.

Thanks!

Stephen
Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)
Robert Haas <robertmh...@gmail.com> writes:
> On Fri, May 8, 2015 at 5:48 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I think that we'd really be better off insisting on same server (as in same pg_foreign_server OID), hence automatically same FDW, and what's even more important, same user mapping for any possible query execution context. The possibility that there are some corner cases where some FDWs could optimize other scenarios seems to me to be poor return for the bugs and security holes that will arise any time typical FDWs forget to check this.
> I originally wanted to go quite the other way with this and check for join pushdown via handler X any time at least one of the two relations involved used handler X, even if the other one used some other handler or was a plain table. In particular, it seems to me quite plausible to want to teach an FDW that a certain local table is replicated on a remote node, allowing a join between a foreign table and a plain table to be pushed down.

If we did do something like that, I think a saner implementation would involve substituting a foreign table for the local one earlier, via view expansion. So by the time we are doing join planning, there would be no need to consider cross-server joins anyway.

> This infrastructure can't be used that way anyhow, so maybe there's no harm in tightening it up, but I'm wary of circumscribing what FDW authors can do.

Somebody who's really intent on shooting themselves in the foot can always use set_join_pathlist_hook to inject paths for arbitrary joins. The FDW mechanism should support reasonable use cases without undue complication, and I doubt that what we've got now is adding anything except complication and risk of bugs.

For the archives' sake, let me lay out a couple of reasons why an FDW that tried to allow cross-server joins would almost certainly be broken, and broken in security-relevant ways.
Suppose for instance that postgres_fdw tried to be smart and drill down into foreign tables' server IDs to allow joining of any two tables that have the same effective host name, port, database name, user name, and anything else you think would be relevant to its choice of connections. The trouble with that is that the user mapping is context dependent; in particular, one local userID might map to the same remote user name for two different server OIDs, while another might map to different user names. So if we plan a query under the first userID we might decide it's okay to do the join remotely. Then, if we re-use that plan while executing as another userID (which is entirely possible), what will probably happen is that the remote join query will get sent off under one or the other of the remote usernames associated with the second local userID. This could lead to either a permission failure, or a remote table access that should not be allowed to the current local userID. Admittedly, such cases might be rare in practice, but it's still a security hole. Also, even if the FDW is defensive enough to recheck full matching of the tables' connection properties at execution time, there's not much it could do about the situation except fail; it couldn't cause a re-plan to occur.

For another case, we do not have any mechanism whereby operations like ALTER SERVER OPTIONS could invalidate existing plans. Thus, even if the two tables' connection properties matched at plan time, there's no guarantee that they still match at execution. This is probably not a security hole (at least not if you assume foreign-server owners are trusted users), but it's still a bug that exists only if someone tries to allow cross-server joins.

For these reasons, I think that if an FDW tried to be laxer than "tables must be on the same pg_foreign_server entry to be joined", the odds approach unity that it would be broken, and probably dangerously broken. So we should just make that check for the FDWs.
Anybody who thinks they're smarter than the average bear can use set_join_pathlist_hook, but they are probably wrong.

regards, tom lane
Re: [HACKERS] Rounding to even for numeric data type
v2 applied and tested.

[...]

Not sure about that; the tests are placed here to be consistent with what is done for float8. Maybe the float8-to-numeric casts could have been in numeric too.

[...]

> I reworked the example in the docs.

Indeed, looks good.

--
Fabien.
Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)
2015-05-09 11:21 GMT+09:00 Robert Haas <robertmh...@gmail.com>:
> On Fri, May 8, 2015 at 5:48 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> ... btw, I just noticed something that had escaped me because it seems so obviously wrong that I had not even stopped to consider the possibility that the code was doing what it's doing. To wit, that the planner supposes that two foreign tables are potentially remote-joinable if they share the same underlying FDW handler function. Not the same server, and not even the same pg_foreign_data_wrapper entry, but the pg_proc entry for the handler function. I think this is fundamentally bogus. Under what circumstances are we not just laying off the need to check "same server origin" onto the FDW? How is it that the urgent need for the FDW to check for that isn't even mentioned in the documentation?
>> I think that we'd really be better off insisting on same server (as in same pg_foreign_server OID), hence automatically same FDW, and what's even more important, same user mapping for any possible query execution context. The possibility that there are some corner cases where some FDWs could optimize other scenarios seems to me to be poor return for the bugs and security holes that will arise any time typical FDWs forget to check this.
> I originally wanted to go quite the other way with this and check for join pushdown via handler X any time at least one of the two relations involved used handler X, even if the other one used some other handler or was a plain table. In particular, it seems to me quite plausible to want to teach an FDW that a certain local table is replicated on a remote node, allowing a join between a foreign table and a plain table to be pushed down. This infrastructure can't be used that way anyhow, so maybe there's no harm in tightening it up, but I'm wary of circumscribing what FDW authors can do.
> I think it's better to be rather expansive in terms of when we call them and let them return without doing anything some of the time than to define the situations in which we call them too narrowly and end up ruling out interesting use cases.

Probably it is a relatively minor case to join a foreign table and a replicated local relation on the remote side. Even if the rough check by sameness of foreign server id does not invoke GetForeignJoinPaths, an FDW driver can still implement its own arbitrary logic using set_join_pathlist_hook at its own risk, can't it?

The attached patch changes the logic that checks joinability of two foreign relations. As discussed upthread, it checks the foreign server id instead of the handler function. build_join_rel() sets fdw_server of RelOptInfo if the inner and outer foreign relations have the same value, which eventually allows GetForeignJoinPaths to be invoked in add_paths_to_joinrel().

Thanks,
--
KaiGai Kohei <kai...@kaigai.gr.jp>

custom-join-fdw-pushdown-check-by-server.patch
Description: Binary data
Re: [HACKERS] ALTER SYSTEM and ParseConfigFile()
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> I think the code is OK, but yeah, this comment should be changed to reflect the idea that the function will append entries to an existing list of name/value pairs (and thus, that head_p/tail_p are not output but in/out parameters).

I've pushed a fix for the comment to address this.

Thanks!

Stephen
Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)
2015-05-09 8:18 GMT+09:00 Kohei KaiGai kai...@kaigai.gr.jp: 2015-05-09 2:46 GMT+09:00 Tom Lane t...@sss.pgh.pa.us: Kouhei Kaigai kai...@ak.jp.nec.com writes: I've been trying to code-review this patch, because the documentation seemed several bricks shy of a load, and I find myself entirely confused by the fdw_ps_tlist and custom_ps_tlist fields. The main point of your concern is the lack of documentation/comments introducing how the pseudo-scan targetlist works here, isn't it? Well, there's a bunch of omissions and outright errors in the docs and comments, but this is the main issue that I was uncertain how to fix from looking at the patch. Also, if that is what they're for (ie, to allow the FDW to redefine the scan tuple contents) it would likely be better to decouple that feature from whether the plan node is for a simple scan or a join. In this version, we don't intend FDW/CSP to redefine the contents of scan tuples, even though I want to off-load heavy targetlist calculation workloads to external computing resources in *a future version*. I do not think it's a good idea to introduce such a field now and then redefine how it works and what it's for in a future version. We should not be moving the FDW APIs around more than we absolutely have to, especially not in ways that wouldn't throw an obvious compile error for un-updated code. Also, the longer we wait to make a change that we know we want, the more pain we inflict on FDW authors (simply because there will be more of them a year from now than there are today). Ah, my sentence above didn't intend reusing the existing field for different work in a future version. It's just what I want to support in a future version. Yep, I see. It is not a good idea to redefine the existing field for a different purpose silently. It's not my plan. The business about resjunk columns in that list also seems a bit half-baked, or at least underdocumented.
I'll add source code comments to explain how it works and when it has resjunk=true. It will be a bit too deep to be introduced in the SGML file. I don't actually see a reason for resjunk marking in that list at all, if what it's for is to define the contents of the scan tuple. I think we should just s/ExecCleanTypeFromTL/ExecTypeFromTL/ in nodeForeignscan and nodeCustom, and get rid of the sanity check in create_foreignscan_plan (which is pretty pointless anyway, considering the number of other ways you could screw up that tlist without it being detected). http://www.postgresql.org/message-id/9a28c8860f777e439aa12e8aea7694f8010d7...@bpxm15gp.gisp.nec.co.jp Does the introduction in the above post make sense? The *_ps_tlist is not only used as a basis of the scan-tuple descriptor, but also used to resolve Var nodes with varno==INDEX_VAR in the EXPLAIN command. On the other hand, the existence of the junk entries (which are referenced in external computing resources only) may cause unnecessary projection. So, I want to distinguish target entries that form the basis of the scan-tuple descriptor from other ones that exist just for the EXPLAIN command. I'm also inclined to rename the fields to fdw_scan_tlist/custom_scan_tlist, which would better reflect what they do, and to change the API of make_foreignscan() to add a parameter corresponding to the scan tlist. It's utterly bizarre and error-prone that this patch has added a field that the FDW is supposed to set and not changed make_foreignscan to match. OK, I'll do both of the changes. The name ps_tlist is short for pseudo-scan target-list, so fdw_scan_tlist/custom_scan_tlist match the original intent. The attached patch renamed *_ps_tlist to *_scan_tlist according to the suggestion. Also, I put a few detailed source code comments around this alternative scan_tlist.
Thanks, -- KaiGai Kohei kai...@kaigai.gr.jp custom-join-rename-ps_tlist.patch Description: Binary data
Re: [HACKERS] Rounding to even for numeric data type
On Sat, May 2, 2015 at 9:53 PM, Fabien COELHO wrote: Quick review: patches apply, make check is fine, all is well. Thanks for the feedback, Fabien! All the casting tests could be put in numeric.sql, as they are all related to numeric and that would avoid duplicating the values lists. Not sure about that, the tests are placed here to be consistent with what is done for float8. For the documentation, I would also add 3.5 so that rounding to even is even clearer:-) Good idea. I reworked the example in the docs. -- Michael From 7a40acab425f25f7c06344b2e039405542ed020e Mon Sep 17 00:00:00 2001 From: Michael Paquier michael@otacoo.com Date: Sat, 9 May 2015 22:15:47 +0900 Subject: [PATCH] Precise rounding behavior of numeric and double precision in docs Regression tests improving the coverage in this area are added as well. --- doc/src/sgml/datatype.sgml| 19 +++ src/test/regress/expected/int2.out| 20 src/test/regress/expected/int4.out| 20 src/test/regress/expected/int8.out| 20 src/test/regress/expected/numeric.out | 24 src/test/regress/sql/int2.sql | 10 ++ src/test/regress/sql/int4.sql | 10 ++ src/test/regress/sql/int8.sql | 10 ++ src/test/regress/sql/numeric.sql | 10 ++ 9 files changed, 143 insertions(+) diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index da1f25f..24efe25 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -612,6 +612,25 @@ NUMERIC equivalent. Both types are part of the acronymSQL/acronym standard. /para + +para + When using the functionround/ function, the typenumeric/type + type rounds ties away from zero, and the typedouble precision/type + type rounds ties to the nearest even number.
+ +programlisting +SELECT num, + round(num::double precision) AS prec_round, + round(num::numeric) AS nume_round + FROM generate_series(1.5, 3.5, 1) as num; + num | prec_round | nume_round +-++ + 1.5 | 2 | 2 + 2.5 | 2 | 3 + 3.5 | 4 | 4 +(3 rows) +/programlisting +/para /sect2 diff --git a/src/test/regress/expected/int2.out b/src/test/regress/expected/int2.out index 311fe73..3ea4ed9 100644 --- a/src/test/regress/expected/int2.out +++ b/src/test/regress/expected/int2.out @@ -286,3 +286,23 @@ FROM (VALUES (-2.5::float8), 2.5 | 2 (7 rows) +-- check rounding when casting from numeric +SELECT x, x::int2 AS int2_value +FROM (VALUES (-2.5::numeric), + (-1.5::numeric), + (-0.5::numeric), + (0.0::numeric), + (0.5::numeric), + (1.5::numeric), + (2.5::numeric)) t(x); + x | int2_value +--+ + -2.5 | -3 + -1.5 | -2 + -0.5 | -1 + 0.0 | 0 + 0.5 | 1 + 1.5 | 2 + 2.5 | 3 +(7 rows) + diff --git a/src/test/regress/expected/int4.out b/src/test/regress/expected/int4.out index 83fe022..372fd4d 100644 --- a/src/test/regress/expected/int4.out +++ b/src/test/regress/expected/int4.out @@ -383,3 +383,23 @@ FROM (VALUES (-2.5::float8), 2.5 | 2 (7 rows) +-- check rounding when casting from numeric +SELECT x, x::int4 AS int4_value +FROM (VALUES (-2.5::numeric), + (-1.5::numeric), + (-0.5::numeric), + (0.0::numeric), + (0.5::numeric), + (1.5::numeric), + (2.5::numeric)) t(x); + x | int4_value +--+ + -2.5 | -3 + -1.5 | -2 + -0.5 | -1 + 0.0 | 0 + 0.5 | 1 + 1.5 | 2 + 2.5 | 3 +(7 rows) + diff --git a/src/test/regress/expected/int8.out b/src/test/regress/expected/int8.out index da8be51..ed0bd34 100644 --- a/src/test/regress/expected/int8.out +++ b/src/test/regress/expected/int8.out @@ -866,3 +866,23 @@ FROM (VALUES (-2.5::float8), 2.5 | 2 (7 rows) +-- check rounding when casting from numeric +SELECT x, x::int8 AS int8_value +FROM (VALUES (-2.5::numeric), + (-1.5::numeric), + (-0.5::numeric), + (0.0::numeric), + (0.5::numeric), + (1.5::numeric), + (2.5::numeric)) t(x); + x | int8_value +--+ + -2.5 | -3 + 
-1.5 | -2 + -0.5 | -1 + 0.0 | 0 + 0.5 | 1 + 1.5 | 2 + 2.5 | 3 +(7 rows) + diff --git a/src/test/regress/expected/numeric.out b/src/test/regress/expected/numeric.out index 9d68145..e6ee548 100644 --- a/src/test/regress/expected/numeric.out +++ b/src/test/regress/expected/numeric.out @@ -730,6 +730,30 @@ SELECT a, ceil(a), ceiling(a), floor(a), round(a) FROM ceil_floor_round; (7 rows) DROP TABLE ceil_floor_round; +-- Check rounding, it should round ties away from zero.
Re: [HACKERS] subxcnt defined as signed integer in SnapshotData and SerializeSnapshotData
On Fri, May 8, 2015 at 10:27 PM, Simon Riggs wrote: On 8 May 2015 at 13:02, Michael Paquier wrote: I think that we should redefine subxcnt as uint32 for consistency with xcnt, and remove the two assertions that 924bcf4 has introduced. I could get a patch quickly done FWIW. (uint32) +1 Attached is the patch. This ended up being far simpler than I first thought. Regards, -- Michael diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c index a2cb4a0..e1ca52c 100644 --- a/src/backend/utils/time/snapmgr.c +++ b/src/backend/utils/time/snapmgr.c @@ -168,7 +168,7 @@ typedef struct SerializedSnapshotData TransactionId xmin; TransactionId xmax; uint32 xcnt; - int32 subxcnt; + uint32 subxcnt; bool suboverflowed; bool takenDuringRecovery; CommandId curcid; @@ -1480,9 +1480,6 @@ SerializeSnapshot(Snapshot snapshot, char *start_address) { SerializedSnapshotData *serialized_snapshot; - Assert(snapshot->xcnt >= 0); - Assert(snapshot->subxcnt >= 0); - serialized_snapshot = (SerializedSnapshotData *) start_address; /* Copy all required fields */ diff --git a/src/include/utils/snapshot.h b/src/include/utils/snapshot.h index a734bf0..d37ff0b 100644 --- a/src/include/utils/snapshot.h +++ b/src/include/utils/snapshot.h @@ -85,7 +85,7 @@ typedef struct SnapshotData * out any that are >= xmax */ TransactionId *subxip; - int32 subxcnt; /* # of xact ids in subxip[] */ + uint32 subxcnt; /* # of xact ids in subxip[] */ bool suboverflowed; /* has the subxip array overflowed? */ bool takenDuringRecovery; /* recovery-shaped snapshot? */
Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)
2015-05-09 8:32 GMT+09:00 Kohei KaiGai kai...@kaigai.gr.jp: 2015-05-09 3:51 GMT+09:00 Tom Lane t...@sss.pgh.pa.us: Robert Haas robertmh...@gmail.com writes: On Fri, May 8, 2015 at 1:46 PM, Tom Lane t...@sss.pgh.pa.us wrote: That's nice, but 9.5 feature freeze is only a week away. I don't have a lot of confidence that this stuff is actually in a state where we won't regret shipping it in 9.5. Yeah. The POC you were asking for upthread certainly exists and has for a while, or I would not have committed this. But I do not think it likely that the postgres_fdw support will be ready for 9.5. Well, we have two alternatives. I can keep hacking on this and get it to a state where it seems credible to me, but we won't have any proof that it actually works (though perhaps we could treat any problems as bugs that should hopefully get found before 9.5 ships, if a postgres_fdw patch shows up in the next few months). Or we could revert the whole thing and bounce it to the 9.6 cycle. I don't really like doing the latter, but I'm pretty uncomfortable with committing to published FDW APIs that are (a) as messy as this and (b) practically untested. The odds that something slipped through the cracks are high. Aside from the other gripes I raised, I'm exceedingly unhappy with the ad-hoc APIs proposed for GetForeignJoinPaths and set_join_pathlist_hook. It's okay for internal calls in joinpath.c to look like that, but exporting that set of parameters seems like pure folly. We've changed those parameter lists repeatedly (for instance in 9.2 and again in 9.3); the odds that they'll need to change again in future approach 100%. One way we could reduce the risk of code breakage there is to stuff all or most of those parameters into a struct. This might result in a small slowdown for the internal calls, or then again maybe not --- there probably aren't many architectures that can pass 10 parameters in registers anyway. Is it like a following structure definition? 
typedef struct { PlannerInfo *root; RelOptInfo *joinrel; RelOptInfo *outerrel; RelOptInfo *innerrel; List *restrictlist; JoinType jointype; SpecialJoinInfo *sjinfo; SemiAntiJoinFactors *semifactors; Relids param_source_rels; Relids extra_lateral_rels; } SetJoinPathListArgs; I agree with the idea. It also helps CSP driver implementations where a driver calls the next driver that was already chained at its installation. if (set_join_pathlist_next) set_join_pathlist_next(args); is a more stable manner than if (set_join_pathlist_next) set_join_pathlist_next(root, joinrel, outerrel, innerrel, restrictlist, jointype, sjinfo, semifactors, param_source_rels, extra_lateral_rels); The attached patch newly defines an ExtraJoinPathArgs struct to pack all the necessary information to be delivered to GetForeignJoinPaths and set_join_pathlist_hook. Its definition is below: typedef struct { PlannerInfo*root; RelOptInfo *joinrel; RelOptInfo *outerrel; RelOptInfo *innerrel; List *restrictlist; JoinTypejointype; SpecialJoinInfo *sjinfo; SemiAntiJoinFactors *semifactors; Relids param_source_rels; Relids extra_lateral_rels; } ExtraJoinPathArgs; then, the hook invocation gets much simplified, like: /* * 6. Finally, give extensions a chance to manipulate the path list. */ if (set_join_pathlist_hook) set_join_pathlist_hook(jargs); Thanks, -- KaiGai Kohei kai...@kaigai.gr.jp custom-join-argument-by-struct.patch Description: Binary data
Re: [HACKERS] Proposal : REINDEX xxx VERBOSE
On Sat, May 9, 2015 at 4:26 AM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: On Fri, May 8, 2015 at 4:23 PM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: On Thu, May 7, 2015 at 7:55 PM, Sawada Masahiko sawada.m...@gmail.com wrote: On 5/7/15, Sawada Masahiko sawada.m...@gmail.com wrote: On Wed, May 6, 2015 at 5:42 AM, Robert Haas robertmh...@gmail.com wrote: On Tue, May 5, 2015 at 11:10 AM, Sawada Masahiko sawada.m...@gmail.com wrote: On Fri, May 1, 2015 at 9:04 PM, Robert Haas robertmh...@gmail.com wrote: On Thu, Apr 30, 2015 at 11:05 PM, Sawada Masahiko sawada.m...@gmail.com wrote: VACUUM has both syntaxes: with parentheses and without parentheses. I think we should have both syntaxes for REINDEX like VACUUM does, because it would be a pain to put parentheses whenever we want to do REINDEX. Are the parentheses optional in the REINDEX command? No. The unparenthesized VACUUM syntax was added back before we realized that that kind of syntax is a terrible idea. It requires every option to be a keyword, and those keywords have to be in a fixed order. I believe the intention is to keep the old VACUUM syntax around for backward-compatibility, but not to extend it. Same for EXPLAIN and COPY. REINDEX will have only one option, VERBOSE, for now. Even in a situation like that, where it's not clear whether a new additional option will be added to REINDEX, do we need to put parentheses? In my opinion, yes. The whole point of a flexible options syntax is that we can add new options without changing the grammar. That involves some compromise on the syntax, which doesn't bother me a bit. Our previous experiments with this for EXPLAIN and COPY and VACUUM have worked out quite well, and I see no reason for pessimism here. I agree that flexible option syntax does not need to change the grammar whenever we add new options. The attached patch is changed based on your suggestion. And the patch for reindexdb is also attached.
Please feedbacks. Also I'm not sure that both implementation and documentation regarding VERBOSE option should be optional. I don't know what this means. Sorry for confusing you. Please ignore this. Sorry, I forgot attach files. I applied the two patches to master and I got some errors when compile: tab-complete.c: In function ‘psql_completion’: tab-complete.c:3338:12: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:21: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:31: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:41: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:53: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:5: warning: statement with no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:59: error: expected ‘;’ before ‘}’ token {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3340:22: error: ‘list_REINDEX’ undeclared (first use in this function) COMPLETE_WITH_LIST(list_REINDEX); ^ tab-complete.c:169:22: note: in definition of macro ‘COMPLETE_WITH_LIST’ completion_charpp = list; \ ^ tab-complete.c:3340:22: note: each undeclared identifier is reported only once for each function it appears in COMPLETE_WITH_LIST(list_REINDEX); ^ tab-complete.c:169:22: note: in definition of macro ‘COMPLETE_WITH_LIST’ completion_charpp = list; \ ^ make[3]: *** [tab-complete.o] Error 1 make[3]: *** Waiting for unfinished jobs make[2]: *** [install-psql-recurse] Error 2 make[2]: *** Waiting for unfinished jobs make[1]: *** 
[install-bin-recurse] Error 2 make: *** [install-src-recurse] Error 2 Looking at the code I think you removed one line accidentally from tab-complete.c: $ git diff src/bin/psql/tab-complete.c diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 750e29d..55b0df5 100644 --- a/src/bin/psql/tab-complete.c +++ b/src/bin/psql/tab-complete.c @@ -3335,7 +3335,6 @@ psql_completion(const char *text, int
Re: [HACKERS] Proposal : REINDEX xxx VERBOSE
On Sun, May 10, 2015 at 2:23 AM, Sawada Masahiko sawada.m...@gmail.com wrote: On Sat, May 9, 2015 at 4:26 AM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: On Fri, May 8, 2015 at 4:23 PM, Fabrízio de Royes Mello fabriziome...@gmail.com wrote: On Thu, May 7, 2015 at 7:55 PM, Sawada Masahiko sawada.m...@gmail.com wrote: On 5/7/15, Sawada Masahiko sawada.m...@gmail.com wrote: On Wed, May 6, 2015 at 5:42 AM, Robert Haas robertmh...@gmail.com javascript:; wrote: On Tue, May 5, 2015 at 11:10 AM, Sawada Masahiko sawada.m...@gmail.com javascript:; wrote: On Fri, May 1, 2015 at 9:04 PM, Robert Haas robertmh...@gmail.com javascript:; wrote: On Thu, Apr 30, 2015 at 11:05 PM, Sawada Masahiko sawada.m...@gmail.com javascript:; wrote: VACUUM has both syntax: with parentheses and without parentheses. I think we should have both syntax for REINDEX like VACUUM does because it would be pain to put parentheses whenever we want to do REINDEX. Are the parentheses optional in REINDEX command? No. The unparenthesized VACUUM syntax was added back before we realized that that kind of syntax is a terrible idea. It requires every option to be a keyword, and those keywords have to be in a fixed order. I believe the intention is to keep the old VACUUM syntax around for backward-compatibility, but not to extend it. Same for EXPLAIN and COPY. REINDEX will have only one option VERBOSE for now. Even we're in a situation like that it's not clear to be added newly additional option to REINDEX now, we should need to put parenthesis? In my opinion, yes. The whole point of a flexible options syntax is that we can add new options without changing the grammar. That involves some compromise on the syntax, which doesn't bother me a bit. Our previous experiments with this for EXPLAIN and COPY and VACUUM have worked out quite well, and I see no reason for pessimism here. I agree that flexible option syntax does not need to change grammar whenever we add new options. 
Attached patch is changed based on your suggestion. And the patch for reindexdb is also attached. Please feedbacks. Also I'm not sure that both implementation and documentation regarding VERBOSE option should be optional. I don't know what this means. Sorry for confusing you. Please ignore this. Sorry, I forgot attach files. I applied the two patches to master and I got some errors when compile: tab-complete.c: In function ‘psql_completion’: tab-complete.c:3338:12: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:21: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:31: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:41: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:53: warning: left-hand operand of comma expression has no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:5: warning: statement with no effect [-Wunused-value] {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3338:59: error: expected ‘;’ before ‘}’ token {TABLE, INDEX, SYSTEM, SCHEMA, DATABASE, NULL}; ^ tab-complete.c:3340:22: error: ‘list_REINDEX’ undeclared (first use in this function) COMPLETE_WITH_LIST(list_REINDEX); ^ tab-complete.c:169:22: note: in definition of macro ‘COMPLETE_WITH_LIST’ completion_charpp = list; \ ^ tab-complete.c:3340:22: note: each undeclared identifier is reported only once for each function it appears in COMPLETE_WITH_LIST(list_REINDEX); ^ tab-complete.c:169:22: note: in definition of macro ‘COMPLETE_WITH_LIST’ completion_charpp = list; \ ^ make[3]: *** [tab-complete.o] Error 1 make[3]: *** Waiting for unfinished jobs 
make[2]: *** [install-psql-recurse] Error 2 make[2]: *** Waiting for unfinished jobs make[1]: *** [install-bin-recurse] Error 2 make: *** [install-src-recurse] Error 2 Looking at the code I think you remove one line accidentally from tab-complete.c: $ git diff src/bin/psql/tab-complete.c diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c index 750e29d..55b0df5 100644 --- a/src/bin/psql/tab-complete.c +++
[HACKERS] Typo in reindexdb documentation
Hi all, The attached patch fixes a typo in the reindexdb documentation regarding REINDEX SCHEMA. Regards, --- Sawada Masahiko diff --git a/doc/src/sgml/ref/reindexdb.sgml b/doc/src/sgml/ref/reindexdb.sgml index b5b449c..a745f6c 100644 --- a/doc/src/sgml/ref/reindexdb.sgml +++ b/doc/src/sgml/ref/reindexdb.sgml @@ -30,7 +30,7 @@ PostgreSQL documentation arg choice=plainoption--schema/option/arg arg choice=plainoption-S/option/arg /group - replaceabletable/replaceable + replaceableschema/replaceable /arg /arg
Re: [HACKERS] Typo in reindexdb documentation
Sawada, * Sawada Masahiko (sawada.m...@gmail.com) wrote: The attached patch fixes a typo in the reindexdb documentation regarding REINDEX SCHEMA. Fix pushed, thanks! Stephen