Re: [HACKERS] Range Type constructors
On Wed, 2011-02-09 at 15:39 +0900, Itagaki Takahiro wrote:
> On Wed, Feb 9, 2011 at 14:50, Jeff Davis wrote:
> > 1.
> > The obvious constructor would be:
> >   range(1, 10)
> > But is that [1, 10), (1, 10], (1, 10), or [1, 10]? We need to support
> > all 4, and it's not obvious how to do that easily.
>
> There is the same issue in table partitioning. Also, we might use the
> syntax for our partitioning in the future. Just for reference,
> DB2 uses EXCLUSIVE and INCLUSIVE keywords to specify boundaries.
>
>   CREATE TABLE ... PARTITION BY RANGE (...)
>     (STARTING 0 EXCLUSIVE ENDING 100 INCLUSIVE)

Interesting. It needs to be usable in normal expressions, however, so it may require some adaptation. That's how arrays do it: there's a special Expr node that represents an array expression. Maybe the same thing could be used for range types, but I fear that there may be some grammar conflicts. I doubt we'd want to fully reserve the keyword "range".

Regards,
Jeff Davis

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SQL/MED - file_fdw
On Wed, Feb 09, 2011 at 02:55:26PM +0900, Itagaki Takahiro wrote:
> On Wed, Feb 9, 2011 at 03:49, Noah Misch wrote:
> > The code still violates the contract of ExecEvalExpr() by calling it with
> > CurrentMemoryContext != econtext->ecxt_per_tuple_memory.
>
> I'm not sure whether we have such contract because the caller might
> want to get the expression result in long-lived context.

execQual.c has this comment:

 /*
  * ExecEvalExpr routines ...
  * The caller should already have switched into the temporary memory
  * context econtext->ecxt_per_tuple_memory. The convenience entry point
  * ExecEvalExprSwitchContext() is provided for callers who don't prefer to
  * do the switch in an outer loop. We do not do the switch in these routines
  * because it'd be a waste of cycles during nested expression evaluation.
  */

Assuming that comment is accurate, ExecEvalExpr constrains us; any default values must get allocated in econtext->ecxt_per_tuple_memory. To get them in a long-lived allocation, the caller can copy the datums or supply a long-lived ExprContext. Any CurrentMemoryContext works when econtext == NULL, of course.

I'd suggest enhancing your new paragraph in the NextCopyFrom() header comment like this:

- * 'econtext' is used to evaluate default expression for each columns not
- * read from the file. It can be NULL when no default values are used, i.e.
- * when all columns are read from the file.
+ * 'econtext' is used to evaluate default expression for each columns not read
+ * from the file. It can be NULL when no default values are used, i.e. when all
+ * columns are read from the file. If econtext is not NULL, the caller must have
+ * switched into the temporary memory context econtext->ecxt_per_tuple_memory.

> But anyway
> using an external ExprContext looks cleaner. The new prototype is:
>
>   bool NextCopyFrom(
>     [IN] CopyState cstate, ExprContext *econtext,
>     [OUT] Datum *values, bool *nulls, Oid *tupleOid)

Looks good.
> Note that econtext can be NULL if no default values are used.
> Since file_fdw won't use default values, we can just pass NULL for it.

Nice. Good thinking. One small point:

> --- 2475,2504 ----
>      * provided by the input data. Anything not processed here or above
>      * will remain NULL.
>      */
> +   /* XXX: End of only-indentation changes. */
> +   if (num_defaults > 0)
> +   {
> +       Assert(econtext != NULL);
> +
>       for (i = 0; i < num_defaults; i++)

This could be better written as "Assert(num_defaults == 0 || econtext != NULL);".

From a functional and code structure perspective, I find this ready to commit. (I assume you'll drop the XXX: indent-only comments on commit.) Kevin, did you want to do that performance testing you spoke of?

Thanks,
nm
Re: [HACKERS] Range Type constructors
On Wed, Feb 9, 2011 at 14:50, Jeff Davis wrote:
> 1.
> The obvious constructor would be:
>   range(1, 10)
> But is that [1, 10), (1, 10], (1, 10), or [1, 10]? We need to support
> all 4, and it's not obvious how to do that easily.

There is the same issue in table partitioning. Also, we might use the syntax for our partitioning in the future. Just for reference, DB2 uses EXCLUSIVE and INCLUSIVE keywords to specify boundaries.

  CREATE TABLE ... PARTITION BY RANGE (...)
    (STARTING 0 EXCLUSIVE ENDING 100 INCLUSIVE)

http://publib.boulder.ibm.com/infocenter/db2luw/v9r8/index.jsp?topic=/com.ibm.db2.luw.sql.ref.doc/doc/r927.html

I'm not sure it is the best syntax, but at least it's easy to read for beginners and works with parenthesis completion in text editors.

--
Itagaki Takahiro
[HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,
On Wed, Feb 9, 2011 at 2:02 PM, Simon Riggs wrote:
>> Why did you change the default to on? This would surprise people who are
>> used to PITR.
>
> You pointed out that the code did not match the documented default. So I
> made them match according to the docs.

Well, I meant changing the docs rather than the code.

> Making it pause at target by default is more natural behaviour, even if
> it is a change of behaviour. It can waste a lot of time if it leaves
> recovery at the wrong point so I don't see the change as a bad one? Only
> PITR is affected, not replication or standalone operation.

I agree that the new option is useful to reduce the wasted time you describe. But I'm still not sure that changing the default behavior is better, because I can easily imagine a user confused by the pause of PITR when he starts it the way he did in a previous version. It would take him some time to learn what to do in that situation (i.e., execute pg_xlog_replay_resume).

On second thought, I think it's useful to emit a NOTICE message when recovery reaches the pause point, as follows:

    NOTICE: Recovery will not complete until pg_xlog_replay_resume() is called.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
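For context, the PITR setup under discussion might look roughly like this (a hedged sketch of a recovery.conf; restore_command and recovery_target_time are standard recovery parameters, pause_at_recovery_target is the new parameter from the commit, and the paths/timestamps are made up):

```
# recovery.conf -- illustrative sketch only
restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2011-02-09 12:00:00'
pause_at_recovery_target = 'on'   # the new default being debated
```

With the new default, recovery stops at the target and waits for pg_xlog_replay_resume() instead of completing as earlier releases did.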
Re: [HACKERS] Named restore points
On Wed, Feb 9, 2011 at 4:53 AM, Simon Riggs wrote:
> Committed. Thanks for the patch and the review.

- * We also track the timestamp of the latest applied COMMIT/ABORT record
- * in XLogCtl->recoveryLastXTime, for logging purposes.
+ * We also track the timestamp of the latest applied COMMIT/ABORT/RESTORE POINT
+ * record in XLogCtl->recoveryLastXTime, for logging purposes.

Tracking the timestamp of the restore point record in recoveryLastXTime messes up pg_last_xact_replay_timestamp, which uses recoveryLastXTime: the function wrongly returns the restore point's timestamp as that of the latest transaction. As far as I read the patch, I don't think it's necessary to track the timestamp of the restore point, so the attached patch changes the code to stop doing that.

+ if (strlen(restore_name_str) >= MAXFNAMELEN)
+     ereport(ERROR,
+             (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+              errmsg("value too long for restore point")));

I think that logging the maximum length of the name is useful, as follows:

    ERROR: value too long for restore point (max 63 characters)

So the attached patch also changes the log message that way.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

restore_name_timestamp_v1.patch
Description: Binary data
Re: [HACKERS] REVIEW Range Types
On Tue, 2011-02-08 at 20:43 +0100, Erik Rijkers wrote:
> --
> Documentation:
> --
>
> Section 9.18:
> table 9-42. range functions:
> The following functions are missing (I encountered them in the regression tests):
>   contained_by()
>   range_eq()

I opted not to document functions that mostly exist to implement operators. Some people might prefer the names, but then they don't benefit from GiST index searches. So, I left this as-is for now.

> section 'Constructing Ranges' (8.16.6):
> In the code example, remove the following line:
>   "-- the int4range result will appear in the canonical format"
> it doesn't make sense there. At this place "canonical format" has not been discussed;
> maybe it is not even discussed anywhere.

Thank you. I have added a forward reference.

> also (same place):
>   'where "_" is used to mean "exclusive" and "" is used to mean "inclusive".'
> should be:
>   'where "_" is used to mean "exclusive" and "i" is used to mean "inclusive".'

Thank you, fixed.

> And btw: it should mention here that the range*inf* functions use
> an underscore to separate 'range' from the rest of the function name, e.g.:
>   range_linfi_() => infinite lower bound, inclusive upper bound

I tried to clarify that section.

Thank you for the review! Updated patch forthcoming.

Regards,
Jeff Davis
[HACKERS] Range Type constructors
There are two issues I'd like to discuss related to constructing range types from other values.

1. The obvious constructor would be:

    range(1, 10)

But is that [1, 10), (1, 10], (1, 10), or [1, 10]? We need to support all 4, and it's not obvious how to do that easily. The solution that I came up with is not particularly clean, but is quite practical:

    range(1, 10)   -> [1, 10)
    range__(1, 10) -> (1, 10)
    range_i(1, 10) -> (1, 10]
    rangei_(1, 10) -> [1, 10)
    rangeii(1, 10) -> [1, 10]

The last two letters refer to the lower and upper bounds, respectively. An "i" means "inclusive" and an "_" means "exclusive". range() is an alias for rangei_(), because that's the most common representation to use. I realize this isn't a clean solution, and better ideas are welcome. This one actually is quite natural to use, I think: short to type and easy to remember (for me at least ;).

It gets a little stranger when trying to construct unbounded ranges from other values. Again, there are four possibilities:

    range_uinfi(5) -> [5, INF)
    range_uinf_(5) -> (5, INF)
    range_linfi(5) -> (-INF, 5]
    range_linf_(5) -> (-INF, 5)

And again, not exactly clean, but they work. Constructing a singleton range is easy, fortunately, because only something like "[5,5]" makes sense; "[5,5)" doesn't. So there's just a single-argument version of range:

    range(5) -> [5,5]

2. The second issue is with the type system. In order for the polymorphic constructors to work, they need to be able to determine the data types of their inputs to construct the range. I am using get_fn_expr_argtype() to accomplish that, but it's not always guaranteed to work. That was the problem Erik ran into: the "range @> elem" operator was implicitly constructing a range on the right side based on the type of the right operand, but was being called in contexts where the types aren't known (like the selectivity estimator). The fix was easy: get the type from the range operand (which is actually stored with the range).
But that fix won't work for the constructors above, because there is no range argument to start from. So: in what contexts are functions called where get_fn_expr_argtype() might fail, and are the above constructors at risk of that? Is there a better way?

Regards,
Jeff Davis
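To make the naming convention above concrete: each trailing character selects the inclusivity of one bound. The following is a toy Python sketch (purely illustrative, not PostgreSQL code) of what the four bounded constructors would produce:

```python
class Range:
    """Toy model of a range value: bounds plus inclusivity flags."""
    def __init__(self, lower, upper, lower_inc, upper_inc):
        self.lower, self.upper = lower, upper
        self.lower_inc, self.upper_inc = lower_inc, upper_inc

    def __contains__(self, x):
        # x is inside iff it clears both bounds, honoring inclusivity
        above = x > self.lower or (self.lower_inc and x == self.lower)
        below = x < self.upper or (self.upper_inc and x == self.upper)
        return above and below

    def __repr__(self):
        return (("[" if self.lower_inc else "(")
                + f"{self.lower}, {self.upper}"
                + ("]" if self.upper_inc else ")"))

# The proposed constructors: the suffix gives (lower, upper) inclusivity,
# "i" = inclusive, "_" = exclusive; range() would alias rangei_().
def range__(lo, hi): return Range(lo, hi, False, False)
def range_i(lo, hi): return Range(lo, hi, False, True)
def rangei_(lo, hi): return Range(lo, hi, True, False)
def rangeii(lo, hi): return Range(lo, hi, True, True)

print(rangei_(1, 10))        # [1, 10)
print(1 in rangei_(1, 10))   # True
print(10 in rangei_(1, 10))  # False
```

The unbounded variants (range_uinfi etc.) would follow the same pattern with one bound replaced by infinity.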
Re: [HACKERS] Range Types
On Tue, 2011-02-08 at 09:10 -0800, Jeff Davis wrote:
> On Mon, 2011-02-07 at 18:23 +0100, Dimitri Fontaine wrote:
> > I would think
> >
> >   CREATE TYPE foo AS RANGE (bar) USING (btree_ops);
> >
> > The USING clause is optional, because you generally have a default btree
> > opclass for the datatype.
>
> There are other options, like "CANONICAL", so where do those fit?
>
> If CREATE TYPE already has an options list, it seems a little strange to
> add grammar to support this feature. "USING" doesn't seem to mean a lot,
> except that we happen to use it in other contexts to mean "operator
> class".

For the user-facing part, how about just passing it as a parameter called "SUBTYPE_OPCLASS"? It sounds a little on the "internal detail" side, but so do some other type definition parameters.

As for the catalog, I'm inclined to leave the compare function in there directly and just add a dependency on the opclass. That way, it's only one syscache lookup rather than two to get the compare function OID. Then again, perhaps that doesn't matter anyway.

Thoughts?

Regards,
Jeff Davis
Re: [HACKERS] SSI patch version 14
On Tue, Feb 08, 2011 at 09:09:48PM -0500, Robert Haas wrote:
> No way to fail is a tall order.

Well, no way to fail due to running out of shared memory in RegisterPredicateLock/RegisterPredicateLockingXid, but that doesn't have quite the same ring to it...

> If we don't allocate all the memory up front, does that allow memory
> to be dynamically shared between different hash tables in shared
> memory? I'm thinking not, but...

Not in a useful way. If we only allocate some of the memory up front, then the rest goes into the global shmem pool (actually, that has nothing to do with the hash table per se, just the ShmemSize calculations), and it's up for grabs for any hash table that wants to expand, even beyond its declared maximum capacity. But once it's claimed by a hash table it can't get returned. This doesn't sound like a feature to me. In particular, I'd worry that something that allocates a lot of locks (either of the heavyweight or predicate variety) would fill up the associated hash table, and then we're out of shared memory for the other hash tables -- and have no way to get it back short of restarting the whole system.

> Frankly, I think this is an example of how our current shared memory
> model is a piece of garbage. Our insistence on using sysv shm, and
> only sysv shm, is making it impossible for us to do things that other
> products can do easily. My first reaction to this whole discussion
> was "who gives a crap about 2MB of shared memory?" and then I said
> "oh, right, we do, because it might cause someone who was going to get
> 24MB of shared buffers to get 16MB instead, and then performance will
> suck even worse than it does already". But of course the person
> should really be running with 256MB or more, in all likelihood, and
> would be happy to have us do that right out of the box if it didn't
> require them to do tap-dance around their kernel settings and reboot.

I'm completely with you on this.

Dan

--
Dan R. K. Ports    MIT CSAIL    http://drkp.net/
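The one-way nature of these shared-memory claims can be modeled with a toy sketch (Python, purely illustrative; the class and table names are made up, not PostgreSQL internals): once a hash table claims slack from the pool, nothing ever returns it, so a later consumer can be starved until restart.

```python
class ShmemPool:
    """Toy model of the shared-memory slack pool described above:
    hash tables may claim spare memory beyond their declared maximum,
    but there is no way to give it back short of a restart."""
    def __init__(self, total_slack):
        self.free = total_slack
        self.claimed = {}

    def claim(self, table, nbytes):
        if nbytes > self.free:
            raise MemoryError(f"out of shared memory expanding {table}")
        self.free -= nbytes
        self.claimed[table] = self.claimed.get(table, 0) + nbytes
    # Note: deliberately no release() method -- claims are permanent.

pool = ShmemPool(total_slack=100)
pool.claim("PredicateLockHash", 90)   # one table grabs most of the slack
try:
    pool.claim("LockMgrHash", 20)     # another table is now starved
except MemoryError as e:
    print(e)
```

Allocating each table at its declared maximum up front avoids this failure mode at the cost of never sharing the slack at all, which is the trade-off being discussed.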
[HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,
On Wed, 2011-02-09 at 12:50 +0900, Fujii Masao wrote:
> On Wed, Feb 9, 2011 at 3:30 AM, Simon Riggs wrote:
> > Basic Recovery Control functions for use in Hot Standby. Pause, Resume,
> > Status check functions only. Also, new recovery.conf parameter to
> > pause_at_recovery_target, default on.
>
> Why did you change the default to on? This would surprise people who are
> used to PITR.

You pointed out that the code did not match the documented default, so I made them match according to the docs.

Making it pause at target by default is more natural behaviour, even if it is a change of behaviour. It can waste a lot of time if it leaves recovery at the wrong point, so I don't see the change as a bad one? Only PITR is affected, not replication or standalone operation.

--
Simon Riggs
http://www.2ndQuadrant.com/books/
PostgreSQL Development, 24x7 Support, Training and Services
Re: [HACKERS] pg_dump directory archive format / parallel pg_dump
On Tue, Feb 8, 2011 at 8:31 PM, Itagaki Takahiro wrote:
> On Tue, Feb 8, 2011 at 13:34, Robert Haas wrote:
> > So how close are we to having a committable version of this? Should
> > we push this out to 9.2?
>
> I think so. The feature is pretty attractive, but more work is required:
> * Re-base on the synchronized snapshots patch
> * Consider using a pipe on Windows as well.
> * Research the libpq + fork() issue. We have a warning in the docs:
>   http://developer.postgresql.org/pgdocs/postgres/libpq-connect.html
>   | On Unix, forking a process with open libpq connections can lead to
>   | unpredictable results

Just for the record, once the sync snapshot patch is committed, there is no need for fancy libpq + fork() combinations anyway. Unfortunately, so far no committer has commented on the synchronized snapshot patch at all.

I am not fighting to get parallel pg_dump done in 9.1, as I don't really have a personal use case for the patch. However, it would be the irony of the year if we shipped 9.1 with a synchronized snapshot patch but no parallel dump :-)

Joachim
Re: [HACKERS] pl/python do not delete function arguments
2010/12/31 Jan Urbański:
> (continuing the flurry of patches)
>
> Here's a patch that stops PL/Python from removing the function's
> arguments from its globals dict after calling it. It's
> an incremental patch on top of the plpython-refactor patch sent in
> http://archives.postgresql.org/message-id/4d135170.3080...@wulczer.org.
>
> Git branch for this patch:
> https://github.com/wulczer/postgres/tree/dont-remove-arguments
>
> Apart from being useless, as the whole dict is unreffed and thus freed
> in PLy_procedure_delete, removing args actively breaks things for
> recursive invocation of the same function. The recursive callee, after
> returning, will remove the args from globals, and subsequent access to
> the arguments in the caller will cause a NameError (see new regression
> test in patch).

I've reviewed this. The patch is old enough to be rejected by the patch command, but I managed to apply it by hand. It compiles cleanly. The added tests pass. I created a fibonacci function similar to recursion_test in the patch and confirmed that the recursion raises an error on 9.0 but not on 9.1. No doc change comes with the patch, since it removes an unnecessary internal optimization.

"Ready for Committer"

Regards,

--
Hitoshi Harada
[HACKERS] Re: [COMMITTERS] pgsql: Basic Recovery Control functions for use in Hot Standby. Pause,
On Wed, Feb 9, 2011 at 3:30 AM, Simon Riggs wrote:
> Basic Recovery Control functions for use in Hot Standby. Pause, Resume,
> Status check functions only. Also, new recovery.conf parameter to
> pause_at_recovery_target, default on.

Why did you change the default to on? This would surprise people who are used to PITR.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
Re: [HACKERS] Extensions versus pg_upgrade
Robert Haas writes:
> On Tue, Feb 8, 2011 at 10:25 PM, Tom Lane wrote:
>> My point is that the current restriction to just one containing
>> extension seems to me to be an implementation restriction, rather than
>> something inherent in the concept of extensions. I have no intention of
>> trying to relax that restriction in the near future --- I'm just
>> pointing out that it could become an interesting thing to do.

> OK. My point was that I think we should definitely *enforce* that
> restriction until we have a very clear vision of what it means to do
> anything else, so it sounds like we're basically in agreement.

Oh, for certain. I've been busy revising Dimitri's patch to use the get_object_address infrastructure, and here's what I've got for the actual implementation as distinct from syntax:

+ /*
+  * Execute ALTER THING SET EXTENSION
+  */
+ void
+ ExecAlterObjectExtensionStmt(AlterObjectExtensionStmt *stmt)
+ {
+     ObjectAddress object;
+     ObjectAddress extension;
+     Relation      relation;
+
+     /*
+      * For now, insist on superuser privilege. Later we might want to
+      * relax this to ownership of the target object and the extension.
+      */
+     if (!superuser())
+         ereport(ERROR,
+                 (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
+                  errmsg("must be superuser to use ALTER SET EXTENSION")));
+
+     /*
+      * Translate the parser representation that identifies this object into
+      * an ObjectAddress. get_object_address() will throw an error if the
+      * object does not exist, and will also acquire a lock on the target
+      * to guard against concurrent DROP and SET EXTENSION operations.
+      */
+     object = get_object_address(stmt->objtype, stmt->objname, stmt->objargs,
+                                 &relation, ShareUpdateExclusiveLock);
+
+     /*
+      * Complain if object is already attached to some extension.
+      */
+     if (getExtensionOfObject(object.classId, object.objectId) != InvalidOid)
+         ereport(ERROR,
+                 (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+                  errmsg("%s is already a member of an extension",
+                         getObjectDescription(&object))));
+
+     /*
+      * OK, add the dependency.
+      */
+     extension.classId = ExtensionRelationId;
+     extension.objectId = get_extension_oid(stmt->extname, false);
+     extension.objectSubId = 0;
+
+     recordDependencyOn(&object, &extension, DEPENDENCY_EXTENSION);
+
+     /*
+      * If get_object_address() opened the relation for us, we close it to keep
+      * the reference count correct - but we retain any locks acquired by
+      * get_object_address() until commit time, to guard against concurrent
+      * activity.
+      */
+     if (relation != NULL)
+         relation_close(relation, NoLock);
+ }

regards, tom lane
Re: [HACKERS] Extensions versus pg_upgrade
On Tue, Feb 8, 2011 at 10:25 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Tue, Feb 8, 2011 at 9:48 PM, Tom Lane wrote:
>>> Like ALTER THING SET SCHEMA, ALTER THING SET EXTENSION is implicitly
>>> assuming that there can be only one owning extension for an object.
>
>> I would assume that we would enforce that constraint anyway. No?
>> Otherwise when you drop one of the two extensions, what happens to the
>> object? Seems necessary for sanity.
>
> Not sure --- what about nested extensions, for instance? Or you could
> think about objects that are shared between two extensions, and go away
> only if all those extensions are dropped. (RPM has exactly that
> behavior for files owned by multiple packages, to take a handy example.)
>
> My point is that the current restriction to just one containing
> extension seems to me to be an implementation restriction, rather than
> something inherent in the concept of extensions. I have no intention of
> trying to relax that restriction in the near future --- I'm just
> pointing out that it could become an interesting thing to do.

OK. My point was that I think we should definitely *enforce* that restriction until we have a very clear vision of what it means to do anything else, so it sounds like we're basically in agreement.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Extensions versus pg_upgrade
Robert Haas writes:
> On Tue, Feb 8, 2011 at 9:48 PM, Tom Lane wrote:
>> Like ALTER THING SET SCHEMA, ALTER THING SET EXTENSION is implicitly
>> assuming that there can be only one owning extension for an object.

> I would assume that we would enforce that constraint anyway. No?
> Otherwise when you drop one of the two extensions, what happens to the
> object? Seems necessary for sanity.

Not sure --- what about nested extensions, for instance? Or you could think about objects that are shared between two extensions, and go away only if all those extensions are dropped. (RPM has exactly that behavior for files owned by multiple packages, to take a handy example.)

My point is that the current restriction to just one containing extension seems to me to be an implementation restriction, rather than something inherent in the concept of extensions. I have no intention of trying to relax that restriction in the near future --- I'm just pointing out that it could become an interesting thing to do.

regards, tom lane
Re: [HACKERS] REVIEW: PL/Python table functions
2011/2/9 Jan Urbański:
> I hope this version does the right thing, while still avoiding the
> performance hit of looking up I/O funcs every time a row is returned.
> Actually, PL/Perl *does* look up the I/O funcs every time, so in the
> worst case I can just drop this optimisation. But let's hope I got it
> right this time :)

I tested it on the issue above and on things around triggers, and it looked good to me. I'm not sure I understand the code overall, but the modifications I was unclear on seem covered by the regression tests. I mark this "Ready for Committer."

Regards,

--
Hitoshi Harada
Re: [HACKERS] Extensions versus pg_upgrade
On Tue, Feb 8, 2011 at 9:48 PM, Tom Lane wrote:
> Robert Haas writes:
>> ... And you could even allow multiple objects:
>>   ALTER EXTENSION extension_name ADD object-description [, ...];
>> Which might be handy.
>
> I just thought of a different way of coming at the question, which might
> help us make a choice.
>
> Like ALTER THING SET SCHEMA, ALTER THING SET EXTENSION is implicitly
> assuming that there can be only one owning extension for an object.

I would assume that we would enforce that constraint anyway. No? Otherwise when you drop one of the two extensions, what happens to the object? Seems necessary for sanity.

> Furthermore, it's not really intended for *removal* of an object from an
> extension (a concept that doesn't even exist for SET SCHEMA). We could
> take a page from COMMENT ON and use "SET EXTENSION NULL" for that, but
> that's surely more of a hack than anything else.

True.

> In contrast, ALTER EXTENSION ADD doesn't presuppose that you couldn't
> add the object to multiple extensions; and it has a natural inverse,
> ALTER EXTENSION DROP. I am not necessarily suggesting that we will ever
> allow either of those things, but I do suggest that we should pick a
> syntax that doesn't look like it's being forced to conform if we ever
> want to do it. The DROP case at least seems like it might be wanted
> in the relatively near future.

Yep.

> So that looks to me like a fairly good argument for the ADD syntax.

OK by me. There's also the operator class stuff, as a parallel.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Extensions versus pg_upgrade
Robert Haas writes:
> ... And you could even allow multiple objects:
>   ALTER EXTENSION extension_name ADD object-description [, ...];
> Which might be handy.

I just thought of a different way of coming at the question, which might help us make a choice.

Like ALTER THING SET SCHEMA, ALTER THING SET EXTENSION is implicitly assuming that there can be only one owning extension for an object. Furthermore, it's not really intended for *removal* of an object from an extension (a concept that doesn't even exist for SET SCHEMA). We could take a page from COMMENT ON and use "SET EXTENSION NULL" for that, but that's surely more of a hack than anything else.

In contrast, ALTER EXTENSION ADD doesn't presuppose that you couldn't add the object to multiple extensions; and it has a natural inverse, ALTER EXTENSION DROP. I am not necessarily suggesting that we will ever allow either of those things, but I do suggest that we should pick a syntax that doesn't look like it's being forced to conform if we ever want to do it. The DROP case at least seems like it might be wanted in the relatively near future.

So that looks to me like a fairly good argument for the ADD syntax.

regards, tom lane
Re: [HACKERS] Extensions versus pg_upgrade
Robert Haas writes:
> On Tue, Feb 8, 2011 at 6:54 PM, Tom Lane wrote:
>> Having now looked at it a bit closer, I think the syntax choice is a
>> complete wash from an implementation standpoint: either way, we'll have
>> a list of bison productions that build AlterObjectExtensionStmt nodes,
>> and it goes through the same way after that.
>>
>> Preferences anyone?

> The closest extant parallel is probably:
>   ALTER SEQUENCE foo OWNED BY bar;
> I think paralleling that would probably be the most SQL-ish thing to
> do, but I can't get excited about it.

ALTER SET SCHEMA is a relatively near match as well, I think, from a semantic standpoint.

> The ALTER EXTENSION syntax will
> be a lot more self-contained, with all of it one part of the grammar
> and one part of the documentation. And you could even allow multiple
> objects:
>   ALTER EXTENSION extension_name ADD object-description [, ...];
> Which might be handy.

Hmm, that's an interesting thought. It'd require rather more refactoring of the grammar and the parsetree representation than I care to get into right now, but that could be a foreseeable extension in future. On the other hand, it's not *that* exciting, and if multi-ADD isn't supported in the very first version then probably nobody will ever want to rely on it in extension scripts.

I'm still finding that the "document in one place" angle is the most compelling argument to me.

regards, tom lane
Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries
On Tue, Feb 8, 2011 at 8:30 PM, Andrew Dunstan wrote:
> Quite right, but the commitfest manager isn't meant to be a substitute for
> one. Bug fixes aren't subject to the same restrictions as feature changes.

Another option would be to add this here:

http://wiki.postgresql.org/wiki/PostgreSQL_9.1_Open_Items

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] SSI patch version 14
On Tue, Feb 8, 2011 at 7:23 PM, Dan Ports wrote:
> On Tue, Feb 08, 2011 at 04:04:39PM -0600, Kevin Grittner wrote:
>> (2) The predicate lock and lock target initialization code was
>> initially copied and modified from the code for heavyweight locks.
>> The heavyweight lock code adds 10% to the calculated maximum size.
>> So I wound up doing that for PredicateLockTargetHash and
>> PredicateLockHash, but didn't do it for SerializableXidHash.
>> Should I eliminate this from the first two, add it to the third, or
>> leave it alone?
>
> Actually, I think for SerializableXidHash we should probably just
> initially allocate it at its maximum size. Then it'll match the
> PredXact list which is allocated in full upfront, and there's no risk
> of being able to allocate a transaction but not register its xid. In
> fact, I believe there would be no way for starting a new serializable
> transaction to fail.

No way to fail is a tall order.

If we don't allocate all the memory up front, does that allow memory to be dynamically shared between different hash tables in shared memory? I'm thinking not, but...

Frankly, I think this is an example of how our current shared memory model is a piece of garbage. Our insistence on using sysv shm, and only sysv shm, is making it impossible for us to do things that other products can do easily. My first reaction to this whole discussion was "who gives a crap about 2MB of shared memory?" and then I said "oh, right, we do, because it might cause someone who was going to get 24MB of shared buffers to get 16MB instead, and then performance will suck even worse than it does already". But of course the person should really be running with 256MB or more, in all likelihood, and would be happy to have us do that right out of the box if it didn't require them to tap-dance around their kernel settings and reboot. We really need to fix this.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Extensions versus pg_upgrade
On Tue, Feb 8, 2011 at 6:54 PM, Tom Lane wrote: > Dimitri Fontaine writes: >> Tom Lane writes: >>> (I was vaguely imagining that it could share most of the COMMENT >>> infrastructure --- but haven't looked yet). > >> Well the code footprint is quite small already. > > Having now looked at it a bit closer, I think the syntax choice is a > complete wash from an implementation standpoint: either way, we'll have > a list of bison productions that build AlterObjectExtensionStmt nodes, > and it goes through the same way after that. I do think that the > implementation will be a lot more compact if it relies on the COMMENT > infrastructure (ie, get_object_address), but that's an independent > choice. > > So really it boils down to which syntax seems more natural and/or easier > to document. As I said, I think a centralized ALTER EXTENSION syntax > has some advantages from the documentation standpoint; but that's not a > terribly strong argument, especially given that Dimitri has already done > a patch to document things the other way. > > Preferences anyone? The closest extant parallel is probably: ALTER SEQUENCE foo OWNED BY bar; I think paralleling that would probably be the most SQL-ish thing to do, but I can't get excited about it. The ALTER EXTENSION syntax will be a lot more self-contained, with all of it in one part of the grammar and one part of the documentation. And you could even allow multiple objects: ALTER EXTENSION extension_name ADD object-description [, ...]; Which might be handy. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Error attribution in foreign scans
On Mon, Feb 7, 2011 at 22:47, Heikki Linnakangas wrote: > On Mon, Feb 7, 2011 at 21:17, Noah Misch wrote: >> The message does not show which foreign table yielded the error. We could >> evade >> the problem in this case by adding a file name to the error message in the >> COPY >> code, > Yeah, an error context callback like that makes sense. In the case of the > file FDW, though, just including the filename in the error message seems > even better. Especially if the error is directly related to failure in > reading the file. What do you think about filenames in terms of security? We will allow non-superusers to use existing foreign tables of file_fdw. For reference, we hide some path settings in GUC variables. We should also reconsider the privileges on fdwoptions, umoptions, etc. They could contain passwords or server-side paths, but all users can retrieve the values. It's an existing issue, but it will be more serious in 9.1. -- Itagaki Takahiro -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] arrays as pl/perl input arguments [PATCH]
On Tue, Feb 8, 2011 at 10:33, Alex Hunsaker wrote: > On Tue, Feb 8, 2011 at 08:18, Alexey Klyukin wrote: >> >> On Feb 6, 2011, at 9:43 AM, Alex Hunsaker wrote: > >>> So here is a v6 >>> >>> Comments? >> >> Thanks, looks great to me. It passes all the tests on my OS X system. I >> wonder >> what's the purpose of the amagic_call in get_perl_array_ref, instead of >> calling newRV_noinc on the target SV * ? > > Err, even simpler would be to just access the 'array' member directly out of the hash object. Done as the above; an added bonus is we no longer have to SvREF_dec(), so it's even simpler. >> Also, in array_to_datum (line 1050), if get_perl_array_ref returns NULL for >> the first element of the corresponding dimension, but non-NULL for the second >> one - it would use uninitialized dims[cur_depth] value in comparison (which, >> although, would probably lead to the desired comparison result). > > Good catch, will fix. All of those checks should be outside of the while loop. I clearly needed more caffeine in my system when I wrote that. Partly due to the shadowed "av" variable. I've fixed it by initializing all of the dims to 0. I also renamed the shadowed av var to "nav" for nested av. > While I'm at it I'll also rebase against HEAD (I'm sure there will be > some conflicts with that other plperl patch that just went in ;)). So the merge, while not exactly trivial, was fairly simple. However it would be great if you could give it another look over. 
Find attached v7. Changes include:
- rebased against HEAD
- fixed potential use of uninitialized dims[cur_depth]
- took out the accidental (broken) hack to try to support composite types in ::encode_array_literal (added in v4 or so)
- make_array_ref() now uses plperl_hash_from_datum() for composite types instead of its own hand-rolled version
- get_perl_array_ref() now grabs the 'array' element directly instead of going through the magic interface, for simplicity
- moved the added static declarations to the "bottom" instead of being half on top and half on bottom
pg_to_perl_arrays_v7.patch.gz Description: GNU Zip compressed data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Revised patches to add table function support to PL/Tcl (TODO item)
On 02/07/2011 11:30 PM, Robert Haas wrote: On Tue, Dec 28, 2010 at 9:23 PM, Karl Lehenbauer wrote: On Dec 28, 2010, at 7:29 PM, Tom Lane wrote: This patch appears to be changing a whole lot of stuff that in fact pg_indent has never changed, so there's something wrong with the way you are doing it. It looks like a bad typedef list from here. You were right, Tom. The problem was that typedefs "pltcl_interp_desc", "pltcl_proc_key", and "pltcl_proc_ptr" weren't in src/tools/pgindent/typedefs.list. After adding them (and building and installing the netbsd-based, patched indent), pgindent only changes a handful of lines. pltcl-karl-try3-1-of-3-pgindent.patch patches typedefs.list with the three missing typedefs and pltcl.c with the small changes made by pgindent (it shifted some embedded comments left within their lines, mainly). As before, but "try3" now, pltcl-karl-try3-2-of-3-objects.patch converts pltcl.c to use the "Tcl objects" C API. And as before, but "try3" now, pltcl-karl-try3-3-of-3-setof.patch adds returning record and SETOF record. This patch did not get reviewed, because the person who originally planned to review it had a hardware failure that prevented him from doing so. Can anyone pick this up? I will have a look at it. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_dump directory archive format / parallel pg_dump
On Tue, Feb 8, 2011 at 13:34, Robert Haas wrote: > So how close are we to having a committable version of this? Should > we push this out to 9.2? I think so. The feature is pretty attractive, but more work is required:
* Re-base on the synchronized snapshots patch.
* Consider using a pipe on Windows as well.
* Research the libpq + fork() issue. We have a warning in the docs: http://developer.postgresql.org/pgdocs/postgres/libpq-connect.html | On Unix, forking a process with open libpq connections can lead to unpredictable results
-- Itagaki Takahiro -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries
On 02/08/2011 08:19 PM, Itagaki Takahiro wrote: On Wed, Feb 9, 2011 at 10:17, Andrew Dunstan wrote: Isn't this all really a bug fix that should be backpatched, rather than a commitfest item? Sure, but we don't have any bug trackers... Quite right, but the commitfest manager isn't meant to be a substitute for one. Bug fixes aren't subject to the same restrictions as feature changes. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries
On Wed, Feb 9, 2011 at 10:17, Andrew Dunstan wrote: > Isn't this all really a bug fix that should be backpatched, rather than a > commitfest item? Sure, but we don't have any bug trackers... -- Itagaki Takahiro -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [GENERAL] Issues with generate_series using integer boundaries
On 02/07/2011 06:38 AM, Thom Brown wrote:
On 7 February 2011 09:04, Itagaki Takahiro wrote:
On Fri, Feb 4, 2011 at 21:32, Thom Brown wrote:
The issue is that generate_series will not return if the series hits either the upper or lower boundary during an increment, or goes beyond it. The attached patch fixes this behaviour, but it should probably be done a better way.
The first 3 examples above will not return. There is the same bug in the int8 and timestamp[tz] versions. We also need fixes for them.
=# SELECT x FROM generate_series(9223372036854775807::int8, 9223372036854775807::int8) AS a(x);
Yes, of course, the int8 functions are separate. I attach an updated patch, although I still think there's a better way of doing this.
=# SELECT x FROM generate_series('infinity'::timestamp, 'infinity', '1 sec') AS a(x);
=# SELECT x FROM generate_series('infinity'::timestamptz, 'infinity', '1 sec') AS a(x);
I'm not sure how this should be handled. Should there just be a check for either kind of infinity, raising an error if found? I didn't find anything wrong with using timestamp boundaries:

postgres=# SELECT x FROM generate_series('1 Jan 4713 BC 00:00:00'::timestamp, '1 Jan 4713 BC 00:00:05'::timestamp, '1 sec') AS a(x);
           x
------------------------
 4713-01-01 00:00:00 BC
 4713-01-01 00:00:01 BC
 4713-01-01 00:00:02 BC
 4713-01-01 00:00:03 BC
 4713-01-01 00:00:04 BC
 4713-01-01 00:00:05 BC
(6 rows)

Although whether this demonstrates a true timestamp boundary, I'm not sure.
postgres=# SELECT x FROM generate_series(1, 9, -1) AS a(x);
postgres=# SELECT x FROM generate_series(1, 9, 3) AS a(x);
They work as expected in 9.1dev.
Those two were to demonstrate that the changes don't affect existing functionality. My previous patch proposal (v2) caused these to return unexpected output.
Isn't this all really a bug fix that should be backpatched, rather than a commitfest item? 
cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] postponing some large patches to 9.2
On Tue, 2011-02-08 at 14:25 -0500, Robert Haas wrote: > > The patch is a million little decisions: names, catalog structure, > > interface, representation, general usability, grammar, functionality, > > etc. Without some checkpoint, the chances that everyone agrees with all > > of these decisions at the beginning of the next commitfest is zero. > > > > Is the commitfest not the right place to do this? If not, then when? > > That's a fair question, and I do understand the difficulty. I think a > CommitFest is the right place to do that. On the other hand, as I'm > sure you realize, I'm not keen to hold up 9.1beta for a feature that > isn't going to be committed until 9.2. I'm not asking you to hold it up. Just don't mark it "returned with feedback" when that is not true, and a week still remains. Erik is still looking at it, and that might generate some interesting discussion. > ...everyone who has been thinking about doing something for the release > wakes up and submits it, often half-finished, often at the very last > minute. On the flip side, if we don't provide review to WIP patches during the 3rd commitfest, how do we expect to get anything close to committable on the 1st commitfest of the next cycle? > Although it doesn't > feel like it at the moment, we have actually made great strides in > absorbing large patches. I agree completely. Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI patch version 14
On Tue, Feb 08, 2011 at 04:04:39PM -0600, Kevin Grittner wrote: > (2) The predicate lock and lock target initialization code was > initially copied and modified from the code for heavyweight locks. > The heavyweight lock code adds 10% to the calculated maximum size. > So I wound up doing that for PredicateLockTargetHash and > PredicateLockHash, but didn't do it for SerializableXidHash. > Should I eliminate this from the first two, add it to the third, or > leave it alone? Actually, I think for SerializableXidHash we should probably just initially allocate it at its maximum size. Then it'll match the PredXact list which is allocated in full upfront, and there's no risk of being able to allocate a transaction but not register its xid. In fact, I believe there would be no way for starting a new serializable transaction to fail. Dan -- Dan R. K. Ports MIT CSAIL http://drkp.net/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
Dimitri Fontaine writes: > Tom Lane writes: >> (I was vaguely imagining that it could share most of the COMMENT >> infrastructure --- but haven't looked yet). > Well the code footprint is quite small already. Having now looked at it a bit closer, I think the syntax choice is a complete wash from an implementation standpoint: either way, we'll have a list of bison productions that build AlterObjectExtensionStmt nodes, and it goes through the same way after that. I do think that the implementation will be a lot more compact if it relies on the COMMENT infrastructure (ie, get_object_address), but that's an independent choice. So really it boils down to which syntax seems more natural and/or easier to document. As I said, I think a centralized ALTER EXTENSION syntax has some advantages from the documentation standpoint; but that's not a terribly strong argument, especially given that Dimitri has already done a patch to document things the other way. Preferences anyone? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pgsql: Extend ALTER TABLE to allow Foreign Keys to be added without ini
On Tue, 2011-02-08 at 22:58 +0100, Bernd Helmle wrote: > > --On 8. Februar 2011 12:24:11 + Simon Riggs > wrote: > > > Extend ALTER TABLE to allow Foreign Keys to be added without initial > > validation. FK constraints that are marked NOT VALID may later be > > VALIDATED, which uses an ShareUpdateExclusiveLock on constraint table and > > RowShareLock on referenced table. Significantly reduces lock strength and > > duration when adding FKs. New state visible from psql. > > Hi Simon, > > It seems this commit misses updates to system catalogs documentation > regarding pg_constraint.conisvalidated. So it does. Thanks I will rectify. -- Simon Riggs http://www.2ndQuadrant.com/books/ PostgreSQL Development, 24x7 Support, Training and Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] btree_gist (was: CommitFest progress - or lack thereof)
Robert Haas wrote: > On Fri, Feb 4, 2011 at 12:02 PM, Oleg Bartunov wrote: > > Aha, > > > > Teodor sent it to the list Dec 28, see > > http://archives.postgresql.org/message-id/4D1A1677.80300%40sigaev.ru > > > > After a month I didn't see any activity on this patch, so I added it to > > https://commitfest.postgresql.org/action/patch_view?id=350 Jan 21 > > > > Now, I realised it was too late. Added to current commitfest. > > I think this patch missed the deadline for the current CommitFest. > It's true that it was posted to the list in time, but it's just > madness to think we can do review in a meaningful way and get done in > a reasonable time if every patch that's ever been posted is fair game > to be added to the CommitFest at any point. I believe it's a > generally accepted principle that adding things to the CommitFest > properly is the submitter's responsibility. > > That having been said, this looks like a fairly mechanical change to a > contrib module that you and Teodor wrote. So I'd say if you guys are > confident that it's correct, go ahead and commit. If you need it > reviewed, or if you can't commit it in the next week or so, I think > it's going to have to wait for 9.2. If you need a +1, agreed. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
Dimitri Fontaine writes: > Tom Lane writes: >>> [ ALTER object SET EXTENSION versus ALTER EXTENSION ADD object ] >> OK, that seems like an equally reasonable syntax; although doing it the >> way I was thinking might have less of a code and documentation footprint >> (I was vaguely imagining that it could share most of the COMMENT >> infrastructure --- but haven't looked yet). In any case it seems like >> this is a good piece to do next, since the required functionality is >> clear and it's essential for more than one reason. > Well the code footprint is quite small already. I was thinking about it more from the documentation side: touch one man page versus touch nearly all the ALTER pages. In addition, if it's all on the ALTER EXTENSION page then we can reference that as a list of the types of objects managed by extensions, which is something that's documented nowhere right now. Has anybody got any strong preference for one of these alternatives on more abstract grounds? You could cite ALTER OBJECT SET NAMESPACE/OWNER as precedents for the one syntax, but COMMENT seems like a precedent for the other, so that consideration seems like nearly a wash to me. Any other opinions out there? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] REVIEW: PL/Python table functions
On 07/02/11 06:10, Hitoshi Harada wrote: > 2011/2/7 Jan Urbański : >> On 04/02/11 16:26, Hitoshi Harada wrote: >>> 2011/1/28 Jan Urbański : On 27/01/11 00:41, Jan Urbański wrote: > I'm also attaching an updated version that should apply on top of my > github refactor branch (or incrementally over the new set of refactor > patches that I will post shortly to the refactor thread). Attached is a patch for master, as the refactorings have already been merged. >>> >>> Sorry, but could you update your patch? Patching it against HEAD today >>> makes rej. >> >> Sure, here's an updated patch. > > Thanks, > > I revisited the problem of typeinfo cache, and I guess this is not > what you want; > > [problems, now reflected in new regression tests] > > The PL/pgSQL case you pointed earlier is consistent because it fetches > the values positionally. The column name is only an on-demand > labeling. However, for mapping dict of python into the table row > should always map it by key. At least the function author (including > me :P) expects it. Yes, you're right. I tried to be too cute checking if the arguments are binary coercible to fit the new record description. This time I'm just checking if the record descriptor changed at all, and if so recaching the I/O funcs. I hope this version does the right thing, while still avoiding the performance hit of looking up I/O funcs every time a row is returned. Actually, PL/Perl *does* look up the I/O funcs every time, so in the worst case I can just drop this optimisation. 
But let's hope I got it right this time :) Thanks again for the review, Jan diff --git a/src/pl/plpython/Makefile b/src/pl/plpython/Makefile index 16d78ae..167393e 100644 *** a/src/pl/plpython/Makefile --- b/src/pl/plpython/Makefile *** REGRESS = \ *** 79,84 --- 79,85 plpython_types \ plpython_error \ plpython_unicode \ + plpython_composite \ plpython_drop # where to find psql for running the tests PSQLDIR = $(bindir) diff --git a/src/pl/plpython/expected/plpython_composite.out b/src/pl/plpython/expected/plpython_composite.out index ...a8f8a62 . *** a/src/pl/plpython/expected/plpython_composite.out --- b/src/pl/plpython/expected/plpython_composite.out *** *** 0 --- 1,355 + CREATE FUNCTION multiout_simple(OUT i integer, OUT j integer) AS $$ + return (1, 2) + $$ LANGUAGE plpythonu; + SELECT multiout_simple(); + multiout_simple + - + (1,2) + (1 row) + + SELECT * FROM multiout_simple(); + i | j + ---+--- + 1 | 2 + (1 row) + + SELECT i, j + 2 FROM multiout_simple(); + i | ?column? + ---+-- + 1 |4 + (1 row) + + SELECT (multiout_simple()).j + 3; + ?column? 
+ -- + 5 + (1 row) + + CREATE FUNCTION multiout_simple_setof(n integer = 1, OUT integer, OUT integer) RETURNS SETOF record AS $$ + return [(1, 2)] * n + $$ LANGUAGE plpythonu; + SELECT multiout_simple_setof(); + multiout_simple_setof + --- + (1,2) + (1 row) + + SELECT * FROM multiout_simple_setof(); + column1 | column2 + -+- +1 | 2 + (1 row) + + SELECT * FROM multiout_simple_setof(3); + column1 | column2 + -+- +1 | 2 +1 | 2 +1 | 2 + (3 rows) + + CREATE FUNCTION multiout_record_as(typ text, +first text, OUT first text, +second integer, OUT second integer, +retnull boolean) RETURNS record AS $$ + if retnull: + return None + if typ == 'dict': + return { 'first': first, 'second': second, 'additionalfield': 'must not cause trouble' } + elif typ == 'tuple': + return ( first, second ) + elif typ == 'list': + return [ first, second ] + elif typ == 'obj': + class type_record: pass + type_record.first = first + type_record.second = second + return type_record + $$ LANGUAGE plpythonu; + SELECT * from multiout_record_as('dict', 'foo', 1, 'f'); + first | second + ---+ + foo | 1 + (1 row) + + SELECT multiout_record_as('dict', 'foo', 1, 'f'); + multiout_record_as + + (foo,1) + (1 row) + + SELECT *, s IS NULL as snull from multiout_record_as('tuple', 'xxx', NULL, 'f') AS f(f, s); + f | s | snull + -+---+--- + xxx | | t + (1 row) + + SELECT *, f IS NULL as fnull, s IS NULL as snull from multiout_record_as('tuple', 'xxx', 1, 't') AS f(f, s); + f | s | fnull | snull + ---+---+---+--- +| | t | t + (1 row) + + SELECT * from multiout_record_as('obj', NULL, 10, 'f'); + first | second + ---+ +| 10 + (1 row) + + CREATE FUNCTION multiout_setof(n integer, +OUT power_of_2 integer, +OUT length integer) RETURNS SETOF record AS $$ + for i in range(n): + power = 2 ** i + length = plpy.execute("select le
Re: [HACKERS] Per-column collation, the finale
On 8 February 2011 21:08, Peter Eisentraut wrote: > On tor, 2011-02-03 at 18:36 -0500, Noah Misch wrote: >> Looks good and tests well. I've attached the same benchmark script >> with updated timings, and I've marked the patch Ready for Committer. > > Committed. Thanks everyone. Awesome work Peter. A few questions:
postgres=# create table meow (id serial, stuff text collate "de_XX");
NOTICE: CREATE TABLE will create implicit sequence "meow_id_seq" for serial column "meow.id"
ERROR: collation "de_XX" for current database encoding "UTF8" does not exist
LINE 1: create table meow (id serial, stuff text collate "de_XX");
I wouldn't expect to see that first notice. Shouldn't that step come a bit later? A bit of weirdness: I'm allowed to specify more than one collation on a single column ordering...
postgres=# select * from meow order by stuff collate "en_GB" collate "de_DE" desc;
 id | stuff
----+-------
  2 | meow2
  1 | meow
(2 rows)
Is this the same principle as casting, where they can be chained? Which one wins in this case? Although if anyone is actually doing this, then it's just silly anyway. (says Thom having just done it) Also, if a locale is installed after initdb, is it then impossible to get pg_collation to pick up that new locale? If a locale is somehow removed from the system, what happens on the database side when attempting to use a collated column? (I don't wish to TIAS on my own system) -- Thom Brown Twitter: @darkixion IRC (freenode): dark_ixion Registered Linux user: #516935 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
Tom Lane writes: > OK, that seems like an equally reasonable syntax; although doing it the > way I was thinking might have less of a code and documentation footprint > (I was vaguely imagining that it could share most of the COMMENT > infrastructure --- but haven't looked yet). In any case it seems like > this is a good piece to do next, since the required functionality is > clear and it's essential for more than one reason. Well the code footprint is quite small already. When the grammar change and the alter.c addition are in, it boils down to:

/*
 * ALTER OBJECT SET EXTENSION
 *
 * This form allows for upgrading from a previous PostgreSQL version to one
 * supporting extensions, or to upgrade user objects to get to use the
 * extension infrastructure.
 *
 * All we have to do is record an INTERNAL dependency between the selected
 * object and the extension, which must of course already exist.
 */
void
AlterObjectExtension(const char *extname, Oid classId, Oid objectId, Oid objectSubId)
{
	ObjectAddress *extension, *object;

	if (!superuser())
		ereport(ERROR,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("must be superuser to SET EXTENSION")));

	extension = (ObjectAddress *) palloc(sizeof(ObjectAddress));
	extension->classId = ExtensionRelationId;
	extension->objectId = get_extension_oid(extname, false);
	extension->objectSubId = 0;

	object = (ObjectAddress *) palloc(sizeof(ObjectAddress));
	object->classId = classId;
	object->objectId = objectId;
	object->objectSubId = objectSubId;

	recordDependencyOn(object, extension, DEPENDENCY_INTERNAL);

	return;
}

Of course it needs some changes now, but well… I guess you'll be done with that before I get to rebase my patch; I can only target tomorrow now. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pgsql: Extend ALTER TABLE to allow Foreign Keys to be added without ini
--On 8. Februar 2011 12:24:11 + Simon Riggs wrote: Extend ALTER TABLE to allow Foreign Keys to be added without initial validation. FK constraints that are marked NOT VALID may later be VALIDATED, which uses an ShareUpdateExclusiveLock on constraint table and RowShareLock on referenced table. Significantly reduces lock strength and duration when adding FKs. New state visible from psql. Hi Simon, It seems this commit misses updates to system catalogs documentation regarding pg_constraint.conisvalidated. Bernd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI patch version 14
Heikki Linnakangas wrote: > LOG: shmemsize 3153112 > LOG: actual 2339864 > > Which is a pretty big overestimate, percentage-wise. Taking > RWConflictPool into account in PredicateLockShmemSize() fixes the > underestimate, but makes the overestimate correspondingly larger. > I've never compared the actual and estimated shmem sizes of other > parts of the backend, so I'm not sure how large discrepancies we > usually have, but that seems quite big. I found two things which probably explain that: (1) When HTABs are created, there is the max_size, which is what the PredicateLockShmemSize function must use in its calculations, and the init_size, which is what will initially be allocated (and so, is probably what you see in the usage at the end of the InitPredLocks function). That's normally set to half the maximum. (2) The predicate lock and lock target initialization code was initially copied and modified from the code for heavyweight locks. The heavyweight lock code adds 10% to the calculated maximum size. So I wound up doing that for PredicateLockTargetHash and PredicateLockHash, but didn't do it for SerializableXidHash. Should I eliminate this from the first two, add it to the third, or leave it alone? So if the space was all in HTABs, you might expect shmemsize to be 110% of the estimated maximum, and actual (at the end of the init function) to be 50% of the estimated maximum. So the shmemsize would be (2.2 * actual) at that point. The difference isn't that extreme because the list-based pools now used for some structures are allocated at full size without padding. In addition to the omission of the RWConflictPool (which is a biggie), the OldSerXidControlData estimate was only for a *pointer* to it, not the structure itself. The attached patch should correct the shmemsize numbers. 
-Kevin

*** a/src/backend/storage/lmgr/predicate.c
--- b/src/backend/storage/lmgr/predicate.c
***************
*** 1190,1200 **** PredicateLockShmemSize(void)
  	size = add_size(size, hash_estimate_size(max_table_size,
  											 sizeof(SERIALIZABLEXID)));

  	/* Head for list of finished serializable transactions. */
  	size = add_size(size, sizeof(SHM_QUEUE));

  	/* Shared memory structures for SLRU tracking of old committed xids. */
! 	size = add_size(size, sizeof(OldSerXidControl));
  	size = add_size(size, SimpleLruShmemSize(NUM_OLDSERXID_BUFFERS, 0));

  	return size;
--- 1190,1205 ----
  	size = add_size(size, hash_estimate_size(max_table_size,
  											 sizeof(SERIALIZABLEXID)));

+ 	/* rw-conflict pool */
+ 	size = add_size(size, RWConflictPoolHeaderDataSize);
+ 	size = add_size(size, mul_size((Size) max_table_size,
+ 								   RWConflictDataSize));
+
  	/* Head for list of finished serializable transactions. */
  	size = add_size(size, sizeof(SHM_QUEUE));

  	/* Shared memory structures for SLRU tracking of old committed xids. */
! 	size = add_size(size, sizeof(OldSerXidControlData));
  	size = add_size(size, SimpleLruShmemSize(NUM_OLDSERXID_BUFFERS, 0));

  	return size;

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions support for pg_dump, patch v27
Tom Lane writes: > I have gone ahead and committed the core and documentation parts of this Thank you! And I'd like to take the time to thank all of you who helped me reach this goal, but that ranges to about everyone here who I talked to at any conference those last two or three years (I still remember talking about that at PGDay 2008 in Prato, but the ball was already rolling if only in my head). You might also be interested to know that the research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 258862. > patch, but not as yet any of the contrib changes; that is, all the > contrib modules are still building old-style. I had intended to do it > in two steps all along, so as to get some buildfarm proof for the thesis > that we haven't broken old-style add-on modules. However, there is now > an additional motivation for delay: so long as we haven't pulled the > trigger on changing the contrib modules, this is an experimental feature > that nothing else depends on. Worst case, if we can't get to a > satisfactory resolution of the pg_upgrade and ALTER EXTENSION UPGRADE > issues, we can ship 9.1 as-is and just label the EXTENSION commands as > subject to change. I will however now go work on those issues. Wise move. Again :) Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
Dimitri Fontaine writes: > Tom Lane writes: >> 2. Invent a command "ALTER EXTENSION extname ADD object-description" >> that simply adds a pg_depend entry marking an existing object as >> belonging to the extension. I think this was going to be part of the >> plan anyway for ALTER EXTENSION UPGRADE, specifically we need that for >> the bootstrap case of collecting some loose pre-existing objects into >> an extension. > In the upgrade patch, that's spelled ALTER OBJECT foo SET EXTENSION bar; > and does exactly what you're proposing. OK, that seems like an equally reasonable syntax; although doing it the way I was thinking might have less of a code and documentation footprint (I was vaguely imagining that it could share most of the COMMENT infrastructure --- but haven't looked yet). In any case it seems like this is a good piece to do next, since the required functionality is clear and it's essential for more than one reason. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] updated patch for foreach stmt
* Robert Haas (robertmh...@gmail.com) wrote:
> I think the syntax Tom suggested before was FOREACH thingy IN ARRAY
> arr rather than just FOREACH thingy IN arr. That's probably a good
> idea, because it gives us an escape hatch against needing to invent
> yet another variant of this syntax - the word immediately following IN
> can be known with confidence to be intended as a keyword rather than
> as part of the expression.

Alright, so, for lack of anything better to do, I went ahead and marked it Ready for Committer. If that was wrong or someone wants to do another review, etc., let me know.

Thanks,

Stephen
Re: [HACKERS] Extensions support for pg_dump, patch v27
I have gone ahead and committed the core and documentation parts of this patch, but not as yet any of the contrib changes; that is, all the contrib modules are still building old-style. I had intended to do it in two steps all along, so as to get some buildfarm proof for the thesis that we haven't broken old-style add-on modules. However, there is now an additional motivation for delay: so long as we haven't pulled the trigger on changing the contrib modules, this is an experimental feature that nothing else depends on. Worst case, if we can't get to a satisfactory resolution of the pg_upgrade and ALTER EXTENSION UPGRADE issues, we can ship 9.1 as-is and just label the EXTENSION commands as subject to change. I will however now go work on those issues. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [pgsql-fr-generale] create an extension of postgresql 9 with Visual C++ 2008 express edition
Hello, we usually speak French on this list, as a rule :) For an English-speaking forum, please consider pgsql-general.

michel wildcat writes:
> I am a computer student in Belgium. For my academic project I am working on
> an extension of PostgreSQL 9 under Win XP, creating a DLL in Visual C++
> 2008 - I am new to both environments. To start and understand how it works,
> I tried to compile the example complex.c which is in the /tutorial
> directory of the PostgreSQL sources, but unfortunately I get lots of
> compilation errors although I added the various necessary "include"s.
> I would like to know if there is some kind of tutorial that explains how
> to develop a DLL for PostgreSQL using Visual C++ Express (the necessary
> config, the libraries to include, etc.), based on the /tutorial/complex.c
> file for example.

Better to try with a contrib module first, if possible one that looks like the extension you want to develop.

Regards,

--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
[HACKERS] create an extension of postgresql 9 with Visual C++ 2008 express edition
Hello, I am a computer student in Belgium. For my academic project I am working on an extension of PostgreSQL 9 under Win XP, creating a DLL in Visual C++ 2008 - I am new to both environments. To start and understand how it works, I tried to compile the example complex.c which is in the /tutorial directory of the PostgreSQL sources, but unfortunately I get lots of compilation errors although I added the various necessary "include"s. I would like to know if there is some kind of tutorial that explains how to develop a DLL for PostgreSQL using Visual C++ Express (the necessary config, the libraries to include, etc.), based on the /tutorial/complex.c file for example. Thank you in advance.
Re: [HACKERS] postponing some large patches to 9.2
> This discussion reveals that it's time to start making some
> decisions about what can be accomplished for 9.1 and what must be
> postponed to 9.2. The big ones I think we should postpone are:

First off, I think that this is a little premature. As others have pointed out, the unusual schedule this development cycle indicates that we ought to give patch authors *some* leeway; otherwise we're penalizing patch authors who worked hard on 9.0 Beta compared to people who ignored 9.0. I don't think that extends to another commitfest, but I would be OK with extending this commitfest by two weeks to ensure that every feature gets a full review.

> - Range Types. This is a large patch which was submitted for the
> first time to the last CommitFest of the cycle, and the first version
> that had no open TODO items was posted yesterday, three-quarters of
> the way through that last CommitFest. Some good review has been done.
> While more is probably needed, I think we should feel good about
> what's been accomplished and mark this one Returned with Feedback.

Aside from it needing more feedback, and my being personally disappointed, I think this is inevitable. Jeff seems OK with it too.

> - ALTER EXTENSION UPGRADE. This is another large patch which was
> submitted for the first time to the last CommitFest of the cycle. The

I thought we'd *already* decided to postpone this, due to the spec still being muddy. No?

> - FOR KEY LOCK tables. This patch, unfortunately, has not gotten a
> lot of review. But it's a major, potentially destabilizing change

I think this needs to get a full review before we make any decisions on it.

> - SQL/MED. This patch unfortunately kind of stalled for a long time.
> However, I believe that Heikki is now working actively on the core
> patch, so I'm hopeful for some activity here soon.

I'll point out that SQL/MED with file_fdw would be a majorly useful feature, even if other FDWs didn't yet work. So a partial commit is also a possibility.
> - Writeable CTEs. Tom has promised to look at this, but doesn't seem > to be in a hurry. I think it would be tremendously unfair to the author to punt this feature unless there's fundamental flaws in it. Its development started back in 9.0, and it's never really gotten enough review/attention. --Josh Berkus -- -- Josh Berkus PostgreSQL Experts Inc. http://www.pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] updated patch for foreach stmt
* Pavel Stehule (pavel.steh...@gmail.com) wrote: > There is only bad keywords in doc - SCALE instead SLICE and a maybe a > usage of slicing need a example. Err, yeah, a couple of stupid documentation issues, sorry about that. commit 9460c0831f5de71e31823b7e9d8511d2d8124776 Author: Stephen Frost Date: Tue Feb 8 16:15:03 2011 -0500 Add ARRAY keyword to example, ewps. commit 34a8ffd8d4cfe42bb4f698564f16bd468b9f2613 Author: Stephen Frost Date: Tue Feb 8 16:14:17 2011 -0500 Tabs are bad, mmmkay. commit cf1ebcb7e4905cc31cd58b4fd9fa90cd488cc0c0 Author: Stephen Frost Date: Tue Feb 8 16:13:22 2011 -0500 PL/PgSQL documentation cleanups SCALE -> SLICE (no clue where SCALE came from..) and clarify what a SLICE is, really. Thanks, Stephen *** a/doc/src/sgml/plpgsql.sgml --- b/doc/src/sgml/plpgsql.sgml *** *** 300,310 $$ LANGUAGE plpgsql; All variables used in a block must be declared in the declarations section of the block. ! (The only exceptions are that the loop variable of a FOR loop ! iterating over a range of integer values is automatically declared as an ! integer variable, and likewise the loop variable of a FOR loop ! iterating over a cursor's result is automatically declared as a ! record variable.) --- 300,308 All variables used in a block must be declared in the declarations section of the block. ! (The only exceptions are the loop variables of FOR and ! FOREACH loops which are automatically declared as the ! appropriate variable type to match what to loop is over.) *** *** 1359,1375 GET DIAGNOSTICS integer_var = ROW_COUNT; ! A FOR statement sets FOUND true ! if it iterates one or more times, else false. This applies to ! all four variants of the FOR statement (integer ! FOR loops, record-set FOR loops, ! dynamic record-set FOR loops, and cursor ! FOR loops). ! FOUND is set this way when the ! FOR loop exits; inside the execution of the loop, ! FOUND is not modified by the ! FOR statement, although it might be changed by the ! 
execution of other statements within the loop body. --- 1357,1375 ! A FOR or FOREACH statement sets ! FOUND to true if it iterates one or more times, ! else to false. This applies to all four variants of the ! FOR statement (integer FOR loops, record-set ! FOR loops, dynamic record-set FOR loops, and ! cursor FOR loops) and all variants of the ! FOREACH statement (currently only ARRAY ! FOREACH loops). FOUND is set this ! way when the FOR or FOREACH loop exits; ! inside the execution of the loop, FOUND is not ! modified by the FOR or FOREACH statement, ! although it might be changed by the execution of other statements ! within the loop body. *** *** 1910,1918 END CASE; With the LOOP, EXIT, ! CONTINUE, WHILE, and FOR ! statements, you can arrange for your PL/pgSQL ! function to repeat a series of commands. --- 1910,1918 With the LOOP, EXIT, ! CONTINUE, WHILE, FOR, ! and FOREACH statements, you can arrange for your ! PL/pgSQL function to repeat a series of commands. *** *** 2238,2243 END LOOP label ; --- 2238,2287 + + Looping Through Arrays + + + Similar to a FOR loop is the FOREACH loop. + FOREACH is used to loop over multi-value variables, such + as ARRAYs. Other multi-value variables may be added to FOREACH + later. Note that FOREACH can be thought of horizantally + looping, whereas FOR can be thought of a vertical loop. + The FOREACH statement to loop over an ARRAY is: + + + <
Re: [HACKERS] [pgsql-general 2011-1-21:] Are there any projects interested in object functionality? (+ rule bases)
> o in the recent there are no efforts known to experiment with
> reference types, methods, or rule inference on top of PostgreSQL --
> advice that can be given mostly points to the given documented
> functionality

Correct, AFAIK.

> o extensions of PostgreSQL to support such a kind of usage have to be
> expected to be rejected from integration to the code base core --
> i.e., if they are done, students have to be told «you can't expect
> this to become a part of PostgreSQL»

Not necessarily. We "rule out" *very* few things from PostgreSQL; I think the TODO list only has 3 ideas which are contraindicated. However, the warning is that this particular *set* of ideas has some very high hurdles to jump before it could be considered seriously for core, and that none of the existing committers seem interested in helping with it. So the level of difficulty for the implementer would be considerably greater than for many other patches of less invasiveness and clearer apparent utility. Among the hurdles are:

a. performance: you'd have to work out how to make nested object resolution not take forever and burn up the CPUs

b. resolution: you'd need to come up with an object naming practice which complements, instead of conflicting with, the SQL-standard syntax

c. utility: you'd have to demonstrate why all this was actually useful.

> Is this understood correctly, especially the last point, or did
> Robert/Tom just specifically address syntactical conflicts (between
> schema and object semantics) with the point notation?

Syntactic conflicts are also significant, as anyone who's used EDB's "packages" mod can tell you. So these would need to be worked out as well, and NOT in a way which breaks backwards compatibility.

> Otherwise, the striking lack of academical initiatives in the area of OO
> and rule inference on top of PostgreSQL appears to me as a demand to

Hmmm. I don't know about that; I've never seen evidence that academics *cared* whether or not their code got into -core.
--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
Re: [HACKERS] Per-column collation, the finale
On tor, 2011-02-03 at 18:36 -0500, Noah Misch wrote: > Looks good and tests well. I've attached the same benchmark script > with updated timings, and I've marked the patch Ready for Committer. Committed. Thanks everyone. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] updated patch for foreach stmt
2011/2/8 Stephen Frost :
> * Robert Haas (robertmh...@gmail.com) wrote:
>> Amen to that!
>
> Hopefully it helped. :)
>
>> I think the syntax Tom suggested before was FOREACH thingy IN ARRAY
>> arr rather than just FOREACH thingy IN arr. That's probably a good
>> idea, because it gives us an escape hatch against needing to invent
>> yet another variant of this syntax - the word immediately following IN
>> can be known with confidence to be intended as a keyword rather than
>> as part of the expression.
>
> Alright, alright, *I* don't care that much, though I do feel it's a bit
> excessive. Updated patch against HEAD attached.

I think it is a good idea. Even though I have no plans to expand plpgsql in the next year :), it really opens the door for later changes. And it's more robust - we can check whether the parameter is really an array or something else.

Regards

Pavel Stehule

>
> commit a5d32fa41fbbbd9ace465f62be714366990061d4
> Author: Stephen Frost
> Date: Tue Feb 8 15:57:40 2011 -0500
>
>     PL/PgSQL FOREACH - Add ARRAY keyword
>
>     Add ARRAY as required after IN when using FOREACH, to
>     future-proof against later kinds of FOREACH commands.
>
> Thanks,
>
> Stephen
Re: [HACKERS] updated patch for foreach stmt
2011/2/8 Stephen Frost :
> Greetings,
>
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
>> I resend a patch with last update of this patch
>
> Alright, so, like I said, I really like this feature and would like to
> see it included. To that end, I've done perhaps a bit more than a
> review of the patch. Pavel, if you could go over the changes I've made
> and review them and let me know if you see any problems, I'd appreciate
> it. I've tried to get it ready for a committer as much as I can without
> being one. :)
>
> I moved the array iteration over into arrayfuncs.c, cleaned it up quite
> a bit, cleaned up the pl/pgsql foreach function, improved the PL/PgSQL
> documentation to understand FOREACH as another top-level command, added
> comments all over the place, etc.

It's looking good - thank you. There are just some bad keywords in the docs - SCALE instead of SLICE - and maybe the usage of slicing needs an example. It is nice.

Regards

Pavel Stehule

> Passes all regressions too.
>
> commit 19deaf69a4dabfa4a223a6dcd36570866ad0bd3c
> Author: Stephen Frost
> Date: Tue Feb 8 15:15:48 2011 -0500
>
>     PL/PgSQL FOREACH cleanup
>
>     Define and rename element OID to be more consistant, ensure
>     that the right name gets returned in error messages, and fix
>     regression output to match new error message (grammar cleanup).
> > commit f88fd2ab5419f9a2784677038b3fb01053c69163 > Merge: f191af1 8c6e3ad > Author: Stephen Frost > Date: Tue Feb 8 14:28:18 2011 -0500 > > Merge branch 'master' of git://git.postgresql.org/git/postgresql into > plpgsql_foreach > > commit f191af16f9d3e5ae0072e61c1b58713040cc8d64 > Author: Stephen Frost > Date: Tue Feb 8 14:27:05 2011 -0500 > > PL/PgSQL FOREACH Minor Whitespace Cleanup > > commit 612cf5485f202a49aec70cf32f74d19d0d130b6b > Author: Stephen Frost > Date: Tue Feb 8 14:06:06 2011 -0500 > > Improving FOREACH, code and documentation > > This patch moves and reworks much of the array iteration code > that FOREACH had been implemented with to be part of arrayfuncs.c > and exported through utils/array.h. It also cleans up the error > handling and set up pieces of the FOREACH handling in pl_exec.c > Lastly, the documentation and comments are updated and improved. > > commit 89058b79e43311e8f37af16c3fc17b622dc97578 > Author: Stephen Frost > Date: Sun Feb 6 14:14:04 2011 -0500 > > Add FOREACH top-level PL/PgSQL command > > This patch adds a new top-level PL/PgSQL command called FOREACH which > is intended to be for iterating over multi-value variables. This also > includes the first FOREACH type, an ARRAY iteration capability. > > Patch by Pavel Stehule. > > Thanks, > > Stephen > > -BEGIN PGP SIGNATURE- > Version: GnuPG v1.4.10 (GNU/Linux) > > iEYEARECAAYFAk1Rpu8ACgkQrzgMPqB3kiiuTQCfdY8Cwg5DVuvu2xKpcv6M7QQ1 > +TwAnR5ZFXsGdAwgHwQEprcYIlp8t0wy > =DAjZ > -END PGP SIGNATURE- > > -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
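For the slicing usage Pavel asks an example for, something along these lines illustrates the SLICE form described in the patch's documentation (a hedged sketch; the array literal is arbitrary):

```sql
DO $$
DECLARE
    s int[];
BEGIN
    -- With SLICE 1, each iteration assigns a one-dimensional sub-array
    -- to the loop variable instead of a single element.
    FOREACH s SLICE 1 IN ARRAY ARRAY[[1,2],[3,4]] LOOP
        RAISE NOTICE 'slice = %', s;   -- {1,2} on the first pass, {3,4} on the second
    END LOOP;
END;
$$;
```

With SLICE 0 (or no SLICE clause) the same loop would visit the four elements individually.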
Re: [HACKERS] updated patch for foreach stmt
* Robert Haas (robertmh...@gmail.com) wrote: > Amen to that! Hopefully it helped. :) > I think the syntax Tom suggested before was FOREACH thingy IN ARRAY > arr rather than just FOREACH thingy IN arr. That's probably a good > idea, because it gives us an escape hatch against needing to invent > yet another variant of this syntax - the word immediately following IN > can be known with confidence to be intended as a keyword rather than > as part of the expression. Alright, alright, *I* don't care that much, though I do feel it's a bit excessive. Updated patch against HEAD attached. commit a5d32fa41fbbbd9ace465f62be714366990061d4 Author: Stephen Frost Date: Tue Feb 8 15:57:40 2011 -0500 PL/PgSQL FOREACH - Add ARRAY keyword Add ARRAY as required after IN when using FOREACH, to future-proof against later kinds of FOREACH commands. Thanks, Stephen *** a/doc/src/sgml/plpgsql.sgml --- b/doc/src/sgml/plpgsql.sgml *** *** 300,310 $$ LANGUAGE plpgsql; All variables used in a block must be declared in the declarations section of the block. ! (The only exceptions are that the loop variable of a FOR loop ! iterating over a range of integer values is automatically declared as an ! integer variable, and likewise the loop variable of a FOR loop ! iterating over a cursor's result is automatically declared as a ! record variable.) --- 300,308 All variables used in a block must be declared in the declarations section of the block. ! (The only exceptions are the loop variables of FOR and ! FOREACH loops which are automatically declared as the ! appropriate variable type to match what to loop is over.) *** *** 1359,1375 GET DIAGNOSTICS integer_var = ROW_COUNT; ! A FOR statement sets FOUND true ! if it iterates one or more times, else false. This applies to ! all four variants of the FOR statement (integer ! FOR loops, record-set FOR loops, ! dynamic record-set FOR loops, and cursor ! FOR loops). ! FOUND is set this way when the ! FOR loop exits; inside the execution of the loop, ! 
FOUND is not modified by the ! FOR statement, although it might be changed by the ! execution of other statements within the loop body. --- 1357,1375 ! A FOR or FOREACH statement sets ! FOUND to true if it iterates one or more times, ! else to false. This applies to all four variants of the ! FOR statement (integer FOR loops, record-set ! FOR loops, dynamic record-set FOR loops, and ! cursor FOR loops) and all variants of the ! FOREACH statement (currently only ARRAY ! FOREACH loops). FOUND is set this ! way when the FOR or FOREACH loop exits; ! inside the execution of the loop, FOUND is not ! modified by the FOR or FOREACH statement, ! although it might be changed by the execution of other statements ! within the loop body. *** *** 1910,1918 END CASE; With the LOOP, EXIT, ! CONTINUE, WHILE, and FOR ! statements, you can arrange for your PL/pgSQL ! function to repeat a series of commands. --- 1910,1918 With the LOOP, EXIT, ! CONTINUE, WHILE, FOR, ! and FOREACH statements, you can arrange for your ! PL/pgSQL function to repeat a series of commands. *** *** 2238,2243 END LOOP label ; --- 2238,2285 + + Looping Through Arrays + + + Similar to a FOR loop is the FOREACH loop. + FOREACH is used to loop over multi-value variables, such + as ARRAYs. Other multi-value variables may be added to FOREACH + later. Note that FOREACH can be thought of horizantally + looping, whereas FOR can be thought of a vertical loop. + The FOREACH statement to loop over an ARRAY is: + + + <
Re: [HACKERS] Extensions versus pg_upgrade
Tom Lane writes: > Personally I find the extension stuff to be a bigger deal than anything > else remaining in the commitfest. Also, I've fixed a number of > pre-existing bugs in passing, and I'd have to go extract those changes > out of the current extensions patch if we abandon it now. So I'd like > to keep pushing ahead on this. Stealing words from Kevin, WOO-HOO :) I'll try to continue devote as much time as I did already to assist as much as I can here. > After further reflection I think that the pg_upgrade situation can be > handled like this: > > 1. Provide a way for CREATE EXTENSION to not run the script --- either > add a grammar option for that, or just have a back-door flag that > pg_upgrade can set via one of its special kluge functions. (Hm, > actually we'd want it to install the old version number and comment > too, so maybe just have a kluge function to create a pg_extension > entry that agrees with the old entry.) In the upgrade patch there's provision for not running the script: CREATE WRAPPER EXTENSION foo; That's pretty useful indeed. What it does now is read the control file to init the comment and other fields, but extversion is forced NULL. That allows to have special rules in how UPGRADE will pick a script. There's even code to do CREATE WRAPPER EXTENSION foo WITH SYSID 12345; We could add version and comment here for purposes of pg_upgrade. > 2. Invent a command "ALTER EXTENSION extname ADD object-description" > that simply adds a pg_depend entry marking an existing object as > belonging to the extension. I think this was going to be part of the > plan anyway for ALTER EXTENSION UPGRADE, specifically we need that for > the bootstrap case of collecting some loose pre-existing objects into > an extension. In the upgrade patch, that's spelled ALTER OBJECT foo SET EXTENSION bar; and does exactly what you're proposing. > 3. 
In --binary-upgrade mode, pg_dump doesn't suppress the individual > member objects, but instead emits ALTER EXTENSION ADD after creating > each member object. > > So that gets us to the point of being able to run pg_upgrade without > changing the contents of the extension. After that we can look at > ALTER EXTENSION UPGRADE. Well, or begin there as the code you need is already written. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
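Taken together, the dump output being sketched in this exchange might look roughly like the following (hypothetical; the extension and its member objects are chosen only for illustration):

```sql
-- Step 1: create the extension's pg_extension row without running its
-- install script (the back-door pg_upgrade behavior, or the proposed
-- CREATE WRAPPER EXTENSION).
CREATE EXTENSION cube;

-- Step 2: the member objects themselves are restored individually by
-- the binary-upgrade dump, with OIDs forced to match the old cluster
-- (not shown here).

-- Step 3: attach each pre-existing object to the extension, which adds
-- only the pg_depend membership entries.
ALTER EXTENSION cube ADD TYPE cube;
ALTER EXTENSION cube ADD FUNCTION cube_in(cstring);
```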
Re: [HACKERS] SSI patch version 14
One other nit re. the predicate lock table size GUCs: the out-of-memory case in RegisterPredicateLockingXid (predicate.c:1592 in my tree) gives the hint to increase max_predicate_locks_per_transaction. I don't think that's correct, since that GUC isn't used to size SerializableXidHash. In fact, that error shouldn't arise at all, because if there was room in PredXact to register the transaction, then there should be room to register its xid in SerializableXidHash. Except that it's possible for something else to allocate all of our shared memory and thus prevent SerializableXidHash from reaching its intended max capacity.

In general, it might be worth considering making a HTAB's max_size a hard limit, but that's a larger issue. Here, it's probably worth just removing the hint.

Dan

--
Dan R. K. Ports MIT CSAIL http://drkp.net/
Re: [HACKERS] Extensions versus pg_upgrade
Tom Lane writes:
> that they might be out on disk. Suppose that you have the cube
> extension installed and there are some user tables containing columns of
> type cube[]. Those arrays are going to have type cube's OID embedded in
> them. If cube has a different OID after pg_upgrade then the arrays are
> broken.

I forgot about the internals of arrays. Sure, that's a problem here.

> Even letting an extension script run and create data types that weren't
> there at all before is problematic, since those types could easily claim
> OIDs that need to be reserved for pre-existing types that appear later
> in the dump script.
>
> Similar issues arise for the other cases where pg_upgrade is forcing OID
> assignments; it's not doing that just for fun.

There's provision for forcing the OID of an extension at CREATE EXTENSION time in the UPGRADE patch, but it does not handle the extension objects. I thought system OIDs and user OIDs couldn't clash because of the magic 16384 counter assignment at initdb, so I'm having a hard time understanding exactly what the problem case is. I will spend time on it tomorrow, if that still helps.

Regards,

--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Re: [HACKERS] Named restore points
On 8 February 2011 19:53, Simon Riggs wrote: > On Tue, 2011-02-08 at 14:07 -0300, Euler Taveira de Oliveira wrote: > >> Because named restore point is a noop xlog record; besides, transaction and >> time involves xlog records that contain data. > > Committed. Thanks for the patch and the review. > > I changed the patch to require wal_level > minimal, rather than > archive_mode = on. This could do with a bit more documentation about usage. Below the Backup Control Functions table (http://developer.postgresql.org/pgdocs/postgres/functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE), each function has a paragraph detailing what it does. Also, I notice you can easily write over a label. The case I'm thinking of is someone in psql creating a named restore point, then later on, they go in again, accidentally cursor up and select the previous statement and create it again. Would this mean that the previous label is lost, or would it be the case that any subsequent duplicate labels would have no effect unless the WAL files with the original label in were consumed? In either case, a note in the docs about this would be useful. And I don't see these label creations getting logged either. Could we output that to the log because at least then users can grep the directory for labels, and, in most cases, the time they occurred? -- Thom Brown Twitter: @darkixion IRC (freenode): dark_ixion Registered Linux user: #516935 -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
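For the documentation angle Thom raises, basic usage of the committed function looks like this (a usage sketch; per the patch, wal_level must be above minimal, and the recovery.conf parameter comes from the companion recovery-target work):

```sql
-- On the primary: write a named restore point into the WAL stream.
-- The function returns the WAL location of the restore-point record.
SELECT pg_create_restore_point('before_schema_change');

-- Later, during archive recovery, recovery.conf can target it:
--   recovery_target_name = 'before_schema_change'
```

Note that nothing prevents reusing a label; recovery stops at the first restore-point record whose name matches, which is why the overwrite behavior Thom describes deserves a documentation note.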
Re: [HACKERS] Extensions support for pg_dump, patch v27
Tom Lane writes: > Yeah, I deleted that relocatable test because it's redundant: > control->schema cannot be set for a relocatable extension, > cf the test in read_extension_control_file. I missed that you kept this test in your version of the patch. Sorry for noise. Regardsm -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] updated patch for foreach stmt
On Tue, Feb 8, 2011 at 3:26 PM, Stephen Frost wrote: > Greetings, > > * Pavel Stehule (pavel.steh...@gmail.com) wrote: >> I resend a patch with last update of this patch > > Alright, so, like I said, I really like this feature and would like to > see it included. To that end, I've done perhaps a bit more than a > review of the patch. Pavel, if you could go over the changes I've made > and review them and let me know if you see any problems, I'd appreciate > it. I've tried to get it ready for a committer as much as I can without > being one. :) Amen to that! I think the syntax Tom suggested before was FOREACH thingy IN ARRAY arr rather than just FOREACH thingy IN arr. That's probably a good idea, because it gives us an escape hatch against needing to invent yet another variant of this syntax - the word immediately following IN can be known with confidence to be intended as a keyword rather than as part of the expression. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
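The syntax Robert describes can be sketched as follows; this is an illustrative example of the FOREACH ... IN ARRAY form under discussion (function and variable names are invented):

```sql
-- Sketch of the proposed FOREACH ... IN ARRAY loop in PL/pgSQL.
CREATE OR REPLACE FUNCTION sum_elements(arr int[]) RETURNS int AS $$
DECLARE
    total int := 0;
    x     int;
BEGIN
    FOREACH x IN ARRAY arr LOOP
        total := total + x;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

-- SELECT sum_elements(ARRAY[1, 2, 3]);  -- 6
```

The required ARRAY keyword is the escape hatch: a later FOREACH variant can put a different keyword after IN without any grammar ambiguity.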
Re: [HACKERS] Sync Rep for 2011CF1
On Tue, 2011-02-08 at 13:53 -0500, Robert Haas wrote: > That having been said, there is at least one part of this patch which > looks to be in pretty good shape and seems independently useful > regardless of what happens to the rest of it, and that is the code > that sends replies from the standby back to the primary. This allows > pg_stat_replication to display the write/flush/apply log positions on > the standby next to the sent position on the primary, which as far as > I am concerned is pure gold. Simon had this set up to happen only > when synchronous replication or XID feedback in use, but I think > people are going to want it even with plain old asynchronous > replication, because it provides a FAR easier way to monitor standby > lag than anything we have today. I've extracted this portion of the > patch, cleaned it up a bit, written docs, and attached it here. Score! +1 JD -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
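The monitoring gain Robert describes is that standby progress becomes visible from the primary in a single query; with the 9.1-era column names it would look roughly like this (a sketch, assuming the view as shaped by the extracted patch):

```sql
-- Run on the primary: sent_location is what the master has shipped;
-- the write/flush/replay columns come from each standby's replies,
-- so the gaps between them show the replication lag.
SELECT application_name, state,
       sent_location, write_location,
       flush_location, replay_location
FROM pg_stat_replication;
```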
Re: [HACKERS] updated patch for foreach stmt
Greetings, * Pavel Stehule (pavel.steh...@gmail.com) wrote: > I resend a patch with last update of this patch Alright, so, like I said, I really like this feature and would like to see it included. To that end, I've done perhaps a bit more than a review of the patch. Pavel, if you could go over the changes I've made and review them and let me know if you see any problems, I'd appreciate it. I've tried to get it ready for a committer as much as I can without being one. :) I moved the array iteration over into arrayfuncs.c, cleaned it up quite a bit, cleaned up the pl/pgsql foreach function, improved the PL/PgSQL documentation to understand FOREACH as another top-level command, added comments all over the place, etc. Passes all regressions too. commit 19deaf69a4dabfa4a223a6dcd36570866ad0bd3c Author: Stephen Frost Date: Tue Feb 8 15:15:48 2011 -0500 PL/PgSQL FOREACH cleanup Define and rename element OID to be more consistant, ensure that the right name gets returned in error messages, and fix regression output to match new error message (grammar cleanup). commit f88fd2ab5419f9a2784677038b3fb01053c69163 Merge: f191af1 8c6e3ad Author: Stephen Frost Date: Tue Feb 8 14:28:18 2011 -0500 Merge branch 'master' of git://git.postgresql.org/git/postgresql into plpgsql_foreach commit f191af16f9d3e5ae0072e61c1b58713040cc8d64 Author: Stephen Frost Date: Tue Feb 8 14:27:05 2011 -0500 PL/PgSQL FOREACH Minor Whitespace Cleanup commit 612cf5485f202a49aec70cf32f74d19d0d130b6b Author: Stephen Frost Date: Tue Feb 8 14:06:06 2011 -0500 Improving FOREACH, code and documentation This patch moves and reworks much of the array iteration code that FOREACH had been implemented with to be part of arrayfuncs.c and exported through utils/array.h. It also cleans up the error handling and set up pieces of the FOREACH handling in pl_exec.c Lastly, the documentation and comments are updated and improved. 
commit 89058b79e43311e8f37af16c3fc17b622dc97578 Author: Stephen Frost Date: Sun Feb 6 14:14:04 2011 -0500 Add FOREACH top-level PL/PgSQL command This patch adds a new top-level PL/PgSQL command called FOREACH which is intended to be for iterating over multi-value variables. This also includes the first FOREACH type, an ARRAY iteration capability. Patch by Pavel Stehule. Thanks, Stephen *** a/doc/src/sgml/plpgsql.sgml --- b/doc/src/sgml/plpgsql.sgml *** *** 300,310 $$ LANGUAGE plpgsql; All variables used in a block must be declared in the declarations section of the block. ! (The only exceptions are that the loop variable of a FOR loop ! iterating over a range of integer values is automatically declared as an ! integer variable, and likewise the loop variable of a FOR loop ! iterating over a cursor's result is automatically declared as a ! record variable.) --- 300,308 All variables used in a block must be declared in the declarations section of the block. ! (The only exceptions are the loop variables of FOR and ! FOREACH loops, which are automatically declared as the ! appropriate variable type to match what the loop is over.) *** *** 1359,1375 GET DIAGNOSTICS integer_var = ROW_COUNT; ! A FOR statement sets FOUND true ! if it iterates one or more times, else false. This applies to ! all four variants of the FOR statement (integer ! FOR loops, record-set FOR loops, ! dynamic record-set FOR loops, and cursor ! FOR loops). ! FOUND is set this way when the ! FOR loop exits; inside the execution of the loop, ! FOUND is not modified by the ! FOR statement, although it might be changed by the ! execution of other statements within the loop body. --- 1357,1375 ! A FOR or FOREACH statement sets ! FOUND to true if it iterates one or more times, ! else to false. This applies to all four variants of the ! FOR statement (integer FOR loops, record-set ! FOR loops, dynamic record-set FOR loops, and ! cursor FOR loops) and all variants of the ! 
FOREACH statement (currently only ARRAY ! FOREACH loops). FOUND is set this ! way when the FOR or FOREACH loop exits; ! inside the execution of the loop, FOUND is not ! modified by the FOR or FOREACH statement, ! although it might be changed by the execution of other statements ! within the loop body. *** *** 1910,1918 END CASE; With the LOOP, E
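[For readers following the thread, here is a minimal sketch of the FOREACH ... IN ARRAY form the patch adds, per the syntax discussed above; the function name and body are made up for illustration and are not part of the patch:]

```sql
-- Illustrative only: a simple use of the FOREACH ... IN ARRAY loop
-- added by this patch. The function is hypothetical.
CREATE OR REPLACE FUNCTION sum_elements(arr int[]) RETURNS int AS $$
DECLARE
    x int;
    total int := 0;
BEGIN
    -- FOREACH declares and assigns x for each array element in turn
    FOREACH x IN ARRAY arr
    LOOP
        total := total + x;
    END LOOP;
    RETURN total;
END;
$$ LANGUAGE plpgsql;

SELECT sum_elements(ARRAY[1, 2, 3]);  -- 6
```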
Re: [HACKERS] postponing some large patches to 9.2
pg...@j-davis.com (Jeff Davis) writes: > On Tue, 2011-02-08 at 06:57 -0500, Stephen Frost wrote: >> * Robert Haas (robertmh...@gmail.com) wrote: >> > - Range Types. This is a large patch which was submitted for the >> > first time to the last CommitFest of the cycle, and the first version >> > that had no open TODO items was posted yesterday, three-quarters of >> > the way through that last CommitFest. Some good review has been done. >> > While more is probably needed, I think we should feel good about >> > what's been accomplished and mark this one Returned with Feedback. >> >> I don't agree w/ punting Range Types. Range Types were discussed as far >> back as the 2010 developer meeting, were discussed quite a bit again >> starting in October and throughout the fall, and Jeff has regularly >> been posting updates to it. Given how thorough Jeff is, my feeling is >> that this patch is more than ready for beta. > > I appreciate the sentiment, but in addition to some cleanup, any patch > like this at least requires some discussion. It's a language change > we'll be supporting for a long time. > > At minimum, we're a couple hundred emails shy of a real consensus on the > naming ;) It's more than a bit sad... The RangeType change has the massive merit of enabling some substantial development changes, where we can get rid of whole classes of comparison clauses, and hopefully whole classes of range errors. That was my favorite would-be feature for 9.1. It'll take some time to get code changes into systems to use this; the sooner the feature's in a deployable version of Postgres, the earlier that kind of thing may start. -- (format nil "~S@~S" "cbbrowne" "gmail.com") http://www3.sympatico.ca/cbbrowne/internet.html Colorless green ideas sleep furiously. -- Noam Chomsky
[HACKERS] Sync Rep plan from here
From here, I consider myself free and clear to work on Sync Rep as my final contribution to this Commit Fest. There's still patches I'm interested in, but priority and time means I won't be reviewing anything further. I'm mostly unreachable for next few days, but expect to be working on Sync rep as much of next week as it takes. -- Simon Riggs http://www.2ndQuadrant.com/books/ PostgreSQL Development, 24x7 Support, Training and Services
Re: [HACKERS] Named restore points
On Tue, 2011-02-08 at 14:07 -0300, Euler Taveira de Oliveira wrote: > Because named restore point is a noop xlog record; besides, transaction and > time involves xlog records that contain data. Committed. Thanks for the patch and the review. I changed the patch to require wal_level > minimal, rather than archive_mode = on. -- Simon Riggs http://www.2ndQuadrant.com/books/ PostgreSQL Development, 24x7 Support, Training and Services
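[For context, a sketch of how the committed feature is used, based on the names discussed in this thread; the restore point name below is made up:]

```sql
-- On the primary (requires wal_level > minimal, per the commit above):
-- record a named point in the WAL stream before a risky operation.
SELECT pg_create_restore_point('before_big_migration');
```

A server doing point-in-time recovery can then target that name rather than a transaction ID or timestamp, e.g. `recovery_target_name = 'before_big_migration'` in its recovery configuration.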
Re: [HACKERS] Sync Rep for 2011CF1
On Tue, Feb 8, 2011 at 2:34 PM, Magnus Hagander wrote: > I would usually not worry about the bandwidth, really, I'd be more > worried about potentially increasing latency somewhere. The time to read and write the socket doesn't seem like it should be significant, unless the network buffers fill up or I'm missing something. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] postponing some large patches to 9.2
On Tue, Feb 8, 2011 at 2:13 PM, David Fetter wrote: >> I agree that we have some problems in that area - particularly with >> writeable CTEs - but prolonging the schedule isn't going to fix that >> problem. > > What is? I think the best solution would probably be to find corporate sponsors for more PostgreSQL committers, or other senior community members who might become committers if they had more time to spend on it. The problem is that there are basically two people who understand the executor well enough to commit that patch: Tom, and Heikki. And Tom doesn't really like the feature (for reasons that are totally baffling to me, BTW). If I had three weeks to spend on that patch, it would be in by now, and I'd know the executor a lot better too, which would make things easier for the next guy who wants to do something of that type. But I (with some exceptions) don't get paid to commit other people's patches, so that limits the amount of time I can spend on it to (a) time when I'm not working on something that's a higher priority for my employer and (b) evenings and weekends. I invest as much of that time as I can in community work (which is why I have "nerd" tattooed on my forehead) but there's a limited amount of it, and I tend to invest it in patches where I'm already somewhat familiar with the relevant portion of the code, because that lets me get through more patches, if not always the best patches. > This development cycle was a change from other development cycles in > that it "began" before development had closed in the previous cycle. > I will not take a position at the moment as to the wisdom of making > this change, but it's been clear from feedback from lots of > developers, even ones who'd be expected to be "in the know," that this > vital piece of information did not get out in time. The schedule was posted publicly, so I'm not sure how that communication gap happened. 
> Let's assume for the moment that we're going with overlapping > development cycles into the future. I'd submit that given the > propagation delay of this change, the schedule for the first iteration > of this never was reasonable, and "slipping" two months is a small > price to pay for the huge flock of things we're epsilon short of > getting. I think you're being considerably over-optimistic about how close the remaining patches are to commit, and even more optimistic about the effects of another two months on the quality of the release. Here's my prediction: if we slip the schedule for two months, all the pressure that now exists to get things wrapped up will disappear, and we'll be back in approximately the same place two months from now. A few more things will have been committed, but at least as many new patches will materialize, so that the remaining volume of work is the same as before, or even larger. We'll still end up punting the same number of patches, but in the meantime we'll have managed to rush a few things in that will destabilize the tree and require months of extra beta time during which no new work will be able to be committed. The point is this: We're not going to punt major patches because they are close to commit but yet some arbitrary deadline has expired. We're going to punt them because they're NOT close to commit and a long-agreed deadline has expired. I don't think I'd get much support if I proposed punting everything that isn't yet committed at 2011-02-15 00:00:00, but it's just a mistake to think that if we only wait another day or another week, we'll get it all done. We've tried that before and it usually doesn't work out. The limiting factor isn't primarily the amount of time that it takes to write the code; it's the amount of motivation to get it done. 
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] MVCC doc typo fix
On 08.02.2011 20:40, Kevin Grittner wrote: attached Fixed. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] REVIEW Range Types
On Sun, February 6, 2011 07:41, Jeff Davis wrote: > New patch. All known TODO items are closed, although I should do a > cleanup pass over the code and docs. > > Fixed in this patch: > > * Many documentation improvements > * Added INT8RANGE > * Renamed PERIOD[TZ] -> TS[TZ]RANGE > * Renamed INTRANGE -> INT4RANGE > * Improved parser's handling of whitespace and quotes > * Support for PL/pgSQL functions with ANYRANGE arguments/returns > * Make "subtype_float" function no longer a requirement for GiST, > but it should still be supplied for the penalty function to be > useful. > I'm afraid even the review is WIP, but I thought I'd post what I have. Context: At the moment we use postbio (see below) range functionality, to search ranges and overlap in large DNA databases ('genomics'). We would be happy if a core data type could replace that. It does look like the present patch is ready to do those same tasks, of which the main one for us is gist-indexed ranges. We also use btree_gist with that, so to include that in core would make sense in this regard. test config: ./configure \ --prefix=/var/data1/pg_stuff/pg_installations/pgsql.range_types \ --with-pgport=6563 \ --enable-depend \ --enable-cassert \ --enable-debug \ --with-perl \ --with-openssl \ --with-libxml \ --enable-dtrace compile, make, check, install all OK. -- Submission review: -- * Is the patch in context diff format? Yes. * Does it apply cleanly to the current git master? It applied cleanly. (after the large serialisation commit 2011.02.08 it will need some changes/rebase) * Does it include reasonable tests, necessary doc patches, etc? Yes, there are many tests; the documentation is good. Small improvements below. - Usability review - Read what the patch is supposed to do, and consider: * Does the patch actually implement that? Yes. * Do we want that? Yes. * Do we already have it? contrib/seg has some similar functionalities: "seg is a data type for representing line segments, or floating point intervals". 
And on pgFoundry there is a seg spin-off "postbio", tailored to genomic data (with gist-indexing). (see postbio manual: http://postbio.projects.postgresql.org/ ) * Does it follow SQL spec, or the community-agreed behavior? I don't know - I couldn't find much in the SQL-spec on a range datatype. The ranges behaviour has been discussed on -hackers. * Does it include pg_dump support (if applicable)? dump/restore were fine in the handful of range-tables which I moved between machines. * Are there dangers? Not that I could find. * Have all the bases been covered? I think the functionality looks fairly complete. - Feature test: - * Does the feature work as advertised? The patch seems very stable. My focus has been mainly on the intranges. I tested by parsing documentation- and regression examples, and parametrising them in a perl harness, to generate many thousands of range combinations. I found only a single small problem (posted earlier - Jeff Davis solved it already apparently). see: http://archives.postgresql.org/pgsql-hackers/2011-02/msg00387.php * Are there corner cases the author has failed to consider? No. * Are there any assertion failures or crashes? I haven't seen a single one. -- Documentation: -- Section 9.18: table 9-42. range functions: The following functions are missing (I encountered them in the regression tests): contained_by() range_eq() section 'Constructing Ranges' (8.16.6): In the code example, remove the following line: "-- the int4range result will appear in the canonical format" it doesn't make sense there. At this place "canonical format" has not been discussed; maybe it is not even discussed anywhere. also (same place): 'where "_" is used to mean "exclusive" and "" is used to mean "inclusive".' should be: 'where "_" is used to mean "exclusive" and "i" is used to mean "inclusive".' 
And btw: it should mention here that the range*inf* functions use an underscore to separate 'range' from the rest of the function name, e.g.: range_linfi_() => infinite lower bound, inclusive upper bound I still want to do Performance review and Coding review. FWIW, I would like to repeat that my impression is that the patch is very stable, especially with regard to the intranges (tested extensively). regards, Erik Rijkers
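[To make the genomics use case from this review concrete, here is the general shape of a gist-indexed range search. The table and column names are made up, and the type/constructor names follow the patch as reviewed (INT4RANGE etc.), which were still subject to change in the naming discussion:]

```sql
-- Illustrative sketch: overlap search on a gist-indexed int range column,
-- along the lines of the DNA-feature queries described above.
CREATE TABLE gene_features (
    id   serial PRIMARY KEY,
    span int4range            -- e.g. base-pair positions covered by a feature
);

CREATE INDEX gene_features_span_idx ON gene_features USING gist (span);

-- Find features overlapping positions [1000, 2000):
SELECT id FROM gene_features WHERE span && int4range(1000, 2000);
```

The point of the review's "gist-indexed ranges" requirement is that the `&&` (overlaps) search above can use the index rather than scanning every row, which is what postbio currently provides for this workload.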
Re: [HACKERS] Sync Rep for 2011CF1
On Tue, Feb 8, 2011 at 19:53, Robert Haas wrote: > On Mon, Feb 7, 2011 at 1:20 PM, Robert Haas wrote: >> On Sat, Jan 15, 2011 at 4:40 PM, Simon Riggs wrote: >>> Here's the latest patch for sync rep. >> >> Here is a rebased version of this patch which applies to head of the >> master branch. I haven't tested it yet beyond making sure that it >> compiles and passes the regression tests -- but this fixes the bitrot. > > As I mentioned yesterday that I would do, I spent some time working on > this. I think that there are somewhere between three and six > independently useful features in this patch, plus a few random changes > to the documentation that I'm not sure whether want or not (e.g. > replacing master by primary in a few places, or the other way around). > > One problem with the core synchronous replication technology is that > walreceiver cannot both receive WAL and write WAL at the same time. > It switches back and forth between reading WAL from the network socket > and flushing it to disk. The impact of that is somewhat mitigated in > the current patch because it only implements the "fsync" level of > replication, and chances are that the network read time is small > compared to the fsync time. But it would certainly suck for the > "receive" level we've talked about having in the past, because after > receiving each batch of WAL, the WAL receiver wouldn't be able to send > any more acknowledgments until the fsync completed, and that's bound > to be slow. I'm not really sure how bad it will be in "fsync" mode; > it may be tolerable, but as Simon noted in a comment, in the long run > it'd certainly be nicer to have the WAL writer process running during > recovery. > > As a general comment on the quality of the code, I think that the > overall logic is probably sound, but there are an awful lot of > debugging leftovers and inconsistencies between various parts of the > patch. 
For example, when I initially tested it, *asynchronous* > replication kept breaking between the master and the standby, and I > couldn't figure out why. I finally realized that there was a ten > second pause that had been inserted into the WAL receiver loop as a > debugging tool which was allowing the standby to get far enough behind > that the master was able to recycle WAL segments the standby still > needed. Under ordinary circumstances, I would say that a patch like > this was not mature enough to submit for review, let alone commit. > For that reason, I am pretty doubtful about the chances of getting > this finished for 9.1 without some substantial prolongation of the > schedule. > > That having been said, there is at least one part of this patch which > looks to be in pretty good shape and seems independently useful > regardless of what happens to the rest of it, and that is the code > that sends replies from the standby back to the primary. This allows > pg_stat_replication to display the write/flush/apply log positions on > the standby next to the sent position on the primary, which as far as > I am concerned is pure gold. Simon had this set up to happen only > when synchronous replication or XID feedback was in use, but I think > people are going to want it even with plain old asynchronous > replication, because it provides a FAR easier way to monitor standby > lag than anything we have today. I've extracted this portion of the > patch, cleaned it up a bit, written docs, and attached it here. +1. I haven't actually looked at the patch, but having this ability would be *great*. I also agree with the general idea of trying to break it into smaller parts - even if they only provide small parts each on its own. That also makes it easier to get an overview of exactly how much is left, to see where to focus. > The only real complaint I can imagine about offering this > functionality all the time is that it uses extra bandwidth. 
I'm > inclined to think that the ability to shut it off completely is > sufficient answer to that complaint. Yes, agreed. I would usually not worry about the bandwidth, really, I'd be more worried about potentially increasing latency somewhere. > The ones with little rocketships on them? -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
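[A sketch of the lag monitoring this feedback feature enables, as described in the thread. The column names below follow the patch discussion (sent position on the primary next to write/flush/apply positions reported back by the standby) and may differ in whatever is finally committed:]

```sql
-- Illustrative: run on the primary to see, per connected standby, how far
-- the standby's reported positions trail the position the primary has sent.
SELECT application_name,
       sent_location,     -- last WAL position sent by the primary
       write_location,    -- last position the standby reports as written
       flush_location,    -- last position the standby reports as fsynced
       replay_location    -- last position the standby reports as applied
FROM pg_stat_replication;
```

Comparing `sent_location` against the three reported-back positions is the "FAR easier way to monitor standby lag" Robert describes: no need to correlate log files or query both servers.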
Re: [HACKERS] [pgsql-general 2011-1-21:] Are there any projects interested in object functionality? (+ rule bases)
On lör, 2011-02-05 at 12:26 +0100, Nick Rudnick wrote: > o extensions of PostgreSQL to support such a kind of usage have to > be expected to be rejected from integration to the code > base core -- i.e., if they are done, students have to be told «you > can't expect this to become a part of PostgreSQL» There are very varied interests in this community. But you have to keep in mind that PostgreSQL is a production software package, not a research platform. This doesn't mean that we discourage research with the PostgreSQL code base. But that should be done in a fork. Code aimed for inclusion in a proper PostgreSQL release should have demonstrable real life use and should be developed according to certain standards and in close cooperation with the community. If a research project can deal with that, great, but typically, research projects are there to determine whether something could have real-life value in the first place, and development follows different standards of quality.
Re: [HACKERS] postponing some large patches to 9.2
On Tue, Feb 8, 2011 at 2:02 PM, Jeff Davis wrote: > On Mon, 2011-02-07 at 22:37 -0500, Robert Haas wrote: >> - Range Types. This is a large patch which was submitted for the >> first time to the last CommitFest of the cycle, and the first version >> that had no open TODO items was posted yesterday, three-quarters of >> the way through that last CommitFest. Some good review has been done. >> While more is probably needed, I think we should feel good about >> what's been accomplished and mark this one Returned with Feedback. > > I submitted this clearly marked WIP, so I expected that it would likely > be pushed to 9.2. > > However, I don't feel like I have the kind of feedback that will help me > get it committed next commitfest. I did get some review, and that was > helpful, but it was mostly on isolated details. > > The patch is a million little decisions: names, catalog structure, > interface, representation, general usability, grammar, functionality, > etc. Without some checkpoint, the chances that everyone agrees with all > of these decisions at the beginning of the next commitfest is zero. > > Is the commitfest not the right place to do this? If not, then when? That's a fair question, and I do understand the difficulty. I think a CommitFest is the right place to do that. On the other hand, as I'm sure you realize, I'm not keen to hold up 9.1beta for a feature that isn't going to be committed until 9.2. In the 9.0 and 9.1 cycles, it was my observation that patches which were submitted to the first or second CommitFest got a lot of this kind of review and went on to be committed without a problem, usually in the next CommitFest. Absorbing patches at the end of the cycle is a lot harder, because everyone who has been thinking about doing something for the release wakes up and submits it, often half-finished, often at the very last minute. 
Furthermore, it takes more time to review them even if they ARE code-complete, because it takes more time to do a real review-to-commit than it does to tell someone "call it a whisker instead of a potato and delete all the comments about sweet potatoes". I really don't think that getting this into 9.2 is going to be a problem. Given the level of interest in this patch, there will be no problem at all finding someone to do a detailed review when we're not quite so pressed for time, and I'm willing to put some time into it myself when I'm not quite so pressed for time. Although it doesn't feel like it at the moment, we have actually made great strides in absorbing large patches. We've already absorbed true serializability and SE-Linux integration (cut down basic version) this release cycle, and it looks like we're going to absorb extensions and hopefully SQL/MED also. Those are all big, big pieces of work *by non-committers*, and the fact that they've sailed in with hardly a cross word is, to me, a very good sign for the future of our development community. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] SSI patch version 14
Heikki Linnakangas wrote: > Taking RWConflictPool into account in PredicateLockShmemSize() fixes the > underestimate, but makes the overestimate correspondingly larger. I've > never compared the actual and estimated shmem sizes of other parts of > the backend, so I'm not sure how large discrepancies we usually have, > but that seems quite big. Looking into it... -Kevin
Re: [HACKERS] postponing some large patches to 9.2
On Tue, 2011-02-08 at 11:56 -0500, Robert Haas wrote: > It's a 5400 line patch that wasn't completed until the middle of the > current CommitFest. Nobody has ever submitted a major feature patch > of that size that got done in a single CommitFest, to my recollection, > or even half that size. My concern is that, aside from code, my patch didn't make much progress this commitfest. And the code progress was mostly me working through my own TODO list on things like GiST support -- which didn't represent any real decisions, it was mostly just a matter of code. Regards, Jeff Davis
Re: [HACKERS] postponing some large patches to 9.2
On Tue, Feb 08, 2011 at 02:04:04PM -0500, Robert Haas wrote: > On Tue, Feb 8, 2011 at 1:02 PM, David Fetter wrote: > > Given how things worked, i.e. that people were not clear that 9.1 > > development had actually started, etc., I am again proposing that we > > have one more CF starting March 15 to get this all cleaned up. Yes, I > > know that wasn't the plan, but I also know that we're really, really > > close on a whole bunch of things, some of which have been in the > > offing for years at this point, and we risk giving people the > > impression, if they don't already have it, that if they're not in the > > "inner circle," their patches get lower priority no matter what their > > merits. > > I agree that we have some problems in that area - particularly with > writeable CTEs - but prolonging the schedule isn't going to fix that > problem. What is? > I don't think that's entirely fair to people who planned their work > over the last eight months so that their patches would be committed > before the deadline. I both worked hard to make sure the stuff I > cared most about got committed in time, and passed over projects that > I could not get done in time, even though I *could* have gotten them > done given another two months. I realize there are all sorts of good > reasons why people didn't get things done on time, like, say, the need > to earn a living - but having a time frame and sticking with it is > ultimately better for the project, at least in my opinion. If we have > to slip the end of the CommitFest a little to get a few extra things > in, OK, but adding another two months to the development schedule > that's been published for most of a year is a pretty drastic change. This development cycle was a change from other development cycles in that it "began" before development had closed in the previous cycle. 
I will not take a position at the moment as to the wisdom of making this change, but it's been clear from feedback from lots of developers, even ones who'd be expected to be "in the know," that this vital piece of information did not get out in time. Let's assume for the moment that we're going with overlapping development cycles into the future. I'd submit that given the propagation delay of this change, the schedule for the first iteration of this never was reasonable, and "slipping" two months is a small price to pay for the huge flock of things we're epsilon short of getting. Cheers, David. -- David Fetter http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [HACKERS] postponing some large patches to 9.2
* Jeff Davis (pg...@j-davis.com) wrote: > I appreciate the sentiment, but in addition to some cleanup, any patch > like this at least requires some discussion. It's a language change > we'll be supporting for a long time. My feeling was that we have had at least some of that discussion this past fall... It's not like you went out and developed this completely outside of any community contact. Perhaps it's just not as controversial as you're expecting. :) > At minimum, we're a couple hundred emails shy of a real consensus on the > naming ;) *smirk. Thanks, Stephen
Re: [HACKERS] postponing some large patches to 9.2
On Tue, 2011-02-08 at 06:57 -0500, Stephen Frost wrote: > * Robert Haas (robertmh...@gmail.com) wrote: > > - Range Types. This is a large patch which was submitted for the > > first time to the last CommitFest of the cycle, and the first version > > that had no open TODO items was posted yesterday, three-quarters of > > the way through that last CommitFest. Some good review has been done. > > While more is probably needed, I think we should feel good about > > what's been accomplished and mark this one Returned with Feedback. > > I don't agree w/ punting Range Types. Range Types were discussed as far > back as the 2010 developer meeting, were discussed quite a bit again > starting in October and throughout the fall, and Jeff has regularly > been posting updates to it. Given how thorough Jeff is, my feeling is > that this patch is more than ready for beta. I appreciate the sentiment, but in addition to some cleanup, any patch like this at least requires some discussion. It's a language change we'll be supporting for a long time. At minimum, we're a couple hundred emails shy of a real consensus on the naming ;) Regards, Jeff Davis
Re: [HACKERS] postponing some large patches to 9.2
On Tue, Feb 8, 2011 at 1:02 PM, David Fetter wrote: > Given how things worked, i.e. that people were not clear that 9.1 > development had actually started, etc., I am again proposing that we > have one more CF starting March 15 to get this all cleaned up. Yes, I > know that wasn't the plan, but I also know that we're really, really > close on a whole bunch of things, some of which have been in the > offing for years at this point, and we risk giving people the > impression, if they don't already have it, that if they're not in the > "inner circle," their patches get lower priority no matter what their > merits. I agree that we have some problems in that area - particularly with writeable CTEs - but prolonging the schedule isn't going to fix that problem. I don't think that's entirely fair to people who planned their work over the last eight months so that their patches would be committed before the deadline. I both worked hard to make sure the stuff I cared most about got committed in time, and passed over projects that I could not get done in time, even though I *could* have gotten them done given another two months. I realize there are all sorts of good reasons why people didn't get things done on time, like, say, the need to earn a living - but having a time frame and sticking with it is ultimately better for the project, at least in my opinion. If we have to slip the end of the CommitFest a little to get a few extra things in, OK, but adding another two months to the development schedule that's been published for most of a year is a pretty drastic change. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] postponing some large patches to 9.2
On Mon, 2011-02-07 at 22:37 -0500, Robert Haas wrote: > - Range Types. This is a large patch which was submitted for the > first time to the last CommitFest of the cycle, and the first version > that had no open TODO items was posted yesterday, three-quarters of > the way through that last CommitFest. Some good review has been done. > While more is probably needed, I think we should feel good about > what's been accomplished and mark this one Returned with Feedback. I submitted this clearly marked WIP, so I expected that it would likely be pushed to 9.2. However, I don't feel like I have the kind of feedback that will help me get it committed next commitfest. I did get some review, and that was helpful, but it was mostly on isolated details. The patch is a million little decisions: names, catalog structure, interface, representation, general usability, grammar, functionality, etc. Without some checkpoint, the chances that everyone agrees with all of these decisions at the beginning of the next commitfest are zero. Is the commitfest not the right place to do this? If not, then when? Regards, Jeff Davis -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI patch version 14
On 08.02.2011 18:14, Kevin Grittner wrote: I wrote: The multiplier of 10 PredXactList structures per connection is kind of arbitrary. It affects the point at which information is pushed to the lossy summary, so any number from 2 up will work correctly; it's a matter of performance and false positive rate. We might want to put that on a GUC and default it to something lower. If the consensus is that we want to add this knob, I can code it up today. If we default it to something low, we can knock off a large part of the 2MB increase in shared memory used by SSI in the default configuration. For those not using SERIALIZABLE transactions the only impact is that less shared memory will be reserved for something they're not using. For those who try SERIALIZABLE transactions, the smaller the number, the sooner performance will start to drop off under load -- especially in the face of a long-running READ WRITE transaction. Since it determines shared memory allocation, it would have to be a restart-required GUC. I do have some concern that if this defaults to too low a number, those who try SSI without bumping it and restarting the postmaster will not like the performance under load very much. SSI performance would not be affected by a low setting under light load when there isn't a long-running READ WRITE transaction. Hmm, comparing InitPredicateLocks() and PredicateLockShmemSize(), it looks like RWConflictPool is missing altogether from the calculations in PredicateLockShmemSize(). I added an elog to InitPredicateLocks() and PredicateLockShmemSize(), to print the actual and estimated size. 
Here's what I got with max_predicate_locks_per_transaction=10 and max_connections=100: LOG: shmemsize 635467 LOG: actual 1194392 WARNING: out of shared memory FATAL: not enough shared memory for data structure "shmInvalBuffer" (67224 bytes requested) On the other hand, when I bumped max_predicate_locks_per_transaction to 100, I got: LOG: shmemsize 3153112 LOG: actual 2339864 Which is a pretty big overestimate, percentage-wise. Taking RWConflictPool into account in PredicateLockShmemSize() fixes the underestimate, but makes the overestimate correspondingly larger. I've never compared the actual and estimated shmem sizes of other parts of the backend, so I'm not sure how large discrepancies we usually have, but that seems quite big. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
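The mismatch Heikki reports — PredicateLockShmemSize() omitting the RWConflictPool that InitPredicateLocks() actually allocates — is easy to model. The sketch below is a toy Python model, not PostgreSQL code: all per-entry sizes, the pool multiplier, and the slop factor are made-up numbers for illustration. It only demonstrates the shape of the problem: leaving one pool out of the estimate yields an underestimate, while counting every pool with conservative padding yields a safe overestimate.

```python
# Toy model of shared-memory sizing; all constants are hypothetical.
PRED_XACT_BYTES = 88      # stand-in for a per-transaction struct size
RWCONFLICT_BYTES = 40     # stand-in for a per-conflict struct size
RWCONFLICTS_PER_XACT = 5  # stand-in for the conflict-pool multiplier

def actual_alloc(max_xacts):
    """What the init code really allocates: both pools."""
    return (max_xacts * PRED_XACT_BYTES
            + max_xacts * RWCONFLICTS_PER_XACT * RWCONFLICT_BYTES)

def estimate_buggy(max_xacts):
    """Estimate that forgets the RWConflict pool entirely."""
    return max_xacts * PRED_XACT_BYTES

def estimate_fixed(max_xacts, slop=0.10):
    """Estimate counting both pools, padded with safety slop;
    size estimates are deliberately rounded up."""
    return int(actual_alloc(max_xacts) * (1 + slop))

n = 1000
assert estimate_buggy(n) < actual_alloc(n)   # underestimate -> startup failure
assert estimate_fixed(n) >= actual_alloc(n)  # overestimate is safe, just wasteful
```

The underestimate is the dangerous direction: total shared memory is sized from the sum of the estimates, so a later allocation (shmInvalBuffer in Heikki's log) fails at startup. An overestimate merely reserves memory that is never used, which is why fixing the formula makes the percentage gap larger in the other direction.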
Re: [HACKERS] Sync Rep for 2011CF1
On Mon, Feb 7, 2011 at 1:20 PM, Robert Haas wrote: > On Sat, Jan 15, 2011 at 4:40 PM, Simon Riggs wrote: >> Here's the latest patch for sync rep. > > Here is a rebased version of this patch which applies to head of the > master branch. I haven't tested it yet beyond making sure that it > compiles and passes the regression tests -- but this fixes the bitrot. As I mentioned yesterday that I would do, I spent some time working on this. I think that there are somewhere between three and six independently useful features in this patch, plus a few random changes to the documentation that I'm not sure whether we want or not (e.g. replacing master by primary in a few places, or the other way around). One problem with the core synchronous replication technology is that walreceiver cannot both receive WAL and write WAL at the same time. It switches back and forth between reading WAL from the network socket and flushing it to disk. The impact of that is somewhat mitigated in the current patch because it only implements the "fsync" level of replication, and chances are that the network read time is small compared to the fsync time. But it would certainly suck for the "receive" level we've talked about having in the past, because after receiving each batch of WAL, the WAL receiver wouldn't be able to send any more acknowledgments until the fsync completed, and that's bound to be slow. I'm not really sure how bad it will be in "fsync" mode; it may be tolerable, but as Simon noted in a comment, in the long run it'd certainly be nicer to have the WAL writer process running during recovery. As a general comment on the quality of the code, I think that the overall logic is probably sound, but there are an awful lot of debugging leftovers and inconsistencies between various parts of the patch. For example, when I initially tested it, *asynchronous* replication kept breaking between the master and the standby, and I couldn't figure out why.
I finally realized that there was a ten second pause that had been inserted into the WAL receiver loop as a debugging tool, which was allowing the standby to get far enough behind that the master was able to recycle WAL segments the standby still needed. Under ordinary circumstances, I would say that a patch like this was not mature enough to submit for review, let alone commit. For that reason, I am pretty doubtful about the chances of getting this finished for 9.1 without some substantial prolongation of the schedule. That having been said, there is at least one part of this patch which looks to be in pretty good shape and seems independently useful regardless of what happens to the rest of it, and that is the code that sends replies from the standby back to the primary. This allows pg_stat_replication to display the write/flush/apply log positions on the standby next to the sent position on the primary, which as far as I am concerned is pure gold. Simon had this set up to happen only when synchronous replication or XID feedback was in use, but I think people are going to want it even with plain old asynchronous replication, because it provides a FAR easier way to monitor standby lag than anything we have today. I've extracted this portion of the patch, cleaned it up a bit, written docs, and attached it here. I wasn't too sure how to control the timing of the replies. It's worth noting that you have to send them pretty frequently for the distinction between xlog written and xlog flushed to have any value. What I've done here is made it so that every time we read all available data on the socket, we send a reply. After flushing, we send another reply. And then just for the heck of it we send a reply at least every 10 seconds (configurable), which causes the last-known-apply position to eventually get updated on the master. This means the apply position can lag reality by a bit.
Simon's version adds a latch, so that the startup process can poke the WAL receiver to send a reply when the apply position moves. But this is substantially more complex and I'm not sure it's worth it. If we were implementing the "apply" level of synchronous replication, we'd clearly need that for performance not to stink. But since the patch is only implementing "fsync" anyway, it doesn't seem necessary for now. The only real complaint I can imagine about offering this functionality all the time is that it uses extra bandwidth. I'm inclined to think that the ability to shut it off completely is a sufficient answer to that complaint. Comments? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company wal-receiver-replies.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
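The reply-timing policy Robert describes above — send a reply after draining the socket, send another after each flush, and in any case send one at least every N seconds so the master's view of the apply position keeps moving — reduces to a small predicate. The following is only an illustrative sketch in Python (the actual walreceiver code is C, and the function name and parameters here are invented for illustration):

```python
def should_send_reply(drained_socket, just_flushed,
                      now, last_reply_time, max_interval=10.0):
    """Decide whether the standby should send a status reply now.

    drained_socket -- we just consumed all WAL currently available on the socket
    just_flushed   -- we just fsync'd received WAL to disk
    max_interval   -- configurable ceiling on reply silence, in seconds
    """
    if drained_socket or just_flushed:
        return True
    # Periodic reply so the last-known apply position on the master
    # eventually gets updated even when nothing else is happening.
    return now - last_reply_time >= max_interval

# After reading all pending WAL: reply immediately.
assert should_send_reply(True, False, now=3.0, last_reply_time=2.0)
# Quiet, and a reply was sent recently: stay silent.
assert not should_send_reply(False, False, now=5.0, last_reply_time=2.0)
# Quiet for longer than the interval: send the periodic reply.
assert should_send_reply(False, False, now=13.0, last_reply_time=2.0)
```

The last rule is what makes the apply position "lag reality by a bit", as the email notes: between pokes, the master's view can be up to max_interval seconds stale.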
[HACKERS] Scheduled maintenance affecting gitmaster
Hi! The PostgreSQL Infrastructure Team will be performing scheduled maintenance on the server that is hosting gitmaster.postgresql.org this coming Sunday, Feb 13th. While we expect this to only cause a very short downtime for the service, one can never be entirely sure with remote maintenance. For this reason, please avoid using the service during this time. Note that only the git *master* server is affected; the git.postgresql.org anonymous mirror is on another host and will not be affected. We will post a note here when the maintenance has been completed. Date: 2011-02-13 Start: 12:00 UTC End: ~13:00 UTC, but may drag out Affected services: gitmaster.postgresql.org and misc. internal services -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SQL/MED - file_fdw
On Tue, Feb 08, 2011 at 09:25:29PM +0900, Itagaki Takahiro wrote: > Here is a revised patch, that includes jagged CSV support. > A new exported function is named NextCopyFromRawFields. Seems a bit incongruous to handle the OID column in that function. That part fits with the other per-column code in NextCopyFrom(). > On Mon, Feb 7, 2011 at 21:16, Noah Misch wrote: > > Perhaps I'm missing something. The new API does not expose a "processed" > > count > > at all; that count is used for the command tag of a top-level COPY. This > > part > > of the patch is just changing how we structure the code to maintain that > > tally, > > and it has no implications for observers outside copy.c. Right? > > True, but the counter is tightly bound with INSERT-side of COPY FROM. > | copy.c (1978) > | * We count only tuples not suppressed by a BEFORE INSERT trigger; > > I think it is a cleaner way to separate it from CopyState > because it is used as a file reader rather than a writer. > However, if there are objections, I'd revert it. No significant objection. I just wished to be clear on whether it was pure refactoring, or something more. > > ExecEvalExpr(), used to acquire default values, will still use the > > per-tuple context of CopyState->estate. That per-tuple context will never > > get > > reset explicitly, so default value computations leak until EndCopyFrom(). > > Ah, I see. I didn't see the leak because we rarely use per-tuple memory > context in the estate. We just use CurrentMemoryContext in many cases. > But the leak could occur, and the code is misleading. > I moved ResetPerTupleExprContext() into NextCopyFrom(), but > CurrentMemoryContext is still used in it for the result values. The code still violates the contract of ExecEvalExpr() by calling it with CurrentMemoryContext != econtext->ecxt_per_tuple_memory. > Another possible design might be passing EState as an argument of > NextCopyFrom and remove estate from CopyState.
> It seems a much cleaner way in terms of control flow, but it requires more changes in file_fdw. > Comments? Seems sensible and more consistent with the typical structure of executor code. Do we place any constraints on sharing of EState among different layers like this? I could not identify any, offhand. The new API uses EState for two things. First, BeginCopyFrom() passes it to ExecPrepareExpr(). Instead, let's use expression_planner() + ExecInitExpr() and require that we've been called with a memory context of suitable longevity. Things will break anyway if BeginCopyFrom()'s CurrentMemoryContext gets reset too early. This way, we no longer need an EState in BeginCopyFrom(). Second, NextCopyFrom() sends the per-output-tuple ExprContext of the EState to ExecEvalExpr(). It really needs a specific ExprContext, not an EState. A top-level COPY has a bijection between input and output tuples, making the distinction unimportant. GetPerTupleExprContext() is wrong for a file_fdw scan, though. We need the ExprContext of the ForeignScanState or another of equivalent lifecycle. NextCopyFrom() would then require that it be called with CurrentMemoryContext == econtext->ecxt_per_tuple_memory. Sorry, I missed a lot of these memory details on my first couple of reviews. nm -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
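The contract Noah is enforcing — default expressions get evaluated in econtext->ecxt_per_tuple_memory, which the caller resets between rows — can be illustrated with a toy model. The following is illustrative Python, not the C API; every name in it is invented. The point is only that per-row allocations vanish at the next reset, which is exactly why a caller wanting a long-lived value must copy it out (or supply a longer-lived ExprContext):

```python
class MemoryContext:
    """Toy stand-in for a PostgreSQL memory context."""
    def __init__(self):
        self.chunks = []
    def alloc(self, value):
        self.chunks.append(value)   # tracked so reset() can "free" it
        return value
    def reset(self):
        self.chunks.clear()         # everything allocated here goes away at once

def next_copy_from(per_tuple_ctx, file_fields, defaults):
    """Per the contract: default values land in the per-tuple context."""
    row = dict(file_fields)
    for col, default in defaults.items():
        if col not in row:
            row[col] = per_tuple_ctx.alloc(default)
    return row

per_tuple = MemoryContext()
defaults = {"created": "default-expr-result"}

for fields in [{"a": 1}, {"a": 2}]:
    per_tuple.reset()               # caller resets once per row, so nothing leaks
    row = next_copy_from(per_tuple, fields, defaults)
    assert row["created"] == "default-expr-result"

# Only the current row's default is still "allocated"; earlier ones were freed.
assert len(per_tuple.chunks) == 1
```

Without the per-row reset, the chunks list grows by one entry per input row until the end of the copy — the same shape as the leak-until-EndCopyFrom() behavior described in the quoted discussion.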
Re: [HACKERS] Add ENCODING option to COPY
On tis, 2011-02-08 at 10:53 -0500, Tom Lane wrote: > Peter Eisentraut writes: > > On fre, 2011-02-04 at 10:47 -0500, Tom Lane wrote: > >> The reason that we use quotes in CREATE DATABASE is that encoding > >> names aren't assumed to be valid SQL identifiers. If this patch isn't > >> following the CREATE DATABASE precedent, it's the patch that's wrong, > >> not CREATE DATABASE. > > > Since encoding names are built-in and therefore well known, and the > > names have been aligned with the SQL standard names, which are > > identifiers, I don't think this argument is valid (anymore). > > What about "UTF-8"? The canonical name of that is UTF8. But you can quote it if you want to spell it differently. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] postponing some large patches to 9.2
On tis, 2011-02-08 at 00:16 -0500, Steve Singer wrote: > On 11-02-07 10:37 PM, Robert Haas wrote: > > > - The PL/python extravaganza. I'm not really clear where we stand > > with this. There are a lot of patches here. > > > > Some of the patches have been committed a few others are ready (or > almost ready) for a committer. The table function one is the only one > in 'waiting for author' > > 4 of the patches haven't yet received any review. Jan Urbanski has been > pretty good about posting updated patches as the dependent patches get > updated. It would be good if a few people grabbed these. The individual > patches tend to not be that large. I'm available to commit these if someone else can do the initial review. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] MVCC doc typo fix
attached

-Kevin

*** a/doc/src/sgml/mvcc.sgml
--- b/doc/src/sgml/mvcc.sgml
***************
*** 670,676 ****
  ERROR: could not serialize access due to read/write dependencies among transact
      permanent database writes within Serializable transactions on the
      master will ensure that all standbys will eventually reach a consistent
      state, a Repeatable Read transaction run on the standby can sometimes
!     see a transient state which in inconsistent with any serial execution
      of serializable transactions on the master.
--- 670,676 ----
      permanent database writes within Serializable transactions on the
      master will ensure that all standbys will eventually reach a consistent
      state, a Repeatable Read transaction run on the standby can sometimes
!     see a transient state which is inconsistent with any serial execution
      of serializable transactions on the master.

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI patch version 14
On Tue, Feb 08, 2011 at 10:14:44AM -0600, Kevin Grittner wrote: > I do have some concern that if this defaults to too low a number, > those who try SSI without bumping it and restarting the postmaster > will not like the performance under load very much. SSI performance > would not be affected by a low setting under light load when there > isn't a long-running READ WRITE transaction. If we're worried about this, we could add a log message the first time SummarizeOldestCommittedXact is called, to suggest increasing the GUC for number of SerializableXacts. This also has the potential benefit of alerting the user that there's a long-running transaction, in case that's unexpected (say, if it were caused by a wedged client). I don't have any particular opinion on what the default value of the GUC should be. Dan -- Dan R. K. Ports MIT CSAIL http://drkp.net/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] arrays as pl/perl input arguments [PATCH]
On Tue, Feb 8, 2011 at 10:33, Alex Hunsaker wrote: > On Tue, Feb 8, 2011 at 08:18, Alexey Klyukin wrote: >> Thanks, looks great to me. It passes all the tests on my OS X system. I >> wonder >> what's the purpose of the amagic_call in get_perl_array_ref, instead of >> calling newRV_noinc on the target SV * ? > > Well, you can't AV *av = (AV *)SvRV(sv); And the SV * amagic_call > returns is already a reference, so the newRV_noinc() would be > redundant, no? It occurs to me instead of doing the amagic_call we > could just call the to_array method directly using perl_call_pv(). > That would look more normal and less magic -- that's probably a good > thing? Err, even simpler would be to just access the 'array' member directly out of the hash object. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
On Feb 8, 2011, at 10:24 AM, Tom Lane wrote: >> Ah, right, of course. I don't suppose CREATE OR REPLACE would work, either, >> in some cases at least? > > But figuring out just what commands to issue is pretty nearly AI-complete. > The whole point of ALTER EXTENSION UPGRADE is to have a human do that > and then give you a script to run. Damn. Okay, was just trying to think outside the box a bit here. Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
"David E. Wheeler" writes: > On Feb 8, 2011, at 10:03 AM, Robert Haas wrote: >> No, this is not doable, or at least not in a way that provides any >> benefit over just dropping and reinstalling. The problem is that it >> is going to fall down all over the place if other objects are >> depending on objects provided by the extension. Like: >> >> CREATE VIEW v AS SELECT extensionfunc(1); > Ah, right, of course. I don't suppose CREATE OR REPLACE would work, either, > in some cases at least? But figuring out just what commands to issue is pretty nearly AI-complete. The whole point of ALTER EXTENSION UPGRADE is to have a human do that and then give you a script to run. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
On Feb 8, 2011, at 10:03 AM, Robert Haas wrote: > No, this is not doable, or at least not in a way that provides any > benefit over just dropping and reinstalling. The problem is that it > is going to fall down all over the place if other objects are > depending on objects provided by the extension. Like: > > CREATE VIEW v AS SELECT extensionfunc(1); Ah, right, of course. I don't suppose CREATE OR REPLACE would work, either, in some cases at least? Best, David -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
Robert Haas writes: > I guess I'm sort of coming to the conclusion that ALTER EXTENSION .. > UPGRADE is pretty much a must-have for a useful feature regardless of > this issue. I had previously thought that we might be able to limp > along with half a feature for one release - if you're not actually > depending on anything in the extension, you could always drop it and > the install the new version. But the more I think about it, the less > realistic that sounds; dependencies on the extension are going to be > very common, and while it may be practical to drop and recreate the > dependent objects in some cases, it's certainly going to annoy a lot > of people. > So I think the choices, realistically, are boot both halves of this to > 9.2, or do it all now. I don't really have an opinion on which way to > go with it. We still have more than 40 patches to deal with for this > CommitFest, and certainly booting this would free up a bunch of your > time to go work on other things. On the other hand, if you are fired > up about getting this done, maybe it makes sense to get it knocked out > while it's fresh in your mind instead of letting it linger for another > six months. It's clearly a good feature, and I know that Dimitri has > put a lot of work into getting it this far. Personally I find the extension stuff to be a bigger deal than anything else remaining in the commitfest. Also, I've fixed a number of pre-existing bugs in passing, and I'd have to go extract those changes out of the current extensions patch if we abandon it now. So I'd like to keep pushing ahead on this. After further reflection I think that the pg_upgrade situation can be handled like this: 1. Provide a way for CREATE EXTENSION to not run the script --- either add a grammar option for that, or just have a back-door flag that pg_upgrade can set via one of its special kluge functions. 
(Hm, actually we'd want it to install the old version number and comment too, so maybe just have a kluge function to create a pg_extension entry that agrees with the old entry.) 2. Invent a command "ALTER EXTENSION extname ADD object-description" that simply adds a pg_depend entry marking an existing object as belonging to the extension. I think this was going to be part of the plan anyway for ALTER EXTENSION UPGRADE, specifically we need that for the bootstrap case of collecting some loose pre-existing objects into an extension. 3. In --binary-upgrade mode, pg_dump doesn't suppress the individual member objects, but instead emits ALTER EXTENSION ADD after creating each member object. So that gets us to the point of being able to run pg_upgrade without changing the contents of the extension. After that we can look at ALTER EXTENSION UPGRADE. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
On Tue, Feb 8, 2011 at 12:57 PM, David E. Wheeler wrote: > On Feb 8, 2011, at 9:36 AM, Robert Haas wrote: > >> I guess I'm sort of coming to the conclusion that ALTER EXTENSION .. >> UPGRADE is pretty much a must-have for a useful feature regardless of >> this issue. I had previously thought that we might be able to limp >> along with half a feature for one release - if you're not actually >> depending on anything in the extension, you could always drop it and >> the install the new version. But the more I think about it, the less >> realistic that sounds; dependencies on the extension are going to be >> very common, and while it may be practical to drop and recreate the >> dependent objects in some cases, it's certainly going to annoy a lot >> of people. > > I was just thinking about the upgrade issue, and was wondering if this was > do-able: > > * No upgrade scripts. Nothing to concatenate, include, or maintain. > * No version tracking > > Yeah, what I'm saying is, throw out the whole thing. Instead, what would > happen is > > ALTER EXTENSION foo UPGRADE; > > Would do some trickery behind the scenes to just delete all those objects > that are part of the extension (just like DROP EXTENSION should do), and then > run the installation SQL script. So the objects would be exactly as specified > in the installation script, with no need for the extension maintainer to > worry about how to run upgrades, and no need for PostgreSQL to track version > numbers. > > Is this do-able? I recognize that there could be some issues with settings > tables and what-not, but surely that's no different than for dump and reload. > The question in my mind is whether it would even be possible for the > extension code to unload every bit of an extension and load the new stuff, > inside a transaction. No, this is not doable, or at least not in a way that provides any benefit over just dropping and reinstalling. 
The problem is that it is going to fall down all over the place if other objects are depending on objects provided by the extension. Like: CREATE VIEW v AS SELECT extensionfunc(1); -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] postponing some large patches to 9.2
On Mon, Feb 07, 2011 at 10:37:06PM -0500, Robert Haas wrote: > On Mon, Feb 7, 2011 at 3:57 PM, Chris Browne wrote: > > It sure looks to me like there are going to be a bunch of items that, > > based on the recognized policies, need to get deferred to 9.2, and the > > prospects for Sync Rep getting into 9.1 don't look notably good to me. > > > > It's definitely readily arguable that fairness requires that: > > > > - Items not committable by 2011-02-15 be deferred to the 2011-Next fest > > > > There are around 25 items right now that are sitting with [Waiting > > for Author] and [Returned with Feedback] statuses. They largely seem > > like pretty fair game for "next fest." > > > > - Large items that weren't included in the 2010-11 fest be considered > > problematic to try to integrate into 9.1 > > > > There sure seem to be some large items in the 2011-01 fest, which I > > thought wasn't supposed to be the case. > > This discussion reveals that it's time to start making some > decisions about what can be accomplished for 9.1 and what must be > postponed to 9.2. Given how things worked, i.e. that people were not clear that 9.1 development had actually started, etc., I am again proposing that we have one more CF starting March 15 to get this all cleaned up. Yes, I know that wasn't the plan, but I also know that we're really, really close on a whole bunch of things, some of which have been in the offing for years at this point, and we risk giving people the impression, if they don't already have it, that if they're not in the "inner circle," their patches get lower priority no matter what their merits. > The big ones I think we should postpone are: > > - Range Types. This is a large patch which was submitted for the > first time to the last CommitFest of the cycle, and the first version > that had no open TODO items was posted yesterday, three-quarters of > the way through that last CommitFest. Some good review has been done.
> While more is probably needed, I think we should feel good about > what's been accomplished and mark this one Returned with Feedback. This one's a product differentiator for PostgreSQL. We can own this space for at least a year before it gets on any other RDBMS's roadmap. That the work wasn't submitted until it was in pretty good shape is a mark in Jeff's favor, not a reason to punt for another year. > - ALTER EXTENSION UPGRADE. This is another large patch which was > submitted for the first time to the last CommitFest of the cycle. > The prerequisite patch to provide basic extension support has not > been committed yet, although it sounds like that will happen soon. > However, since that patch is undergoing a fair amount of surgery, > it's reasonably certain that this will require significant rebasing. > I think it would also be an overstatement to say that we have > consensus on the design. My feeling is that, unless Tom thinks that > failing to get this committed now is going to leave us with a mess > later, we should mark this one Returned with Feedback and revisit it > for 9.2. If we're going to leave this out, we should probably pull EXTENSIONs out entirely. Is that really where we want to go? > - FOR KEY LOCK tables. This patch, unfortunately, has not gotten a > lot of review. But it's a major, potentially destabilizing change > that was, like the last two, submitted for the first time to the last > CommitFest of the cycle. Even if we assume for the sake of argument > that the code is unusually good for a feature of this type, I don't > think it's the right time to commit something like this. I would > argue for putting this off until 9.2, though preferably with a bit > more review than it's gotten so far. > > The other remaining "big" patches are: > > - extension support for pg_upgrade. Tom is planning to have this > committed within a day or so, per latest news. See above. > - synchronous replication. 
> Based on some looking at this today, I am > somewhat doubtful about the possibility of me or anyone else beating > this completely into shape in time for 9.1, unless we choose to extend > the deadline by several weeks. +1 for extending that deadline. This is another product differentiator, and we can have it as that for about a year if we make sure it gets into 9.1. > Simon said that he would have time to > finish this in the next two weeks, but, as noted, the CommitFest is > scheduled to be over in ONE week, and it looks to me like this is > still pretty rough. However, there's a lot of good stuff in here, and > I think it might be practical to get some portion of it committed even > if we can't agree on all of it. I recommend we give this one a little > more time to shake out before giving up on it. > > - SQL/MED. This patch unfortunately kind of stalled for a long time. > However, I believe that Heikki is now working actively on the core > patch, so I'm hopeful for some activity here soon. > > - Writeable CTEs. Tom has promised
Re: [HACKERS] Re: patch: fix performance problems with repated decomprimation of varlena values in plpgsql
On Tue, Feb 8, 2011 at 11:38 AM, Noah Misch wrote: > It's not as if this patch raised complex questions that folks need more time > to > digest. For a patch this small and simple, we minimally owe Pavel a direct > answer about its rejection. Well, I don't see how we can give a totally straightforward answer at this point. There are several proposals on the table, and they have different pros and cons. Nobody is completely happy with any of them, AFAICT. I think as far as the original patch goes, it's rejected. Is there a variant of that approach that gives the same benefit with better style? I don't know. I might be able to figure something out if I spent an afternoon on it, but why is that my job? There is sometimes a perception among non-committers that committers are hiding the ball, as if the criteria for patch acceptance were purely arbitrary and we make people guess what we want and then complain when we don't get it. I've even had that perception myself a time or two, but I try hard not to do that and I think (hope) that other committers do as well. I've had my own share of ideas that I thought were good and then had to abandon them either because they had some deficiency which someone pointed out to me and I had to give up, or else because I couldn't get consensus that the new behavior was better than the old, even though it emphatically seemed so to me. I've also had plenty of ideas that got shot down multiple times before finally being accepted. I can't, and don't, accept that there isn't some way to improve the repeated detoasting situation, but I do not know what the best solution is technically. I don't even have an opinion, without a lot more work. And I certainly don't have the ability to know what Tom or someone else will think about the code that that solution requires. The only thing I think IS clear is that despite three weeks of discussion, we have no consensus on any of the proposed patches, nor is there any clear path to reaching a consensus.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Extensions versus pg_upgrade
On Feb 8, 2011, at 9:36 AM, Robert Haas wrote:
> I guess I'm sort of coming to the conclusion that ALTER EXTENSION ..
> UPGRADE is pretty much a must-have for a useful feature regardless of
> this issue. I had previously thought that we might be able to limp
> along with half a feature for one release - if you're not actually
> depending on anything in the extension, you could always drop it and
> then install the new version. But the more I think about it, the less
> realistic that sounds; dependencies on the extension are going to be
> very common, and while it may be practical to drop and recreate the
> dependent objects in some cases, it's certainly going to annoy a lot
> of people.

I was just thinking about the upgrade issue, and was wondering if this was do-able:

* No upgrade scripts. Nothing to concatenate, include, or maintain.
* No version tracking.

Yeah, what I'm saying is, throw out the whole thing. Instead, what would happen is that

    ALTER EXTENSION foo UPGRADE;

would do some trickery behind the scenes to just delete all those objects that are part of the extension (just like DROP EXTENSION should do), and then run the installation SQL script. So the objects would be exactly as specified in the installation script, with no need for the extension maintainer to worry about how to run upgrades, and no need for PostgreSQL to track version numbers.

Is this do-able? I recognize that there could be some issues with settings tables and what-not, but surely that's no different than for dump and reload. The question in my mind is whether it would even be possible for the extension code to unload every bit of an extension and load the new stuff, inside a transaction.

Thoughts?

Best,

David
Re: [HACKERS] Extensions versus pg_upgrade
On Tue, Feb 8, 2011 at 11:53 AM, Tom Lane wrote:
> It just occurred to me that the extensions patch thoroughly breaks
> pg_upgrade, because pg_upgrade imagines that it can control the specific
> OIDs assigned to certain SQL objects such as data types. That is of
> course not gonna happen for objects within an extension. In fact, since
> part of the point of an extension is that the set of objects it packages
> might change across versions, it wouldn't be sensible to try to force
> OID assignments even if there were a practical way to do it.
>
> Offhand the only escape route that I can see is for a binary upgrade to
> not try to install a newer version of an extension, but just duplicate
> all the contained objects exactly. Which is probably less than trivial
> to do --- we could get part of the way there by having pg_dump not
> suppress the member objects in --binary-upgrade mode, but then we need
> some awful kluge to re-establish their pg_depend links to the extension.
> In any case that would ratchet the priority of ALTER EXTENSION UPGRADE
> back up to a must-have-for-9.1, since pg_upgrade would then leave you
> with a non-upgraded extension.

If we're going to commit any part of this for 9.1, it doesn't seem like there's much choice but to insert the above-mentioned awful kludge. It's certainly not going to work if we fail to preserve the OIDs in cases where pg_upgrade otherwise thinks it's necessary.

I guess I'm sort of coming to the conclusion that ALTER EXTENSION .. UPGRADE is pretty much a must-have for a useful feature regardless of this issue. I had previously thought that we might be able to limp along with half a feature for one release - if you're not actually depending on anything in the extension, you could always drop it and then install the new version. But the more I think about it, the less realistic that sounds; dependencies on the extension are going to be very common, and while it may be practical to drop and recreate the dependent objects in some cases, it's certainly going to annoy a lot of people.

So I think the choices, realistically, are to boot both halves of this to 9.2, or do it all now. I don't really have an opinion on which way to go with it. We still have more than 40 patches to deal with for this CommitFest, and certainly booting this would free up a bunch of your time to go work on other things. On the other hand, if you are fired up about getting this done, maybe it makes sense to get it knocked out while it's fresh in your mind instead of letting it linger for another six months. It's clearly a good feature, and I know that Dimitri has put a lot of work into getting it this far.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] arrays as pl/perl input arguments [PATCH]
On Tue, Feb 8, 2011 at 08:18, Alexey Klyukin wrote:
>
> On Feb 6, 2011, at 9:43 AM, Alex Hunsaker wrote:
>> So here is a v6
>>
>> Comments?
>
> Thanks, looks great to me. It passes all the tests on my OS X system.
> I wonder what's the purpose of the amagic_call in get_perl_array_ref,
> instead of calling newRV_noinc on the target SV * ?

Well, you can't just do:

    AV *av = (AV *)SvRV(sv);

And the SV * that amagic_call returns is already a reference, so the newRV_noinc() would be redundant, no?

It occurs to me that instead of doing the amagic_call we could just call the to_array method directly using perl_call_pv(). That would look more normal and less magic -- that's probably a good thing?

> Also, in array_to_datum (line 1050), if get_perl_array_ref returns NULL
> for the first element of the corresponding dimension, but non-NULL for
> the second one - it would use an uninitialized dims[cur_depth] value in
> comparison (which, although, would probably lead to the desired
> comparison result).

Good catch, will fix. All of those checks should be outside of the while loop. While I'm at it I'll also rebase against HEAD (I'm sure there will be some conflicts with that other plperl patch that just went in ;)).
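[Editor's note] For readers less familiar with the Perl guts being discussed, the design question above -- dereference a plain array reference directly, or invoke an overloaded conversion to obtain one -- can be sketched as a loose C analogy. This is illustrative only, not the real Perl/XS API; `Value`, `get_array_ref`, and `to_array` are invented names standing in for SVs, `get_perl_array_ref`, and the overload machinery:

```c
#include <stddef.h>

/* A value may be a plain array reference, or it may carry an overloaded
 * conversion ("magic") that must be invoked to obtain one. */
typedef struct Value Value;
struct Value {
    int    *array;               /* non-NULL if this is a direct array ref */
    size_t  len;
    Value *(*to_array)(Value *); /* overloaded conversion; may be NULL */
};

/* Loose analogue of get_perl_array_ref: return the underlying array
 * value, invoking the overloaded conversion when a direct deref
 * (the "SvRV" path) won't do. */
static Value *get_array_ref(Value *v) {
    if (v == NULL)
        return NULL;
    if (v->array != NULL)
        return v;                         /* plain reference: deref directly */
    return v->to_array ? v->to_array(v)   /* the "amagic_call" path */
                       : NULL;            /* not array-like at all */
}

/* Demo: an object whose conversion hands back a plain array value. */
static int   backing[] = {1, 2, 3};
static Value plain     = { backing, 3, NULL };
static Value *convert(Value *self) { (void)self; return &plain; }
static Value magic     = { NULL, 0, convert };
```

The sketch also shows why the `newRV_noinc()` would be redundant in the magic case: the conversion already hands back the reference-like value, so there is nothing further to wrap.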
Re: [HACKERS] Extensions versus pg_upgrade
Dimitri Fontaine writes:
> Tom Lane writes:
>> Now what?

> What would be the problem with pg_upgrade acting the same as a
> dump&reload cycle as far as extensions are concerned?

Um, how about "it doesn't work"?

The reason that data type OIDs have to be preserved, for example, is that they might be out on disk. Suppose that you have the cube extension installed and there are some user tables containing columns of type cube[]. Those arrays are going to have type cube's OID embedded in them. If cube has a different OID after pg_upgrade then the arrays are broken.

Even letting an extension script run and create data types that weren't there at all before is problematic, since those types could easily claim OIDs that need to be reserved for pre-existing types that appear later in the dump script. Similar issues arise for the other cases where pg_upgrade is forcing OID assignments; it's not doing that just for fun.

regards, tom lane
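[Editor's note] Tom's point about OIDs being "out on disk" can be illustrated with a toy sketch. The struct and the OID values below are invented for illustration; the real on-disk array format (PostgreSQL's ArrayType varlena) is more involved, but it really does store the element type's OID in the data itself:

```c
#include <stdint.h>

typedef uint32_t Oid;

/* Simplified stand-in for the on-disk layout: a real PostgreSQL array
 * value stores its element type's OID inside the varlena data. */
typedef struct {
    int32_t ndim;
    Oid     elemtype;   /* element type OID, baked into the stored bytes */
} FakeArrayHeader;

/* Hypothetical catalog lookup: resolves a stored OID against the OID the
 * "cube" type currently has in pg_type. */
static const char *lookup_type(Oid stored, Oid current_cube_oid) {
    return (stored == current_cube_oid) ? "cube" : "unknown";
}

/* If pg_upgrade preserves the OID, a header written before the upgrade
 * still resolves; if an extension script recreated the type with a fresh
 * OID, the bytes already on disk dangle. */
static const char *resolve_array_elemtype(const FakeArrayHeader *hdr,
                                          Oid current_cube_oid) {
    return lookup_type(hdr->elemtype, current_cube_oid);
}
```

With a header written when cube had OID 16385, resolving against 16385 works, while resolving against a freshly assigned 16402 does not -- which is exactly why a dump&reload-style upgrade of the extension is not enough.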
Re: [HACKERS] Allow pg_archivecleanup to ignore extensions
On 08-02-2011 04:57, Greg Smith wrote:
> We recently got some on-list griping that pg_standby doesn't handle
> archive files that are compressed, either. Given how the job I'm working
> on this week is going, I'll probably have to add that feature next.
> That's actually an easier source code hack than this one, because of how
> the pg_standby code modularizes the concept of a restore command.

This was already proposed a few years ago [1]. I have used a modified pg_standby with this feature for a year or so.

[1] http://archives.postgresql.org/message-id/e4ccc24e0810222010p12bae2f4xa3a11cb2bc51bd89%40mail.gmail.com

--
Euler Taveira de Oliveira
http://www.timbira.com/
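[Editor's note] The modularized restore command Greg mentions could, very roughly, dispatch on the archive file's extension. This sketch is purely hypothetical -- the command strings and supported extensions are assumptions, not pg_standby's actual code:

```c
#include <string.h>

/* Hypothetical sketch of how a compression-aware pg_standby might pick
 * a restore action from the archive file name. The returned template
 * takes the source path and the target segment path, printf-style. */
static const char *restore_command_for(const char *path) {
    const char *dot = strrchr(path, '.');
    if (dot && strcmp(dot, ".gz") == 0)
        return "gunzip -c %s > %s";   /* decompress into the target segment */
    if (dot && strcmp(dot, ".bz2") == 0)
        return "bunzip2 -c %s > %s";
    return "cp %s %s";                /* plain WAL segment: straight copy */
}
```

A caller would format the chosen template with the archive path and the destination path before handing it to the shell; the point is only that the dispatch is a small, contained change once the restore command is modularized.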