Re: [HACKERS] clog_redo causing very long recovery time
On 05/05/2011 09:00 PM, Tom Lane wrote:
> Joe Conway m...@joeconway.com writes:
>> Right -- I think another similar problem exists in GetNewMultiXactId where ExtendMultiXactOffset could succeed and write an XLOG entry and then ExtendMultiXactMember could fail before advancing nextMXact. The problem in this case is that they both write XLOG entries, so a simple reversal doesn't help.
> Hmm. Maybe we need a real fix then. I was just sitting here speculating about whether we'd ever decide we need to WAL-log pg_subtrans --- because if we did, my solution would fail. I still think that the right fix is to avoid emitting redundant XLOG records in the first place, rather than hacking recovery to not process them. Possibly we could modify slru.c so that it could be determined whether zeroing of the current page had already happened. In a quick look, it looks like noting whether latest_page_number had already been advanced to that page might do the trick.

Thanks -- I'll test that out.

Joe

--
Joseph E Conway
President/CEO, credativ LLC
www.credativ.us
616 Burnham Street, El Cajon, CA 92019
Office: +1 619 270 8787
Mobile: +1 619 843 8340
Re: [HACKERS] GSoC 2011: Fast GiST index build
> As I understood it's because we can't move the root to another page.

Actually no. The path to a node could change completely. For example, for a page on the second level the path is 0-234 (where 0 is the root page and 234 is a page on the second level). After a root split, the path will be 0-new_page-234. If the algorithm were able to change the root, the new path could look like new_root-new_page-234, because the old root could be split into old_root_page and new_page.

--
Teodor Sigaev
E-mail: teo...@sigaev.ru
WWW: http://www.sigaev.ru/

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Why not install pgstattuple by default?
Hackers,

I've run into a couple of occasions lately where I really wanted pgstattuple on a production server in order to check table/index bloat. However, in the production environment at a large site, installing a contrib module can involve a process which takes days or weeks.

Is there some reason why the pgstattuple functions aren't just available as core functions? Are they unsafe somehow?

--
Josh Berkus
CEO, PostgreSQL Experts Inc.
josh.ber...@pgexperts.com www.pgexperts.com
1-888-743-9778 x.508
San Francisco
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 00:34, Josh Berkus josh.ber...@pgexperts.com wrote:
> Hackers, I've run into a couple of occasions lately where I really wanted pgstattuple on a production server in order to check table/index bloat. However, in the production environment at a large site installing a contrib module can involve a process which takes days or weeks.

That can be said for a lot of things in contrib. pg_standby in 8.4, for example. Or adminpack. Or dblink. Or hstore. There's a mix of example stuff and actually pretty darn useful in production stuff. I'm sure you can find a couple of hundred emails in the archives on this very topic.

From 9.1, it'll be a simple CREATE EXTENSION command - so much of the problem goes away. Well. It doesn't go away, but it gets a lot more neatly swept under the rug.

--
Magnus Hagander
Me: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] GSoC 2011: Fast GiST index build
2011/5/6 Teodor Sigaev teo...@sigaev.ru:
>> As I understood it's because we can't move the root to another page.
> Actually no. The path to a node could change completely. For example, for a page on the second level the path is 0-234 (where 0 is the root page and 234 is a page on the second level). After a root split, the path will be 0-new_page-234. If the algorithm were able to change the root, the new path could look like new_root-new_page-234, because the old root could be split into old_root_page and new_page.

Ok. Thank you for the explanation.

With best regards,
Alexander Korotkov.
Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock
On 30.04.2011 01:04, Kevin Grittner wrote:
> TRUNCATE TABLE and DROP TABLE should generate a rw-conflict *in* to the enclosing transaction (if it is serializable) from all transactions holding predicate locks on the table or its indexes. Note that this could cause a transaction which is running one of these statements to roll back with a serialization error.

This seems correct to me, since these operations essentially delete all rows. If you don't want the potential rollback, these operations should be run at another isolation level.

> The difference between these two statements is that I think that TRUNCATE TABLE should also move the existing predicate locks to relation locks on the table, while DROP TABLE (for obvious reasons) should just delete the predicate locks.

Note that TRUNCATE has never been MVCC-safe anyway. Perhaps it's best to just treat it like DROP TABLE. Or can we use the predicate lock mechanism to abort serializable transactions that incorrectly see the table as empty?

> DROP DATABASE should quietly clean up any predicate locks from committed transactions which haven't yet hit their cleanup point because of overlapping transactions in other databases.

This is just an optimization, right? The predicate locks will eventually go away anyway.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] SIREAD lock versus ACCESS EXCLUSIVE lock
Heikki Linnakangas wrote:
> On 30.04.2011 01:04, Kevin Grittner wrote:
>> TRUNCATE TABLE and DROP TABLE should generate a rw-conflict *in* to the enclosing transaction (if it is serializable) from all transactions holding predicate locks on the table or its indexes. Note that this could cause a transaction which is running one of these statements to roll back with a serialization error.
> This seems correct to me, since these operations essentially delete all rows. If you don't want the potential rollback, these operations should be run at another isolation level.
>> The difference between these two statements is that I think that TRUNCATE TABLE should also move the existing predicate locks to relation locks on the table, while DROP TABLE (for obvious reasons) should just delete the predicate locks.
> Note that TRUNCATE has never been MVCC-safe anyway.

Yeah, Dan pointed out that another REPEATABLE READ or SERIALIZABLE transaction *will* see the work of a committed TRUNCATE TABLE statement, so it is not semantically identical to DELETE FROM with no WHERE clause, which was something I somehow had gotten into my head.

> Perhaps it's best to just treat it like DROP TABLE.

We had been leaning that way based on the above observation.

> Or can we use the predicate lock mechanism to abort serializable transactions that incorrectly see the table as empty?

Predicate locks only interact with writes; it'd be a rather nasty wart to try to bend them to the above use. I think we just have to stick with the dictum which has controlled so far -- Serializable Snapshot Isolation can only serialize those things which follow the semantics of Snapshot Isolation. TRUNCATE TABLE doesn't.

>> DROP DATABASE should quietly clean up any predicate locks from committed transactions which haven't yet hit their cleanup point because of overlapping transactions in other databases.
> This is just an optimization, right? The predicate locks will eventually go away anyway.

Yes, correct. The only way they could create a problem, besides just taking up predicate lock slots, is if a new database was created with an identical OID before overlapping serializable transactions in other databases completed; that seems rather far-fetched. Perhaps DROP DATABASE is so infrequent that it's not worth the extra code to do the early cleanup? The code to do that part might not carry its own weight on a cost/benefit basis.

-Kevin
Re: [HACKERS] Process wakeups when idle and power consumption
On 5 May 2011 21:05, Tom Lane t...@sss.pgh.pa.us wrote:
> The major problem I'm aware of for getting rid of periodic wakeups is the need for child processes to notice when the postmaster has died unexpectedly. Your patch appears to degrade the archiver's response time for that really significantly, like from O(1 sec) to O(1 min), which I don't think is acceptable. We've occasionally kicked around ideas for mechanisms that would solve this problem, but nothing's gotten done. It doesn't seem to be an easy problem to solve portably...

Could you please expand upon this? Why is it of any consequence if the archiver notices that the postmaster is dead after 60 seconds rather than after 1? So control in the archiver is going to stay in its event loop for longer than it would have before, until pgarch_MainLoop() finally returns. The DBA might be required to kill the archiver where before they wouldn't have been (they wouldn't have had time to), but they are also required to kill other backends anyway before deleting postmaster.pid, or there will be dire consequences. Nothing important happens after waiting on the latch but before checking PostmasterIsAlive(), and nothing important happens after the postmaster is found to be dead. ISTM that it wouldn't be particularly bad if the archiver was SIGKILL'd while waiting on a latch.

The only salient thread I found concerning the problem of making children know when the postmaster died is this one: http://archives.postgresql.org/pgsql-hackers/2010-12/msg00401.php

Fujii Masao suggests removing wal_sender_delay in that thread and replacing it with a generic default. That does work well with my suggestion to unify these sorts of timeouts under a single GUC.

--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
Re: [HACKERS] Backpatching of Teach the regular expression functions to do case-insensitive matching
On Friday, May 06, 2011 04:30:01 AM Robert Haas wrote:
> On Thu, May 5, 2011 at 5:21 AM, Andres Freund and...@anarazel.de wrote:
>> In my opinion this is actually a bug in 9.0. As it's an (imo) low-impact fix that's constrained to two files, it seems sensible to backpatch it now that the solution has proven itself in the field? The issue is hard to find and has come up several times in the field. And it has been slightly embarrassing more than once ;)
> Can you share some more details about your experiences?

About the embarrassing or the hard-to-find part?

One of the hard-to-find parts involved a search (constraining word order after a tsearch search) where slightly fewer than usual search results were returned in production. Nobody had noticed during testing that case-insensitive search worked for most things except multibyte chars, as the tested case was something like:

SELECT 'ÖFFENTLICHKEIT' ~* 'Öffentlichkeit'

and the regex condition was only relevant when searching for multiple words.

One of the embarrassing examples was that I suggested moving away from a solution using several ILIKE rules to one case-insensitive regular expression, totally forgetting that I knew this was only fixed in 9.0. It turned out to be faster. And it turned out to be wrong. In production :-(.

Both examples sum up to this: the problem is often not noticed because most of the people who realize that the case could be a problem don't have knowledge of the content, and don't notice the problem until later...

Andres
Re: [HACKERS] Process wakeups when idle and power consumption
Peter Geoghegan pe...@2ndquadrant.com writes:
> On 5 May 2011 21:05, Tom Lane t...@sss.pgh.pa.us wrote:
>> The major problem I'm aware of for getting rid of periodic wakeups is the need for child processes to notice when the postmaster has died unexpectedly.
> Could you please expand upon this? Why is it of any consequence if the archiver notices that the postmaster is dead after 60 seconds rather than after 1?

Because somebody might try to start a new postmaster before that, and it's not really a good idea to have a rogue archiver running in addition to the new one. You might be able to construct an argument about how that was safe, but it would be a fragile one. What's more, it would not apply to any other child process, and we need a solution that scales to all the children or we're going nowhere in terms of saving power. In the case of the children that are connected to shared memory, such as bgwriter, a long delay until child exit means a long delay until a new postmaster can start at all --- which means you're effectively creating a denial of service, with the length directly proportional to how aggressively you're trying to avoid unnecessary wakeups. So that's not a tradeoff I want to be making.

I'd rather have a solution in which children somehow get notified of postmaster death without having to wake up just to poll for it. Then, once we fix the other issues, there are no timeouts needed at all, which is obviously the ideal situation for power consumption as well as response time.

> The only salient thread I found concerning the problem of making children know when the postmaster died is this one: http://archives.postgresql.org/pgsql-hackers/2010-12/msg00401.php

You didn't look terribly hard then. Here are two recent threads:

http://archives.postgresql.org/pgsql-hackers/2011-01/msg01011.php
http://archives.postgresql.org/pgsql-hackers/2011-02/msg02142.php

The pipe solution mentioned in the first one would work on all Unixen, and we could possibly optimize things a bit on Linux using the second method. (There was also a bit of speculation about relying on SEM_UNDO, but I don't think we followed that idea far.) I don't know however what we'd need on Windows.

regards, tom lane
[HACKERS] Buildfarm client 4.5 released
I have just released version 4.5 of the buildfarm client. Notable features and improvements include:

* Support for building docs
* Support for running optional steps
* Make the SCM module totally generic so it can be used for things other than PostgreSQL core
* Run isolation checks
* Avoid giving Storable a regex object to freeze, which modern versions such as in perl 5.12 choke on
* Factor out options processing into a separate module
* New run_branches.pl program that runs all designated branches
* New README and license files
* Don't quote configure options that contain quotes, allowing options like --with-libs to point to multiple locations
* Clean up data and temp install directories as we go; this makes us much less profligate with disk space

The release can be downloaded from http://pgfoundry.org/frs/?group_id=140&release_id=1817

enjoy.

cheers

andrew
Re: [HACKERS] Process wakeups when idle and power consumption
On Fri, May 6, 2011 at 8:16 AM, Peter Geoghegan pe...@2ndquadrant.com wrote:
> Could you please expand upon this? Why is it of any consequence if the archiver notices that the postmaster is dead after 60 seconds rather than after 1? So control in the archiver is going to stay in its event loop for longer than it would have before, until pgarch_MainLoop() finally returns. The DBA might be required to kill the archiver where before they wouldn't have been (they wouldn't have had time to), but they are also required to kill other backends anyway before deleting postmaster.pid, or there will be dire consequences. Nothing important happens after waiting on the latch but before checking PostmasterIsAlive(), and nothing important happens after the postmaster is found to be dead. ISTM that it wouldn't be particularly bad if the archiver was SIGKILL'd while waiting on a latch.

Well, IMHO, the desirable state of affairs is for all child processes, including regular backends, to exit near-instantaneously once the postmaster dies. Among many other problems, once the postmaster is gone, there's no guard against shared memory corruption. And as long as there is at least one backend kicking around attached to shared memory, you won't be able to restart the postmaster, which is something you typically want to do as quickly as humanly possible.

http://www.postgresql.org/support/submitbug

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Process wakeups when idle and power consumption
On Fri, May 6, 2011 at 10:13 AM, Robert Haas robertmh...@gmail.com wrote:
> On Fri, May 6, 2011 at 8:16 AM, Peter Geoghegan pe...@2ndquadrant.com wrote:
>> [...]
> Well, IMHO, the desirable state of affairs is for all child processes, including regular backends, to exit near-instantaneously once the postmaster dies. Among many other problems, once the postmaster is gone, there's no guard against shared memory corruption. And as long as there is at least one backend kicking around attached to shared memory, you won't be able to restart the postmaster, which is something you typically want to do as quickly as humanly possible.
> http://www.postgresql.org/support/submitbug

The apparently irrelevant link at the bottom of that email is the result of a cut-and-paste into the wrong email window. Sorry.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Re: [HACKERS] Visibility map and hint bits
On Thu, May 5, 2011 at 2:20 PM, Merlin Moncure mmonc...@gmail.com wrote:
> On Thu, May 5, 2011 at 2:00 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote:
>> Merlin Moncure mmonc...@gmail.com wrote:
>>> a small cache that remembers the commit/cancel status of recently seen transactions.
>> How is that different from the head of the clog SLRU?
> Several things:
>
> *) Any SLRU access requires a lock (besides the lock itself, you are spending cycles in the critical path).
>
> *) Cache access happens at a different stage of processing in HeapTupleSatisfiesMVCC: both TransactionIdIsCurrentTransactionId and TransactionIdIsInProgress have to be checked first. Logically, it's an extension of the hint bit check itself, not an expansion of lower levels of caching.
>
> *) In tqual.c you can sneak in some small optimizations, like only caching the bit if it's known good in the WAL (XLogNeedsFlush). That way you don't need to keep checking it over and over for the same transaction.
>
> *) SLRU-level accesses happen too late to give much benefit: I can't stress enough how tight HeapTupleSatisfiesMVCC is. On my workstation VM, each non-inline function call shows up measurably in profiling. I think anything you do here has to be inline, hand rolled, and very tight (you can forget anything around dynahash). Delegating the cache management to the transam or (even worse) SLRU level penalizes some workloads non-trivially.

An updated patch is attached. It's still WIP, but I need a little guidance before going further.

What I did:

*) Added a lot of source-level comments that should explain better what's happening and why.
*) Fixed a significant number of goofs in the earlier patch.
*) Reorganized the interaction with HeapTupleSatisfiesMVCC. In particular, SetHintBits() now returns whether it actually set the bit, because I can use that information.

What's not done:

*) Only commit bits are cached, and caching only happens in HeapTupleSatisfiesMVCC. I'm not sure yet if it's better to store invalid bits in the same cache or in a separate one. I'm not sure if the other satisfies routines should also be engaging the cache. Translated from nerd speak, that means I haven't yet done the research to see when they are fired and whether they are bottlenecks :-).
*) I'd like to reach some sort of consensus with Tom on whether there is any point in going further in this direction. Not so much on the mechanics of how the cache works, but on it being at the tqual.c level, along with the changes to HeapTupleSatisfiesMVCC. I think caching at the transam.c level is a dead end on performance grounds regardless of how you implement the cache.

Some points of note:

*) Is it acceptable to use a static definition of memory like that? If not, should there be a more standard allocation under CacheMemoryContext?
*) Testing for the benefit is simple: just create a bunch of records and seqscan the table (select count(*)). Without the patch the first scan is slower and does a bunch of i/o. With it, it does not.
*) The cache overhead is *almost* not measurable. As best I can tell, we are looking at maybe 1%-ish overhead in synthetic scan-heavy workloads (I think this is a fair price to pay for all the i/o savings). The degenerate case of repeated 'rollups' is really difficult to generate, even synthetically -- if the cache is performing lousily, the regular hint bit action tends to protect it. Performance testing under real workloads is going to give better info here.

merlin

[Attachment: hbache.patch]
Re: [HACKERS] Why not install pgstattuple by default?
On 06-05-2011 05:06, Magnus Hagander wrote:
> On Fri, May 6, 2011 at 00:34, Josh Berkus josh.ber...@pgexperts.com wrote:
>> Hackers, I've run into a couple of occasions lately where I really wanted pgstattuple on a production server in order to check table/index bloat. However, in the production environment at a large site installing a contrib module can involve a process which takes days or weeks.

I have already faced that problem too.

> From 9.1, it'll be a simple CREATE EXTENSION command - so much of the problem goes away. Well. It doesn't go away, but it gets a lot more neatly swept under the rug.

That's only half of the story. The admin still needs to install the postgresql-contrib package, and sometimes it takes too much time to convince clients that some additional supplied modules are useful for them. Now that we have extensions, why not build and package the contrib modules by default? 'make world' is not the answer; there is no option to install all the pieces of software. Let's install pg+contrib and leave only 'CREATE EXTENSION foo' for the admins.

--
Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
PostgreSQL: Consulting, Development, 24x7 Support and Training
[HACKERS] Compiling a PostgreSQL 7.3.2 project with Eclipse
First, I want to apologize if this is a topic for one of the other mailing lists (or completely inappropriate for the PostgreSQL mailing lists), but I looked them over and this one seemed most appropriate.

I am working on a project based on the TelegraphCQ 2.1 Data Stream Management System, which is in turn based on PostgreSQL 7.3.2. I've decided to use Eclipse. I have managed to import the source into Eclipse and to run it, using the instructions given here: http://wiki.postgresql.org/wiki/Working_with_Eclipse. However, when I compile and run the project, in the Eclipse console I get:

POSTGRES backend interactive interface $Revision: 1.115 $ $Date: 2006/02/06 01:19:46 $

I don't know what to do with this backend interface. I would like to get a postmaster running and to connect to a database with psql. However, when I try to start psql, it says that there is no postmaster running.

Can someone please give me a hand,
K.K.
Re: [HACKERS] Debug contrib/cube code
On 06-05-2011 02:14, Nick Raj wrote:
> I am using postgresql-8.4.6. I want to debug the contrib/cube code. Are we able to debug that cube code? Because there is no configure option to enable debug. Is there any way to change the makefile to enable debug?

What do you want to debug? AFAICS you need to change the code to achieve what you want.

--
Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/
PostgreSQL: Consulting, Development, 24x7 Support and Training
Re: [HACKERS] increasing collapse_limits?
On 01/05/11 21:16, Joshua Berkus wrote:
> Speaking of which, what happened to replacing GEQO with Simulated Annealing? Where did that project go?

It stayed on github (https://github.com/wulczer/saio) and stagnated a bit after I got my degree. It's on the top of my list of things to pick up after the summer (or maybe even during the summer).

Cheers,
Jan
Re: [HACKERS] Compiling a PostgreSQL 7.3.2 project with Eclipse
2011/5/6 Krešimir Križanović kresimir.krizano...@fer.hr:
> I don't know what to do with this backend interface. I would like to get a postmaster running and to connect to a database with psql. However, when I try to start psql, it says that there is no postmaster running.

I would not assume that it is handled the same way as with PostgreSQL, since it's a fork with quite distinct implementation and behaviour. Does the TelegraphCQ documentation not include some instructions on how to start up the backend? Answering my own question: it surely does: http://telegraph.cs.berkeley.edu/telegraphcq/v0.2/gettingstarted.html

--
When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?"
Re: [HACKERS] Re: Why is RegisterPredicateLockingXid called while holding XidGenLock?
Kevin Grittner kevin.gritt...@wicourts.gov writes:
> Tom Lane wrote:
>> Yeah, I was thinking that it'd be better to pull it out of GetNewTransactionId and put it in a higher level.
> As long as it is always called when an xid is assigned. Since this function appears to be on the only path to that, it should be fine.

Well, the division of labor between GetNewTransactionId and AssignTransactionId isn't terribly well-defined, but to the extent that there is any bright line it is that the former does what needs to be done while holding XidGenLock. So I'd prefer to see the call out of there entirely. The fact that varsup.c has no other connection to the SSI code is an additional argument that it doesn't belong there.

>> No strong preference about where in AssignTransactionId to put it. Is there any chance that it would be significant whether we do it before or after taking the lock on the XID (XactLockTableInsert)?
> No, but since we need to do it only on a top level assignment, we could save a couple cycles by putting it on an else on line 456.

Didn't particularly care for that, since this action is not the inverse of, nor in any other way related to, pushing the XID into pg_subtrans. After some thought I did this instead:

    if (isSubXact)
        SubTransSetParent(s->transactionId, s->parent->transactionId, false);

    /*
     * If it's a top-level transaction, the predicate locking system needs to
     * be told about it too.
     */
    if (!isSubXact)
        RegisterPredicateLockingXid(s->transactionId);

A reasonably bright compiler will optimize that into the same thing, and if the compiler doesn't catch it, it's an insignificant cost anyway.

regards, tom lane
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
Hi,

I revised the patch against HEAD; it's available at: http://people.freebsd.org/~iwasaki/postgres/buffer-cache-hibernation-postgresql-20110506.patch

Implemented hibernation file validations:

- Comparison with pg_control. At shutdown: the pg_control state should be DB_SHUTDOWNED. At startup: the pg_control state should be DB_SHUTDOWNED and the hibernation files should be newer than pg_control.
- CRC check. At shutdown: compute CRC values for the hibernation files and store them in a file. At startup: the CRC values for the hibernation files should match the ones read from the file created at shutdown.
- File size. At startup: the size of each hibernation file should match the file size calculated from shared_buffers.
- Buffer descriptor validation. At startup: the descriptor flags should not include BM_DIRTY, BM_IO_IN_PROGRESS, BM_IO_ERROR, BM_JUST_DIRTIED or BM_PIN_COUNT_WAITER, and sanity checks for usage_count should be done. (wait_backend_pid is zero-cleared because the process was terminated already.)
- System call error checking. At shutdown and startup: the return values of system calls (e.g. open(), read(), write(), etc.) should be evaluated.

> How do you protect against the cached buffers getting out-of-sync with the actual disk files (especially during recovery scenarios)?

Saving the DB buffer cache is called at shutdown after the bgwriter's final checkpoint finishes, so dirty buffers should not exist, I believe. For recovery scenarios, I need to research it though... Could you describe what needs to be considered? I think hibernation should be allowed only when the system was shut down normally, by checking the pg_control state. And once an abnormal shutdown is detected, the hibernation files should be ignored. The latest patch includes this. (Modifications to xlog.c:ReadControlFile() were required, though.)

> What about crash-induced corruption in the cache file itself (consider the not-unlikely possibility that init will kill the database before it's had time to dump all the buffers during a system shutdown)? Do you have

I think this is an important point. I'll implement a validation function for the hibernation file. The added validations seem enough for me. (Because my understanding of postgres is not enough ;)

If any other considerations are required, please point them out.

Thanks
Re: [HACKERS] Re: Why is RegisterPredicateLockingXid called while holding XidGenLock?
Tom Lane t...@sss.pgh.pa.us wrote: After some thought I did this instead: Thanks! I can see why that's better. -Kevin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 18:22, Euler Taveira de Oliveira eu...@timbira.com wrote: Em 06-05-2011 05:06, Magnus Hagander escreveu: On Fri, May 6, 2011 at 00:34, Josh Berkusjosh.ber...@pgexperts.com wrote: Hackers, I've run into a couple of occasions lately where I really wanted pgstattuple on a production server in order to check table/index bloat. However, in the production environment at a large site installing a contrib module can involve a process which takes days or weeks. I already faced that problem too. From 9.1, it'll be a simple CREATE EXTENSION command - so much of the problem goes away. Well. It doesn't go away, but it gets a lot more neatly swept under the rug. That's half of the history. Admin needs to install postgresql-contrib package. Sometimes it takes too much time to convince clients that some additional supplied modules are useful for them. Now that we have extensions, why not build and package the contrib modules by default? 'make world' is not the answer. There is not an option for install all pieces of software. Let's install pg+contrib and leave only 'CREATE EXTENSION foo' for the admins. That's mostly an issue to be solved by the packagers. Some contrib modules add dependencies, but those that don't could easily be packaged in the main server package. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 1:32 PM, Magnus Hagander mag...@hagander.net wrote: On Fri, May 6, 2011 at 18:22, Euler Taveira de Oliveira eu...@timbira.com wrote: Em 06-05-2011 05:06, Magnus Hagander escreveu: On Fri, May 6, 2011 at 00:34, Josh Berkus josh.ber...@pgexperts.com wrote: Hackers, I've run into a couple of occasions lately where I really wanted pgstattuple on a production server in order to check table/index bloat. However, in the production environment at a large site installing a contrib module can involve a process which takes days or weeks. I already faced that problem too. From 9.1, it'll be a simple CREATE EXTENSION command - so much of the problem goes away. Well. It doesn't go away, but it gets a lot more neatly swept under the rug. That's half of the history. Admin needs to install postgresql-contrib package. Sometimes it takes too much time to convince clients that some additional supplied modules are useful for them. Now that we have extensions, why not build and package the contrib modules by default? 'make world' is not the answer. There is not an option for install all pieces of software. Let's install pg+contrib and leave only 'CREATE EXTENSION foo' for the admins. That's mostly an issue to be solved by the packagers. Some contrib modules add dependencies, but those that don't could easily be packaged in the main server package. It seems to me that there's something of a packaging policy question to this. A long time ago, on a pre-buildfarm planet, far, far away, it was pretty uncertain what contrib modules could be hoped to run on what platform. At Afilias, we used to have to be *really* picky, because the subset that ran on Solaris and AIX was not even close to all of them. pgstattuple *was* one that the DBAs always wanted, but what would compile was always hit-and-miss. 
Once we got AIX running a buildfarm node, that led to getting *ALL* of contrib working there, and I'm pretty sure that similar happened with other platforms at around the same time (I'm thinking this was 7.4, but it might have been 8.0) Be that as it may, there has been a sea change, where we have moved from sporadic usability of contrib to it being *continually* tested on *all* buildfarm platforms, which certainly adds to the confidence level. But people are evidently still setting packaging policies based on how things were back in 7.3, even though that perhaps isn't necessary anymore. Certainly it's not a huge amount of code; less than 2MB these days. - % wc `dpkg -L postgresql-contrib-9.0` | tail -1 15952 67555 1770987 total I'm getting paper cuts quite a bit these days over the differences between what different packaging systems decide to install. The one *I* get notably bit on, of late, is that I have written code that expects to have pg_config to do some degree of self-discovery, only to find production folk complaining that they only have psql available in their environment. I don't expect the extension system to help with any of this, since if production folk try to install minimal sets of packages, they're liable to consciously exclude extension support. The improvement would come from drawing contrib a bit closer to core, and encouraging packagers (dpkg, rpm, ports) to fold contrib into base rather than separating it. I'm sure that would get some pushback, though. -- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On 05/06/2011 01:55 PM, Christopher Browne wrote: Once we got AIX running a buildfarm node, that led to getting *ALL* of contrib working there, and I'm pretty sure that similar happened with other platforms at around the same time (I'm thinking this was 7.4, but it might have been 8.0) FYI, the buildfarm started in late 2004, near the end of the 8.0 development cycle. It quickly led to a number of contrib fixes. Time flies when you're having fun ... cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
Em 06-05-2011 14:55, Christopher Browne escreveu: The improvement would come from drawing contrib a bit closer to core, and encouraging packagers (dpkg, rpm, ports) to fold contrib into base rather than separating it. I'm sure that would get some pushback, though. I'm in favor of finding out which extensions are popular and moving them into base; the others could be moved to PGXN. -- Euler Taveira de Oliveira - Timbira http://www.timbira.com.br/ PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [CORE] list of members for CA nonprofit
On a not unrelated point, I still haven't seen any mention of this deal in pgsql-hackers, and I really think that presenting it to them as a fait accompli would be a Seriously Bad Idea. I wanted a list of initial board members before I presented it. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [CORE] list of members for CA nonprofit
On 5/6/11 11:29 AM, Josh Berkus wrote: On a not unrelated point, I still haven't seen any mention of this deal in pgsql-hackers, and I really think that presenting it to them as a fait accompli would be a Seriously Bad Idea. I wanted a list of initial board members before I presented it. Well, looks like thanks to a mail client spaz, I'm presenting it today. Email coming in a few minutes. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Compiling a PostgreSQL 7.3.2 project with Eclipse
Krešimir Križanović wrote: However, when I compile and run the project, in the Eclipse console I get: POSTGRES backend interactive interface $Revision: 1.115 $ $Date: 2006/02/06 01:19:46 $ I don’t know what to do with this backend interface. I would like to get a postmaster running and to connect to a database with psql. However, when I try to start psql, it says that there is no postmaster running. An example of a session with that interface is at http://archives.postgresql.org/pgsql-hackers/2000-01/msg01471.php ; you may find it useful at some point. Your problem is caused by a change to PostgreSQL's naming convention made after the 7.3 fork you're using. In earlier versions, postgres meant start the server in single user mode: http://www.postgresql.org/docs/7.3/static/app-postgres.html While postmaster started it as a proper server: http://www.postgresql.org/docs/7.3/static/app-postmaster.html In modern versions, they are the same thing. The Eclipse example uses postgres, which starts the regular server now, but in 7.3 only started single user mode. Change where you run the program to use postmaster instead and it should work more like what you're expecting. Doing something useful with the TelegraphCQ code is probably going to take you a while. -- Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
Christopher Browne wrote: I'm getting paper cuts quite a bit these days over the differences between what different packaging systems decide to install. The one *I* get notably bit on, of late, is that I have written code that expects to have pg_config to do some degree of self-discovery, only to find production folk complaining that they only have psql available in their environment. Given the other improvements in being able to build extensions in 9.1, we really should push packagers to move pg_config from the PostgreSQL development package into the main one starting in that version. I've gotten bit by this plenty of times. -- Greg Smith 2ndQuadrant USg...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] New Canadian nonprofit for trademark, postgresql.org domain, etc.
Hackers, Community members: As some of you already know, several important community assets are held in the personal name of Marc Fournier for historical reasons. These assets include several DNS domains (including postgresql.org), our SSL key, and a Canadian trademark, and possibly a server as well in the future. For years, we have had the issue that if anything happened to Marc, getting control of these assets could be difficult and cause us weeks of wasted time, and perhaps even result in www.postgresql.org being offline for days or weeks. Even to date, we've had issues where problems have happened while Marc was away and we've been unable to resolve them quickly. We have, however, come up with a potential plan to change this. Marc has agreed to transfer the community assets to a new Canadian nonprofit which we set up for the purpose. The PostgreSQL Core Team supports this solution, and as such I've been talking to Canadian attorneys about setting up the NPO (we need an entity in Canada because of the trademark). The Funds Group has approved spending SPI money to pay for legal and operational fees for the corporation. Of course, a Canadian nonprofit could also act as a regional fundraiser/funder for events in Canada if anyone gets motivated to carry this out. For simplicity, the new NPO would initially be run by a small appointed board, initially consisting of Marc Fournier, Dave Page, Chris Browne and myself. We'd have a first board meeting after incorporation and select additional/alternate board members at that time. If someone gets motivated to build up Canadian community activity, the membership of the NPO could be expanded in the future, and new board members could be elected. Otherwise, the nonprofit could run under a stewardship board indefinitely. At this point, I am talking to attorneys about incorporation and bylaws. 
So now is a very good time for anyone in the community to voice questions, objections, ideas, concerns, or alternatives. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [pgsql-advocacy] New Canadian nonprofit for trademark, postgresql.org domain, etc.
Josh, This would imply an improvement in the terms of the U.S. embargo on Cuba. That would be really beneficial to all regional communities. -- Saludos, Gilberto Castillo Edificio Beijing. Miramar Trade Center. Etecsa. Miramar, La Habana, Cuba. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 2:32 PM, Greg Smith g...@2ndquadrant.com wrote: Christopher Browne wrote: I'm getting paper cuts quite a bit these days over the differences between what different packaging systems decide to install. The one *I* get notably bit on, of late, is that I have written code that expects to have pg_config to do some degree of self-discovery, only to find production folk complaining that they only have psql available in their environment. Given the other improvements in being able to build extensions in 9.1, we really should push packagers to move pg_config from the PostgreSQL development package into the main one starting in that version. I've gotten bit by this plenty of times. I'm agreeable to that, in general. If there's a server package and a client package, it likely only fits with the server package. On a host where only the client is installed, they won't be able to install extensions, so it's pretty futile to have it there. -- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On 05/06/2011 03:14 PM, Christopher Browne wrote: On Fri, May 6, 2011 at 2:32 PM, Greg Smithg...@2ndquadrant.com wrote: Christopher Browne wrote: I'm getting paper cuts quite a bit these days over the differences between what different packaging systems decide to install. The one *I* get notably bit on, of late, is that I have written code that expects to have pg_config to do some degree of self-discovery, only to find production folk complaining that they only have psql available in their environment. Given the other improvements in being able to build extensions in 9.1, we really should push packagers to move pg_config from the PostgreSQL development package into the main one starting in that version. I've gotten bit by this plenty of times. I'm agreeable to that, in general. If there's a server package and a client package, it likely only fits with the server package. On a host where only the client is installed, they won't be able to install extensions, so it's pretty futile to have it there. I don't agree. It can be useful even there, to see how the libraries are configured, for example. I'd be inclined to bundle it with postgresql-libs or the moral equivalent. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 21:19, Andrew Dunstan and...@dunslane.net wrote: On 05/06/2011 03:14 PM, Christopher Browne wrote: On Fri, May 6, 2011 at 2:32 PM, Greg Smith g...@2ndquadrant.com wrote: Christopher Browne wrote: I'm getting paper cuts quite a bit these days over the differences between what different packaging systems decide to install. The one *I* get notably bit on, of late, is that I have written code that expects to have pg_config to do some degree of self-discovery, only to find production folk complaining that they only have psql available in their environment. Given the other improvements in being able to build extensions in 9.1, we really should push packagers to move pg_config from the PostgreSQL development package into the main one starting in that version. I've gotten bit by this plenty of times. I'm agreeable to that, in general. If there's a server package and a client package, it likely only fits with the server package. On a host where only the client is installed, they won't be able to install extensions, so it's pretty futile to have it there. I don't agree. It can be useful even there, to see how the libraries are configured, for example. I'd be inclined to bundle it with postgresql-libs or the moral equivalent. +1. And it's not like it wastes a huge amount of space... -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] pg_upgrade's bindir options could be optional
Just a thought: To make things a bit easier, the bindir options of pg_upgrade (-b/-B, --old-bindir/--new-bindir) could be made optional, I think. The new bindir should normally be the one that pg_upgrade itself is in. And the old bindir could be found out by looking at the postmaster.opts file in the old data directory. At least by default, this would make the pg_upgrade invocation a lot more compact; it would just be: pg_upgrade -d olddir -D newdir. Comments? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [pgsql-advocacy] New Canadian nonprofit for trademark, postgresql.org domain, etc.
On Fri, May 6, 2011 at 3:01 PM, Gilberto Castillo Martínez gilberto.casti...@etecsa.cu wrote: Josh, This would imply an improvement in the terms of the U.S. embargo on Cuba. That would be really beneficial to all regional communities. I wouldn't get overly optimistic about that - the purpose of this is a bit less sweeping than you may be thinking. The trademarks and domain names that exist are already, today, held in Canada, so this isn't a move of everything to Canada. If the corporation became *really active* in extra senses (e.g. - handling donations, operating events), that would add some Canadian activity that doesn't exist today, but it's not clear that it would necessarily get really active that way. And it is quite possible that such activity would be pretty Canada-specific, as the legalities of doing things in foreign countries are always discouragingly complicated. Adding regional activity is typically a good thing, but trying to cross borders is always a complicating factor, even under ideal circumstances. -- When confronted by a difficult problem, solve it by reducing it to the question, How would the Lone Ranger handle this? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] New Canadian nonprofit for trademark, postgresql.org domain, etc.
On Fri, May 6, 2011 at 2:41 PM, Josh Berkus j...@agliodbs.com wrote: We have, however, come up with a potential plan to change this. Marc has agreed to transfer the community assets to a new Canadian nonprofit which we set up for the purpose. The PostgreSQL Core Team supports this solution, and as such I've been talking to Canadian attorneys about setting up the NPO (we need an entity in Canada because of the trademark). The Funds Group has approved spending SPI money to pay for legal and operational fees for the corporation. Nice! -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [pgsql-advocacy] New Canadian nonprofit for trademark, postgresql.org domain, etc.
On 05/06/2011 03:30 PM, Christopher Browne wrote: If the corporation became *really active* in extra senses (e.g. - handling donations, operating events), that would add some Canadian activity that doesn't exist today, but it's not clear that it would necessarily get really active that way. I'd be just as happy if it didn't. It's much more attractive as an entity that does almost nothing. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [pgsql-advocacy] New Canadian nonprofit for trademark, postgresql.org domain, etc.
Christopher Browne cbbro...@gmail.com writes: On Fri, May 6, 2011 at 3:01 PM, Gilberto Castillo Martínez gilberto.casti...@etecsa.cu wrote: This would imply an improvement in the terms of the U.S. embargo on Cuba. I wouldn't get overly optimistic about that - the purpose of this is a bit less sweeping than you may be thinking. The trademarks and domain names that exist are already, today, held in Canada, so this isn't a move of everything to Canada. Yes. The proposed change would have no effect whatsoever on the legal situation of anyone who's subject to export control laws. To my mind, there's precisely one reason for setting this up as a Canadian non-profit rather than anything else; namely that one of the assets Marc is offering to donate is the Canadian trademark on PostgreSQL, and we need a Canadian entity to own that. (FWIW, I doubt that that trademark has any great value in itself. But as long as it exists and is held in community hands, that will make it much harder for someone hostile to register the name elsewhere and then use it against the community. I wouldn't be surprised to find the USPTO clueless enough to allow, say, Oracle to trademark the name --- except that a trademark name search would turn up the Canadian mark, and that would at least get them to ask some questions first.) regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [pgsql-advocacy] New Canadian nonprofit for trademark, postgresql.org domain, etc.
2011/5/6 Josh Berkus j...@agliodbs.com: Hackers, Community members: As some of you already know, several important community assets are held in the personal name of Marc Fournier for historical reasons. These assets include several DNS domains (including postgresql.org), our SSL key, and a Canadian trademark, and possibly a server as well in the future. For years, we have had the issue that if anything happened to Marc, getting control of these assets could be difficult and cause us weeks of wasted time, and perhaps even result in www.postgresql.org being offline for days or weeks. Even to date, we've had issues where problems have happened while Marc was away and we've been unable to resolve them quickly. We have, however, come up with a potential plan to change this. Marc has agreed to transfer the community assets to a new Canadian nonprofit which we set up for the purpose. The PostgreSQL Core Team supports this solution, and as such I've been talking to Canadian attorneys about setting up the NPO (we need an entity in Canada because of the trademark). The Funds Group has approved spending SPI money to pay for legal and operational fees for the corporation. Of course, a Canadian nonprofit could also act as a regional fundraiser/funder for events in Canada if anyone gets motivated to carry this out. For simplicity, the new NPO would initially be run by a small appointed board, initially consisting of Marc Fournier, Dave Page, Chris Browne and myself. We'd have a first board meeting after incorporation and select additional/alternate board members at that time. If someone gets motivated to build up Canadian community activity, the membership of the NPO could be expanded in the future, and new board members could be elected. Otherwise, the nonprofit could run under a stewardship board indefinitely. I think it might be better if the association doesn't need (or have) any activity other than 'technical', and to set up another nonprofit association for real activity. 
At this point, I am talking to attorneys about incorporation and bylaws. So now is a very good time for anyone in the community to voice questions, objections, ideas, concerns, or alternatives. No objections; it is a good idea, and it's great that you're handling this. Thank you. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-advocacy mailing list (pgsql-advoc...@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-advocacy -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
Magnus Hagander mag...@hagander.net writes: On Fri, May 6, 2011 at 21:19, Andrew Dunstan and...@dunslane.net wrote: On 05/06/2011 03:14 PM, Christopher Browne wrote: If there's a server package and a client package, it likely only fits with the server package. On a host where only the client is installed, they won't be able to install extensions, so it's pretty futile to have it there. I don't agree. It can be useful even there, to see how the libraries are configured, for example. I'd be inclined to bundle it with postgresql-libs or the moral equivalent. +1. Well, actually, I think packagers have generally put it into a -devel subpackage. If it were in either a server or client package there would be much less of an issue. Bundling pg_config into a -libs package is probably not going to happen, at least not on Red Hat systems, because it would create multilib issues (ie, you're supposed to be able to install 32-bit and 64-bit libraries concurrently, but there's noplace to put a /usr/bin file without causing a conflict). FWIW, I did move pg_config from -devel to the main (really client) postgresql package in Fedora, as of 9.0. That will ensure it's present in either client or server installations. Eventually that packaging will reach RHEL ... regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On 05/06/2011 04:06 PM, Tom Lane wrote: Magnus Hagandermag...@hagander.net writes: On Fri, May 6, 2011 at 21:19, Andrew Dunstanand...@dunslane.net wrote: On 05/06/2011 03:14 PM, Christopher Browne wrote: If there's a server package and a client package, it likely only fits with the server package. On a host where only the client is installed, they won't be able to install extensions, so it's pretty futile to have it there. I don't agree. It can be useful even there, to see how the libraries are configured, for example. I'd be inclined to bundle it with postgresql-libs or the moral equivalent. +1. Well, actually, I think packagers have generally put it into a -devel subpackage. If it were in either a server or client package there would be much less of an issue. Bundling pg_config into a -libs package is probably not going to happen, at least not on Red Hat systems, because it would create multilib issues (ie, you're supposed to be able to install 32-bit and 64-bit libraries concurrently, but there's noplace to put a /usr/bin file without causing a conflict). FWIW, I did move pg_config from -devel to the main (really client) postgresql package in Fedora, as of 9.0. That will ensure it's present in either client or server installations. Eventually that packaging will reach RHEL ... That's reasonable, and certainly better than having it in -devel. cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] VARIANT / ANYTYPE datatype
Excerpts from Darren Duncan's message of mié may 04 15:33:33 -0300 2011: I see VARIANT/ANYTYPE as the most general case of supporting union types, which, say, could have more specific examples of allow any number or date here but nothing else. If VARIANT is supported, unions in general ought to be also. Okay, so aside from the performance (storage reduction) gained, there's this argument for having variant/union types. It seems to me that this is indeed possible to build. Completely general VARIANT, though, is rather complex. A declared union, where you specify exactly which types can be part of the union, can be catalogued, so that the system knows exactly where to look when a type needs to be modified. A general VARIANT, however, looks complex to me to solve. The problem is this: if a user attempts to drop a type, and this type is used in a variant somewhere, we would lose the stored data. So the drop needs to be aborted. Similarly, if we alter a type (easy example: a composite type) used in a variant, we need to cascade to modify all rows using that composite. If the unions that use a certain type are catalogued, we at least know what tables to scan to cascade. In a general variant, the system catalogs do not have the information of what type each variant masquerades as. We would need to examine the variant's masqueraded types on each insert; if the current type is not found, add it. This seems a bit expensive. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_upgrade's bindir options could be optional
Peter Eisentraut pete...@gmx.net writes: Just a thought: To make things a bit easier, the bindir options of pg_upgrade (-b/-B, --old-bindir/--new-bindir) could be made optional, I think. The new bindir should normally be the one that pg_upgrade itself is in. And the old bindir could be found out by looking at the postmaster.opts file in the old data directory. At least by default, this would make the pg_upgrade invocation a lot more compact; it would just be: pg_upgrade -d olddir -D newdir. Comments? I don't think we should rely on postmaster.opts being there, let alone being trustworthy. It's probably not unreasonable to let --new-bindir default to the directory pg_upgrade is in, but I'm much less comfortable with allowing --old-bindir to be defaulted. As an example, the proposed defaults would be not only wrong, but disastrous in the perfectly-reasonable situation where the user has moved the old installation aside and then installed the new executables in the same place the old ones used to be. My current RPM packaging of pg_upgrade would be at risk for the same reason. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] VARIANT / ANYTYPE datatype
On 05/06/2011 04:08 PM, Alvaro Herrera wrote: Excerpts from Darren Duncan's message of mié may 04 15:33:33 -0300 2011: I see VARIANT/ANYTYPE as the most general case of supporting union types, which, say, could have more specific examples of allow any number or date here but nothing else. If VARIANT is supported, unions in general ought to be also. Okay, so aside from the performance (storage reduction) gained, there's this argument for having variant/union types. It seems to me that this is indeed possible to build. Completely general VARIANT, though, is rather complex. A declared union, where you specify exactly which types can be part of the union, can be catalogued, so that the system knows exactly where to look when a type needs to be modified. A general VARIANT, however, looks complex to me to solve. The problem is this: if a user attempts to drop a type, and this type is used in a variant somewhere, we would lose the stored data. So the drop needs to be aborted. Similarly, if we alter a type (easy example: a composite type) used in a variant, we need to cascade to modify all rows using that composite. If the unions that use a certain type are catalogued, we at least know what tables to scan to cascade. In a general variant, the system catalogs do not have the information of what type each variant masquerades as. We would need to examine the variant's masqueraded types on each insert; if the current type is not found, add it. This seems a bit expensive. So how is a declared union going to look and operate? Something like this? CREATE TYPE foo AS UNION (ival int, tval text, tsval timestamptz); CREATE TABLE bar (myunion foo); INSERT INTO bar (myunion) VALUES (ival=1), (tval='some text'); UPDATE bar SET myunion.tsval = CURRENT_TIMESTAMP; Something like that could actually be quite nice for a number of purposes. 
cheers andrew -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] VARIANT / ANYTYPE datatype
Alvaro Herrera alvhe...@commandprompt.com writes: Excerpts from Darren Duncan's message of mié may 04 15:33:33 -0300 2011: I see VARIANT/ANYTYPE as the most general case of supporting union types, which, say, could have more specific examples of allow any number or date here but nothing else. If VARIANT is supported, unions in general ought to be also. Okay, so aside from the performance (storage reduction) gained, there's this argument for having variant/union types. It seems to me that this is indeed possible to build. Completely general VARIANT, though, is rather complex. Yeah. I have no particular objection to a UNION over a specified set of types, but am not very happy with the idea of an unconstrained union. Also, a UNION declaration would allow attaching a field name to each containable type, as I see Andrew just posted about. That seems like potentially a good idea. In a general variant, the system catalogs do not have the information of what type each variant masquerades as. We would need to examine the variant's masqueraded types on each insert; if the current type is not found, add it. This seems a bit expensive. Not to mention race-condition-prone. How do you ensure someone is not inserting another instance of the variant, with some previously not used content type, while this is going on? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] VARIANT / ANYTYPE datatype
On Fri, May 6, 2011 at 3:18 PM, Andrew Dunstan and...@dunslane.net wrote: On 05/06/2011 04:08 PM, Alvaro Herrera wrote: Excerpts from Darren Duncan's message of mié may 04 15:33:33 -0300 2011: I see VARIANT/ANYTYPE as the most general case of supporting union types, which, say, could have more specific examples of allow any number or date here but nothing else. If VARIANT is supported, unions in general ought to be also. Okay, so aside from the performance (storage reduction) gained, there's this argument for having variant/union types. It seems to me that this is indeed possible to build. Completely general VARIANT, though, is rather complex. A declared union, where you specify exactly which types can be part of the union, can be catalogued, so that the system knows exactly where to look when a type needs to be modified. A general VARIANT however looks complex to me to solve. The problem is this: if an user attempts to drop a type, and this type is used in a variant somewhere, we would lose the stored data. So the drop needs to be aborted. Similarly, if we alter a type (easy example: a composite type) used in a variant, we need to cascade to modify all rows using that composite. If the unions that use a certain type are catalogued, we at least know what tables to scan to cascade. In a general variant, the system catalogs do not have the information of what type each variant masquerades as. We would need to examine the variant's masqueraded types on each insert; if the current type is not found, add it. This seems a bit expensive. So how is a declared union going to look and operate? Something like this? CREATE TYPE foo AS UNION (ival int, tval text, tsval timestamptz): CREATE TABLE bar (myunion foo); INSERT INTO bar (myunion) VALUES (ival=1), (tval='some text'); UPDATE bar SET myunion.tsval = CURRENT_TIMESTAMP; Something like that could actually be quite nice for a number of purposes. using your hypothetical example, could you cast types to the union? 
select 1::int::foo; record types would presumably work as well? you could do a lot of *really* neat stuff that way... merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Compiling a PostgreSQL 7.3.2 project with Eclipse
Thanks for the help, that is the problem I think. However, I didn't manage to test it yet. When I compile the project in Eclipse, I get a postgres executable, but not the postmaster. I found somewhere on the net that postmaster and postgres are actually the same program, and when I install TelegraphCQ 2.1 through the shell, I get a postmaster, which is a link to a file. (Since my Linux knowledge is rather poor, I haven't yet managed to find out which file, but I strongly suspect postgres.) Do you maybe have an idea how to make that link, or how to start postgres as a daemon? Sorry for the bother, again. Concerning TelegraphCQ, my aim is to add geospatial capabilities, to actually merge it with PostGIS. It's part of my PhD thesis. Thanks, K.K. -Original Message- From: Greg Smith [mailto:g...@2ndquadrant.com] Sent: Friday, May 06, 2011 8:32 PM To: Krešimir Križanović Cc: pgsql-hackers@postgresql.org Subject: Re: [HACKERS] Compiling a PostgreSQL 7.3.2 project with Eclipse Krešimir Križanović wrote: However, when I compile and run the project, in the Eclipse console I get: POSTGRES backend interactive interface $Revision: 1.115 $ $Date: 2006/02/06 01:19:46 $ I don't know what to do with this backend interface. I would like to get a postmaster running and to connect to a database with psql. However, when I try to start psql, it says that there is no postmaster running. An example of a session with that interface is at http://archives.postgresql.org/pgsql-hackers/2000-01/msg01471.php ; you may find it useful at some point. Your problem is caused by a change made to PostgreSQL's naming convention after the 7.3 fork you're using. In earlier versions, postgres meant start the server in single user mode: http://www.postgresql.org/docs/7.3/static/app-postgres.html While postmaster started it as a proper server: http://www.postgresql.org/docs/7.3/static/app-postmaster.html In modern versions, they are the same thing. 
The Eclipse example uses postgres, which starts the regular server now, but in 7.3 only started single user mode. Change where you run the program to use postmaster instead and it should work more like what you're expecting. Doing something useful with the TelegraphCQ code is probably going to take you a while. -- Greg Smith 2ndQuadrant USg...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
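For what it's worth, the postmaster file mentioned above is normally just a symlink to the postgres executable, so recreating the link by hand is one line. A minimal illustration in a throwaway directory (this is not a real installation; adjust paths to your own build):

```shell
# postmaster is traditionally just a symlink to the postgres binary;
# the program dispatches on the name it was invoked under.
BINDIR=$(mktemp -d)
touch "$BINDIR/postgres"               # stand-in for the compiled binary
ln -sf postgres "$BINDIR/postmaster"   # postmaster -> postgres
readlink "$BINDIR/postmaster"          # prints: postgres
```

With the link in place, invoking the binary under the name postmaster is what makes a 7.3-era build start as a proper server rather than in single-user mode.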
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
On 05/05/2011 05:06 AM, Mitsuru IWASAKI wrote: In summary, PgFincore's target is File System Buffer Cache, Buffer Cache Hibernation's target is DB Buffer Cache (shared buffers). Right. The thing to realize is that shared_buffers is becoming a smaller fraction of the total RAM used by the database every year. On Windows, useful settings have been stuck at less than 512MB for a while now. And on UNIX systems, around 8GB seems to be the effective upper limit. Best case, shared_buffers is only going to be around 25% of total RAM; worst-case, you might have a Windows server with 64GB of RAM where shared_buffers is less than 1% of total RAM. There's nothing wrong with the general idea you're suggesting. It's just only targeting a small (and shrinking) subset of the real problem here. Rebuilding cache state starts with shared_buffers, but that's not enough of the problem to be an effective tweak on many systems. I think that all the complexity with CRCs etc. is unlikely to lead anywhere too, and those two issues are not completely unrelated. The simplest, safest thing here is the right way to approach this, not the most complicated one, and a simpler format might add some flexibility here to reload more cache state too. The bottleneck on reloading the cache state is reading everything from disk. Trying to micro-optimize any other part of that is moving in the wrong direction to me. I doubt you'll ever measure a useful benefit that overcomes the expense of maintaining the code. And you seem to be moving to where someone can't restore cache state when they change shared_buffers. A simpler implementation might still work in that situation: if shared_buffers shrinks, reload until you run out of buffers; if it grows, reload until you've restored everything that was saved. 
-- Greg Smith 2ndQuadrant USg...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Prefered Types
Excerpts from Tom Lane's message of mié may 04 17:14:25 -0300 2011: Robert Haas robertmh...@gmail.com writes: It's not clear to me whether in any of this there is a solution to the problem of int2 being a second-class citizen. I've always felt that the basic problem int2 has got is that the parser initially types integer-looking constants as int4 or larger, even if they'd fit in int2. If it typed them as int2 then the unintuitive behaviors would largely go away, without any need for allowing implicit down-casting from int4 to int2. I actually tried that once, probably close to ten years ago, and it blew up real good because many cases that formerly were considered an exact match no longer were, and the parser started making some pretty surprising (or at least not backwards compatible) resolution choices. Maybe with a more controllable type-promotion mechanism we could get better results there. BTW, not to rain on the parade or anything, but I'll bet that rejiggering anything at all here will result in whining that puts the 8.3-era removal of a few implicit casts to shame. If the new behavior is really significantly better *for users* then we can probably withstand the complaints; but if it's just marginal improvements or just improves life for hypothetical future extensions, it's not going to fly. I remember that one of the problems put forth against this idea was that stuff like int2+int2 which currently returns int2 would have to be changed to return int4, otherwise it risks overflow which it currently doesn't (not because the operator would change, but rather because some expressions would be lexed differently). And so on with other operators. I am not sure how severe this problem is for users in practice -- my uneducated guess is that mostly they will not care about such changes. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. 
PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] crash-safe visibility map, take five
On Wed, Mar 30, 2011 at 8:52 AM, Heikki Linnakangas heikki.linnakan...@enterprisedb.com wrote: Another question: To address the problem in http://archives.postgresql.org/pgsql-hackers/2010-02/msg02097.php , should we just clear the vm before the log of insert/update/delete? This may reduce the performance, is there another solution? Yeah, that's a straightforward way to fix it. I don't think the performance hit will be too bad. But we need to be careful not to hold locks while doing I/O, which might require some rearrangement of the code. We might want to do a similar dance that we do in vacuum, and call visibilitymap_pin first, then lock and update the heap page, and then set the VM bit while holding the lock on the heap page. Here's an attempt at implementing the necessary gymnastics. Comments? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company visibility-map-v2.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
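The ordering being discussed (pin the visibility map page first, since that may require I/O; then lock and update the heap page; then set the VM bit while the heap lock is still held) can be sketched abstractly. This toy Python model only illustrates the ordering constraint, and is not PostgreSQL code:

```python
import threading

class Page:
    """Toy stand-in for a shared buffer page."""
    def __init__(self):
        self.lock = threading.Lock()
        self.pinned = False
        self.all_visible = False

def visibilitymap_pin(vm_page):
    # May perform I/O to bring the VM page in, so it must happen
    # BEFORE any heap page lock is taken.
    vm_page.pinned = True

def update_heap_and_set_vm(heap_page, vm_page):
    visibilitymap_pin(vm_page)          # step 1: pin (possible I/O), no locks held
    with heap_page.lock:                # step 2: lock and update the heap page
        heap_page.all_visible = True
        if vm_page.pinned:              # step 3: set the VM bit while still
            vm_page.all_visible = True  #         holding the heap page lock

heap, vm = Page(), Page()
update_heap_and_set_vm(heap, vm)
print(heap.all_visible, vm.all_visible)  # True True
```

The point of the model is simply that no I/O happens between acquiring and releasing the heap page lock, which is the rearrangement the message describes.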
Re: [HACKERS] Why not install pgstattuple by default?
Christopher Browne cbbro...@gmail.com writes: But people are evidently still setting packaging policies based on how things were back in 7.3, even though that perhaps isn't necessary anymore. FWIW, once you get past the client versus server distinction, I think most subpackaging decisions are based on either the idea that only a minority of people will want this, or a desire to limit how many dependencies are pulled in by the main package(s). Both of those concerns apply to various subsets of -contrib, which means it's going to be hard to persuade packagers to fold -contrib into the -server package altogether. Nor would you gain their approval by trying to pre-empt the decision. We might get somewhere by trying to identify a small set of particularly popular contrib modules that don't add any extra dependencies, and then recommending to packagers that those ones get bundled into the main server package. Certainly it's not a huge amount of code; less than 2MB these days.

% wc `dpkg -L postgresql-contrib-9.0` | tail -1
  15952   67555 1770987 total

Well, to add some concrete facts rather than generalities to my own post, here are the sizes of the built RPMs from my last build for Fedora:

-rw-r--r--. 1 tgl tgl  3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl 27337677 Apr 18 10:51 postgresql-debuginfo-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   961660 Apr 18 10:50 postgresql-devel-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  7569048 Apr 18 10:50 postgresql-docs-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl   246506 Apr 18 10:50 postgresql-libs-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl    64940 Apr 18 10:50 postgresql-plperl-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl    65776 Apr 18 10:50 postgresql-plpython-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl    45941 Apr 18 10:50 postgresql-pltcl-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  1370509 Apr 18 10:50 postgresql-test-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  3644113 Apr 18 10:50 postgresql-upgrade-9.0.4-1.fc13.x86_64.rpm

The separate debuginfo package is distro policy enforced by the toolchain; I couldn't do anything about that even if I wanted to. The separate -libs subpackage is also hard to avoid because of distro policy about multilib installations. Separating devel support files (such as headers) is also standard practice. The other subdivisions are either my fault or those of my predecessors. plperl, plpython, and pltcl are split out for dependency reasons, ie to not have the -server package require you to install those languages and their respective ecosystems. I think the separation of the -docs, -test, and -upgrade subpackages is also pretty easy to defend on the grounds that they're big and not everyone wants 'em, especially not in production. That leaves us with these three subpackages about which there's room for argument:

-rw-r--r--. 1 tgl tgl 3839458 Apr 18 10:50 postgresql-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl  490788 Apr 18 10:50 postgresql-contrib-9.0.4-1.fc13.x86_64.rpm
-rw-r--r--. 1 tgl tgl 5302117 Apr 18 10:50 postgresql-server-9.0.4-1.fc13.x86_64.rpm

Merging -contrib into the server package would increase the size of the latter by almost 10%, which is enough to bother people. Also, a bit of dependency extraction shows that -contrib has these dependencies beyond the ones in the two main packages: libcrypt.so.1, libossp-uuid.so.16, libxslt.so.1. That's not a particularly large list, I guess, but they're still the sorts of dependencies that don't win any friends when it's time to get the distro to fit on a DVD. 
Bottom line is that I'd rather have a smaller postgresql-server package that gets included in the shipping DVD than a complete one that gets kicked off because it's too large and pulls in too many other non-core dependencies. So, again, some selective migration of contrib modules into the main -server package might be doable, but the key word there is selective. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Prefered Types
2011/5/4 Tom Lane t...@sss.pgh.pa.us: Perhaps it would be adequate to allow automatic resolution of an overloading conflict only when one of the available alternatives dominates all others, ie, none of the argument positions requires a longer distance cast than is used in that position by any other available alternative. I'm just throwing that out as a possibility, I haven't tried it. That works OK for most things, but there's one case where I think we might need a better solution - suppose A is a subtype of B. It's fairly common to define a function or operator f(A,A) and f(B,B), and to want f(A,B) or f(B,A) to be interpreted as the latter rather than the former. For example, let A=int2, B=int4, f=+. Now, we can (and currently do) handle that by just defining all the combinations explicitly, but people don't always want to do that. It's not clear to me whether in any of this there is a solution to the problem of int2 being a second-class citizen. I've always felt that the basic problem int2 has got is that the parser initially types integer-looking constants as int4 or larger, even if they'd fit in int2. If it typed them as int2 then the unintuitive behaviors would largely go away, without any need for allowing implicit down-casting from int4 to int2. I actually tried that once, probably close to ten years ago, and it blew up real good because many cases that formerly were considered an exact match no longer were, and the parser started making some pretty surprising (or at least not backwards compatible) resolution choices. Maybe with a more controllable type-promotion mechanism we could get better results there. Maybe, but I'm not convinced. I think that's using the lexer to do the type system's job. Suppose we add a type uint4, for example: what then? BTW, not to rain on the parade or anything, but I'll bet that rejiggering anything at all here will result in whining that puts the 8.3-era removal of a few implicit casts to shame. 
If the new behavior is really significantly better *for users* then we can probably withstand the complaints; but if it's just marginal improvements or just improves life for hypothetical future extensions, it's not going to fly. Yeah, I share that fear, which is why I think the idea of generalizing typispreferred to an integer has more than no merit: it's less likely to break in ways we can't anticipate. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] VARIANT / ANYTYPE datatype
Thanks for picking up on my mentioning union types; I wasn't sure if anyone did. Merlin Moncure wrote: On Fri, May 6, 2011 at 3:18 PM, Andrew Dunstan and...@dunslane.net wrote: On 05/06/2011 04:08 PM, Alvaro Herrera wrote: Excerpts from Darren Duncan's message of mié may 04 15:33:33 -0300 2011: I see VARIANT/ANYTYPE as the most general case of supporting union types, which, say, could have more specific examples of allow any number or date here but nothing else. If VARIANT is supported, unions in general ought to be also. Okay, so aside from the performance (storage reduction) gained, there's this argument for having variant/union types. It seems to me that this is indeed possible to build. Completely general VARIANT, though, is rather complex. A declared union, where you specify exactly which types can be part of the union, can be catalogued, so that the system knows exactly where to look when a type needs to be modified. A general VARIANT however looks complex to me to solve. The problem is this: if an user attempts to drop a type, and this type is used in a variant somewhere, we would lose the stored data. So the drop needs to be aborted. Similarly, if we alter a type (easy example: a composite type) used in a variant, we need to cascade to modify all rows using that composite. If the unions that use a certain type are catalogued, we at least know what tables to scan to cascade. In a general variant, the system catalogs do not have the information of what type each variant masquerades as. We would need to examine the variant's masqueraded types on each insert; if the current type is not found, add it. This seems a bit expensive. So how is a declared union going to look and operate? Something like this? 
CREATE TYPE foo AS UNION (ival int, tval text, tsval timestamptz); CREATE TABLE bar (myunion foo); INSERT INTO bar (myunion) VALUES (ival=1), (tval='some text'); UPDATE bar SET myunion.tsval = CURRENT_TIMESTAMP; Something like that could actually be quite nice for a number of purposes. using your hypothetical example, could you cast types to the union? select 1::int::foo; record types would presumably work as well? you could do a lot of *really* neat stuff that way... Like with other respondents to this topic, I consider it much more useful and important, as well as practical, to support explicitly defined type unions than a wide-open ANYTYPE. As for how it would operate: The example above implies a union type implementation that is like C's concept of such, where you have to explicitly state how you want the value bit pattern to be interpreted, by naming ival/etc, rather than the DBMS just knowing that a particular value is of a specific type, because per-value stored metadata says so (like with SQLite). While that might be best in practice for implementation, I had envisioned something more like set unions, so you could instead do it like this: CREATE TYPE foo AS UNION (int, text, timestamptz); CREATE TABLE bar (myunion foo); INSERT INTO bar (myunion) VALUES (1), ('some text'); UPDATE bar SET myunion = CURRENT_TIMESTAMP; Unlike a record type, where multiple attributes may have the same type, presumably with a union the member types would all be distinct, and so you could use the type name itself to refer to each option; you don't have to make up ival or whatever ... unless there are situations where types don't have names. When doing operations that are type-generic, such as equality test or assignment, especially assignment, between 2 things that are both declared to be of type foo, you could just do it with no verbosity, the same as if you were doing 2 int or text or whatever. 
When doing operations specific to int or text or whatever, or if you are assigning a foo-declared thing to an int/text-declared thing, then you'd want an explicit cast or assertion, such as select myunion::int + 3 as answer from bar where is_a(myunion,'int'). Whether you want an explicit cast to go the other way, I would assume you don't need to, like when you have DOMAINs; eg, I would expect the 4th line above to just work, because the system knows the type of CURRENT_TIMESTAMP and it knows that this is a member of the union type of myunion. I see a UNION type as being like a DOMAIN type in reverse. -- Darren Duncan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
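The declared-union semantics being discussed, a runtime tag tracked per value, implicit conversion into the union on assignment, explicit extraction out of it, can be modeled in miniature. The Python sketch below is purely illustrative: the is_a/cast names and the member set are assumptions for the example, not a proposed PostgreSQL API or catalog representation.

```python
# Toy model of CREATE TYPE foo AS UNION (int, text, ...):
# each stored value carries a tag naming which member type it holds.
ALLOWED = {"int": int, "text": str}   # timestamptz omitted for brevity

class UnionValue:
    def __init__(self, value):
        # Implicit "cast" into the union: infer the member type.
        for tag, pytype in ALLOWED.items():
            if isinstance(value, pytype):
                self.tag, self.value = tag, value
                return
        raise TypeError("value is not a member of this union")

    def is_a(self, tag):
        # Runtime type test, like is_a(myunion, 'int') in the thread.
        return self.tag == tag

    def cast(self, tag):
        # Explicit cast out of the union, like myunion::int.
        if self.tag != tag:
            raise TypeError(f"value holds {self.tag}, not {tag}")
        return self.value

v = UnionValue(1)            # assignment into the union needs no annotation
print(v.is_a("int"))         # True
print(v.cast("int") + 3)     # 4, like myunion::int + 3
```

The model captures the asymmetry in the messages above: storing into the union is implicit because the system knows the source type, while pulling a value back out requires an explicit cast or an is_a test.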
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
On Fri, May 6, 2011 at 5:31 PM, Greg Smith g...@2ndquadrant.com wrote: On 05/05/2011 05:06 AM, Mitsuru IWASAKI wrote: In summary, PgFincore's target is File System Buffer Cache, Buffer Cache Hibernation's target is DB Buffer Cache(shared buffers). Right. The thing to realize is that shared_buffers is becoming a smaller fraction of the total RAM used by the database every year. On Windows it's been stuck at useful settings being less than 512MB for a while now. And on UNIX systems, around 8GB seems to be effective upper limit. Best case, shared_buffers is only going to be around 25% of total RAM; worst-case, approximately, you might have Windows server with 64GB of RAM where shared_buffers is less than 1% of total RAM. There's nothing wrong with the general idea you're suggesting. It's just only targeting a small (and shrinking) subset of the real problem here. Rebuilding cache state starts with shared_buffers, but that's not enough of the problem to be an effective tweak on many systems. I think that all the complexity with CRCs etc. is unlikely to lead anywhere too, and those two issues are not completely unrelated. The simplest, safest thing here is the right way to approach this, not the most complicated one, and a simpler format might add some flexibility here to reload more cache state too. The bottleneck on reloading the cache state is reading everything from disk. Trying to micro-optimize any other part of that is moving in the wrong direction to me. I doubt you'll ever measure a useful benefit that overcomes the expense of maintaining the code. And you seem to be moving to where someone can't restore cache state when they change shared_buffers. A simpler implementation might still work in that situation; reload until you run out of buffers if shared_buffers shrinks, reload until you're done with the original size. Yeah, I'm pretty well convinced this whole approach is a dead end. Priming the OS buffer cache seems way more useful. 
I also think saving the blocks to be read rather than the actual blocks makes a lot more sense. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
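The "save the block addresses, not the blocks" idea, combined with the shrink-tolerant reload from the previous message, could be sketched as follows. All names and the saved format here are invented for illustration; this is not the submitted patch:

```python
# Hibernate: record which blocks were cached, not their contents.
def save_cache_state(buffers):
    # buffers: list of (relfilenode, blockno) for each valid buffer
    return list(buffers)

# Restore: re-read saved blocks until we hit the (possibly new,
# possibly smaller) shared_buffers limit, then just stop.
def restore_cache_state(saved, nbuffers, read_block):
    loaded = []
    for addr in saved[:nbuffers]:   # tolerate a shrunken shared_buffers
        loaded.append(read_block(addr))
    return loaded

saved = save_cache_state([(16384, b) for b in range(8)])
# shared_buffers was shrunk from 8 to 4 between shutdown and restart:
print(len(restore_cache_state(saved, 4, lambda addr: addr)))  # 4
```

Because only addresses are stored, the format survives a shared_buffers change for free, and the expensive part, the actual disk reads, happens at restore time where it would anyway.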
Re: [HACKERS] Why not install pgstattuple by default?
All, We might get somewhere by trying to identify a small set of particularly popular contrib modules that don't add any extra dependencies, and then recommending to packagers that those ones get bundled into the main server package. Yeah, I wasn't thinking of including all of contrib. There's a lot of reasons not to do that. I was asking about pgstattuple in particular, since it's: (a) small, (b) has no external dependencies, (c) adds no stability risk or performance overhead, and (d) is usually needed on production systems when it's needed at all. It's possible that we have one or two other diagnostic utilities which meet the above profile. pageinspect, maybe? The reason why this is such an issue is that for big users with high-demand production environments, installing any software, even the postgresql-devel or postgresql-contrib packages, is a big major IT deal which requires weeks of advance scheduling. As a result, diagnostic tools from contrib tend not to be used, because the problem they need to diagnose is much more urgent than that. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Prefered Types
Robert Haas robertmh...@gmail.com writes: 2011/5/4 Tom Lane t...@sss.pgh.pa.us: Perhaps it would be adequate to allow automatic resolution of an overloading conflict only when one of the available alternatives dominates all others, ie, none of the argument positions requires a longer distance cast than is used in that position by any other available alternative. I'm just throwing that out as a possibility, I haven't tried it. That works OK for most things, but there's one case where I think we might need a better solution - suppose A is a subtype of B. It's fairly common to define a function or operator f(A,A) and f(B,B), and to want f(A,B) or f(B,A) to be interpreted as the latter rather than the former. For example, let A=int2, B=int4, f=+. Now, we can (and currently do) handle that by just defining all the combinations explicitly, but people don't always want to do that. That case still works as long as downcasts (int4 -> int2) are either not allowed to be invoked implicitly at all, or heavily penalized in the distance assignments. BTW, not to rain on the parade or anything, but I'll bet that rejiggering anything at all here will result in whining that puts the 8.3-era removal of a few implicit casts to shame. Yeah, I share that fear, which is why I think the idea of generalizing typispreferred to an integer has more than no merit: it's less likely to break in ways we can't anticipate. Well, if you change it to an int and then don't change any of the values from what they were before, I agree. But then there's no point. Presumably, the reason we are doing this is so that we can assign some other preferredness values besides 0/1, and that will change the behavior. We'd better be damn sure that the new behavior is really better. Which is why it seems a bit premature to be working on an implementation when we don't have even a suggestion as to what the behavioral changes ought to be. 
regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 5:58 PM, Josh Berkus j...@agliodbs.com wrote: Yeah, I wasn't thinking of including all of contrib. There's a lot of reasons not to do that. Slightly off-topic, but I really think we would benefit from trying to divide up contrib. Right now it's a mixture of (a) debugging and instrumentation tools (e.g. pgstattuple, pageinspect, pgrowlocks, pg_freespacemap, pg_buffercache), (b) server functionality that is generally useful but not considered worth including in core (e.g. hstore, citext, pg_trgm), (c) deprecated modules that we keep around mostly for hysterical reasons (tsearch2, xml2, intagg), and (d) examples and regression test support (dummy_seclabel, spi, start-scripts). I think it would make things a lot easier for both packagers and actual users if we separated these things into different directories, e.g.:

debugging and instrumentation tools - src/debug
server functionality - contrib
server functionality (deprecated) - contrib/deprecated
examples and regression test support - src/test/examples

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] VARIANT / ANYTYPE datatype
To follow up, an additional feature that would be useful and resembles union types is the variant where you could declare a union type first and then separately other types could declare they are a member of the union. I'm talking loosely about what mixins, type-roles, interfaces, etc. are in other languages. The most trivial example would be declaring an ENUM-alike first and then separately declaring the component values, where the latter declare they are part of the ENUM; this could make it easier to add or change ENUM values. But keep in mind that this is a distinct concept from what we're otherwise talking about as being union types. -- Darren Duncan -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Prefered Types
2011/5/6 Tom Lane t...@sss.pgh.pa.us: Robert Haas robertmh...@gmail.com writes: 2011/5/4 Tom Lane t...@sss.pgh.pa.us: Perhaps it would be adequate to allow automatic resolution of an overloading conflict only when one of the available alternatives dominates all others, ie, none of the argument positions requires a longer distance cast than is used in that position by any other available alternative. I'm just throwing that out as a possibility, I haven't tried it. That works OK for most things, but there's one case where I think we might need a better solution - suppose A is a subtype of B. It's fairly common to define a function or operator f(A,A) and f(B,B), and to want f(A,B) or f(B,A) to be interpreted as the latter rather than the former. For example, let A=int2, B=int4, f=+. Now, we can (and currently do) handle that by just defining all the combinations explicitly, but people don't always want to do that. That case still works as long as downcasts (int4 -> int2) are either not allowed to be invoked implicitly at all, or heavily penalized in the distance assignments. "Not allowed at all" works, but "heavily penalized" doesn't. Suppose A->B has distance 1 and B->A has distance 1000. Then f(A,B) can match f(A,A) with distances (0,1000) or f(B,B) with distances (1,0). If you add up the *total* distance it's easy to say that the latter wins, but if you compare position-by-position as you proposed (and, generally, I agree that's the better route, BTW) then each candidate is superior to the other in one of the two available positions. BTW, not to rain on the parade or anything, but I'll bet that rejiggering anything at all here will result in whining that puts the 8.3-era removal of a few implicit casts to shame. Yeah, I share that fear, which is why I think the idea of generalizing typispreferred to an integer has more than no merit: it's less likely to break in ways we can't anticipate. 
Well, if you change it to an int and then don't change any of the values from what they were before, I agree. But then there's no point. Presumably, the reason we are doing this is so that we can assign some other preferredness values besides 0/1, and that will change the behavior. We'd better be damn sure that the new behavior is really better. Which is why it seems a bit premature to be working on an implementation when we don't have even a suggestion as to what the behavioral changes ought to be. Well, sure, to some degree. But if you keep the currently preferred types as having the highest level of preferred-ness in their same categories, then the only effect (I think) will be to make some cases work that don't now; and that's unlikely to break anything too badly. Going to some whole new system will almost inevitably involve more breakage. Which is why it seems a bit premature to be working on an implementation when we don't have even a suggestion as to what the behavioral changes ought to be. I'm in complete agreement on this point. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On 05/06/2011 05:58 PM, Josh Berkus wrote: Yeah, I wasn't thinking of including all of contrib. There's a lot of reasons not to do that. I was asking about pgstattuple in particular, since it's: (a) small (b) has no external dependencies (c) adds no stability risk or performance overhead (d) is usually needed on production systems when it's needed at all It's possible that we have one or two other diagnostic utilities which meet the above profile. pageinspect, maybe? I use pgstattuple, pageinspect, pg_freespacemap, and pg_buffercache regularly enough that I wish they were more common. Throw in pgrowlocks and you've got the whole group Robert put into the debug set. It makes me sad every time I finish a utility using one of these and realize I'll have to include the whole make sure you have the contrib modules installed disclaimer in its documentation again. These are the only ones I'd care about moving into a more likely place. The rest of the contrib modules are the sort where if you need them, you realize that early and get them installed. These are different by virtue of their need popping up most often during emergencies. The fact that I believe they all match the low impact criteria too makes it even easier to consider. -- Greg Smith 2ndQuadrant USg...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
Robert Haas robertmh...@gmail.com writes: On Fri, May 6, 2011 at 5:58 PM, Josh Berkus j...@agliodbs.com wrote: Yeah, I wasn't thinking of including all of contrib. There's a lot of reasons not to do that. Slightly off-topic, but I really think we would benefit from trying to divide up contrib. [ snip ] I think it would make things a lot easier for both packagers and actual users if we separated these things into different directories, e.g.: debugging and instrumentation tools -> src/debug server functionality -> contrib server functionality (deprecated) -> contrib/deprecated examples and regression test support -> src/test/examples From a packager's standpoint, that would be entirely worthless. The source tree's just a source tree, they don't care what lives where within it. I was just thinking about what it'd take to actually repackage things for Fedora, and the main problem is here: %files contrib ... %{_datadir}/pgsql/contrib/ ... If you're not adept at reading RPM specfiles, what that is saying is that everything that make install has stuck under ${prefix}/share/pgsql/contrib/ is to be included in the contrib RPM. To selectively move some stuff to the server RPM, I'd have to replace that one line with a file-by-file list of *everything* in share/contrib, and then move some of those lines to the %files server section, and then look forward to having to maintain that list in future versions. I'm already maintaining a file-by-file list of contrib's .so's, and I can tell you it's a PITA. As a packager, what I'd really want to see from a division into recommended and not-so-recommended packages is that they get installed into different subdirectories by make install. Then I could just point RPM at those directories and I'd be done. I don't know how practical this is from our development standpoint, nor from a user's standpoint --- I doubt we want to ask people to use different CREATE EXTENSION commands depending on the preferredness of the extension. 
A possibly workable compromise would be to provide two separate makefile installation targets for preferred and less preferred modules. The RPM script could then do something like make install-contrib-preferred ls -R .../sharedir > contrib.files.for.server-package make install-contrib-second-class-citizens ls -R .../sharedir > all.contrib.files ... and then some magic with comm to separate out the contrib ... files not mentioned in contrib.files.for.server-package ... Pretty grotty but it would work. Anyway my point is that this is all driven off the *installed* file tree. A specfile writer doesn't know nor want to know where make install is getting things from in the source tree. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On 5/6/11 3:19 PM, Robert Haas wrote: Slightly off-topic, but I really think we would benefit from trying to divide up contrib. I don't agree, unless by divide up you mean move several things to extensions. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
These are the only ones I'd care about moving into a more likely place. The rest of the contrib modules are the sort where if you need them, you realize that early and get them installed. These are different by virtue of their need popping up most often during emergencies. The fact that I believe they all match the low impact criteria too makes it even easier to consider. Yes, precisely. If I need intarray, I'm going to need it for a development push, which is planned well in advance. But if I need pageinspect, it's almost certainly because an emergency has arisen. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Fix incorrect description at SECURITY LABEL documentation
I found a few misdescriptions at SECURITY LABEL documentation. It might come from the COMMENT ON page. The attached patch will fix them. Thanks, -- KaiGai Kohei kai...@kaigai.gr.jp pgsql-fix-security-label-doc.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Compiling a PostgreSQL 7.3.2 project with Eclipse
Kresimir, Concerning TelegraphCQ, my aim is to add geospatial capabilities, to actually merge it with PostGIS. It's part of my PhD thesis. Hmmm. Given that the TCQ code was never production quality in the first place (i.e. crashed every 60 minutes), this seems like kind of a futile task. Especially since nothing you develop against 7.3 is going to be accepted into either PostgreSQL or PostGIS. Seems like updating the TCQ code would be more useful. Although not as exciting. Too bad Truviso is closed source. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 11:32 PM, Greg Smith g...@2ndquadrant.com wrote: I use pgstattuple, pageinspect, pg_freespacemap, and pg_buffercache regularly enough that I wish they were more common. Throw in pgrowlocks and you've got the whole group Robert put into the debug set. It makes me sad every time I finish a utility using one of these and realize I'll have to include the whole make sure you have the contrib modules installed disclaimer in its documentation again. Well the lightweight way to achieve what you want is to just move these functions into core. There's a pretty good argument to be made for debugging tools being considered an integral part of a base system. I remember making the same argument when Sun first made the radical move for a Unix vendor to stop shipping a working C compiler and debugger as part of the base Solaris packages. The only argument I see as particularly frightening on that front is people playing the sekurity card. A naive attacker who obtains access to the postgres account could do more damage than they might be able to do without these modules installed. Of course an attacker with postgres can do just about anything but it's not entirely baseless -- we don't set up the database with modules like plsh installed by default for example. The only actual security issue I can think of is that the pageinspect module would let users look at deleted records more easily. It would be pretty tricky, but not impossible, to do that without it. -- greg -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_upgrade's bindir options could be optional
Excerpts from Tom Lane's message of vie may 06 17:11:35 -0300 2011: As an example, the proposed defaults would be not only wrong, but disastrous in the perfectly-reasonable situation where the user has moved the old installation aside and then installed the new executables in the same place the old ones used to be. My current RPM packaging of pg_upgrade would be at risk for the same reason. Eh, disastrous? Don't we check the versions reported by each postmaster before attempting to do anything? Because if we do, the worst that would happen is that the user gets a version mismatch error. And if we don't ... well, we should. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] [pgsql-advocacy] New Canadian nonprofit for trademark, postgresql.org domain, etc.
Excerpts from Josh Berkus's message of vie may 06 15:41:02 -0300 2011: We have, however, come up with a potential plan to change this. Marc has agreed to transfer the community assets to a new Canadian nonprofit which we set up for the purpose. The PostgreSQL Core Team supports this solution, and as such I've been talking to Canadian attorneys about setting up the NPO (we need an entity in Canada because of the trademark). The Funds Group has approved spending SPI money to pay for legal and operational fees for the corporation. Excellent news. -- Álvaro Herrera alvhe...@commandprompt.com The PostgreSQL Company - Command Prompt, Inc. PostgreSQL Replication, Consulting, Custom Development, 24x7 support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch for new feature: Buffer Cache Hibernation
Hi, thanks for your comments! I'm glad to discuss this topic. * pgfadv_WILLNEED * pgfadv_WILLNEED_snapshot The former asks to load each segment of a relation *but* the kernel can decide not to do that, or to load only part of each segment. (so it is not as brutal as cat file > /dev/null ) The latter reads *exactly* the blocks required in each segment, not all blocks, except if all were in cache while doing the snapshot. (this one is the part of the snapshot/restore combo) Sorry about that, I'm not so familiar with posix_fadvise(). I'll check posix_fadvise() later. Actually I used to execute a 'cat database_file > /dev/null' script on other DBMSs before starting. # or 'select /*+ INDEX(emp emp_pk) */ count(*) from emp;' to load # index blocks I may prefer the per relation approach (so you can snapshot and restore only the interesting tables/indexes). Given what I read in your patch it looks easy to do, doesn't it? I would like to keep my patch as simple as possible, because it is just a hibernation function, not complicated buffer management. But I want to try improving buffer management on my next vacation. # currently I'm in an 11-day vacation until Sunday. My rough idea on improving buffer management is like this; SQL> alter table table_name buffer pin priority 7; SQL> alter index index_name buffer pin priority 10; This DDL sets a 'buffer pin priority' property on the table/index and also on the buffer descriptors related to the table/index. Optionally, preloading database files into the FS cache and relation blocks into the DB cache would be possible. When a new buffer is required, the buffer manager refers to the priority of each buffer and selects a victim buffer. I think it helps batch jobs run in a better buffer cache condition by giving hints for buffer management. 
For example, job-A reads table_A, index_A and writes only table_B; SQL> alter table table_A buffer pin priority 7; SQL> alter index index_A buffer pin priority 10; SQL> alter table table_B buffer pin priority 1; keeps buffers of index_A and table_A (table_B's buffers will become victims soon). Buffer pin priority can be reset like this; SQL> alter system buffer pin priority 5; Next, job-B reads and writes table_C, and reads index_C with preloading; SQL> alter table table_C buffer pin priority 5; SQL> alter index index_C buffer pin priority 10 with preloading 50%; something like this. I also prefer the idea of keeping a map of the Buffer Cache (yes, like what I do with pgfincore) rather than storing the data directly and reading it back directly. This latter part seems a bit dangerous to me, even if it looks sane from a normal postgresql stop/start process. Never mind :) I added enough validations and will add more. better than me, and anyway your patch remains very easy to read in any case. Thanks a lot! My policy on experimental implementation is easy-to-read so that people understand my idea quickly. That's why my first patch doesn't have enough error checking ;) Thanks -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 6:47 PM, Tom Lane t...@sss.pgh.pa.us wrote: As a packager, what I'd really want to see from a division into recommended and not-so-recommended packages is that they get installed into different subdirectories by make install. Then I could just point RPM at those directories and I'd be done. Well, that might be good, too. But, right now, if someone pulls up our documentation, or our source tree, they could easily be forgiven for thinking that hstore and dummy_seclabel are comparable, and they aren't. I don't know how practical this is from our development standpoint, nor from a user's standpoint --- I doubt we want to ask people to use different CREATE EXTENSION commands depending on the preferredness of the extension. Certainly not. A possibly workable compromise would be to provide two separate makefile installation targets for preferred and less preferred modules. The RPM script could then do something like make install-contrib-preferred ls -R .../sharedir contrib.files.for.server-package make install-contrib-second-class-citizens ls -R .../sharedir all.contrib.files ... and then some magic with comm to separate out the contrib ... files not mentioned in contrib.files.for.server-package ... Pretty grotty but it would work. Anyway my point is that this is all driven off the *installed* file tree. A specfile writer doesn't know nor want to know where make install is getting things from in the source tree. This isn't any uglier than some other RPM hacks I've seen, and less ugly than some, but you'd have a better sense of that than I do. At any rate, having the various categories separated in the source tree can't possibly hurt the effort to make something like this work, and might make it somewhat easier. 
-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
Robert Haas robertmh...@gmail.com writes: On Fri, May 6, 2011 at 6:47 PM, Tom Lane t...@sss.pgh.pa.us wrote: As a packager, what I'd really want to see from a division into recommended and not-so-recommended packages is that they get installed into different subdirectories by make install. Well, that might be good, too. But, right now, if someone pulls up our documentation, or our source tree, they could easily be forgiven for thinking that hstore and dummy_seclabel are comparable, and they aren't. Sure, but that's a documentation issue, which again is not going to be helped by a source-tree rearrangement. As somebody who spends a lot of time on back-patching, I'm not excited in the least by suggestions to rearrange the source tree for marginal cosmetic benefits, which is all that I see here. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Fix incorrect description at SECURITY LABEL documentation
On Fri, May 6, 2011 at 6:55 PM, Kohei KaiGai kai...@kaigai.gr.jp wrote: I found a few misdescriptions at SECURITY LABEL documentation. It might come from the COMMENT ON page. The attached patch will fix them. Good catch. Committed. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Why not install pgstattuple by default?
On Fri, May 6, 2011 at 9:17 PM, Tom Lane t...@sss.pgh.pa.us wrote: Robert Haas robertmh...@gmail.com writes: On Fri, May 6, 2011 at 6:47 PM, Tom Lane t...@sss.pgh.pa.us wrote: As a packager, what I'd really want to see from a division into recommended and not-so-recommended packages is that they get installed into different subdirectories by make install. Well, that might be good, too. But, right now, if someone pulls up our documentation, or our source tree, they could easily be forgiven for thinking that hstore and dummy_seclabel are comparable, and they aren't. Sure, but that's a documentation issue, which again is not going to be helped by a source-tree rearrangement. I disagree - I think it would be helpful to rearrange both things. As somebody who spends a lot of time on back-patching, I'm not excited in the least by suggestions to rearrange the source tree for marginal cosmetic benefits, which is all that I see here. I understand, but we have back-patched only 32 patches that touch contrib into REL9_0_STABLE since its creation, of which 9 were done by you, and only 4 of those would have required adjustment under the separation criteria I proposed. I think, therefore, that the impact would be bearable. Source-code rearrangement is never going to be completely free, but that seems like a tolerable level of annoyance. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] patch: fix race in SSI's CheckTargetForConflictsIn
On Thu, May 5, 2011 at 1:43 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote: Dan Ports d...@csail.mit.edu wrote: While running some benchmarks to test SSI performance, I found a race condition that's capable of causing a segfault. A patch is attached. The bug is in CheckTargetForConflictsIn, which scans the list of SIREAD locks on a lock target when it's modified. There's an optimization in there where the writing transaction will remove a SIREAD lock that it holds itself, because it's being replaced with a (stronger) write lock. To do that, it needs to drop its shared lwlocks and reacquire them in exclusive mode. The existing code deals with concurrent modifications in that interval by redoing checks. However, it misses the case where some other transaction removes all remaining locks on the target, and proceeds to remove the lock target itself. The attached patch fixes this by deferring the SIREAD lock removal until the end of the function. At that point, there isn't any need to worry about concurrent updates to the target's lock list. The resulting code is also simpler. I just want to confirm that this addresses a real (although very narrow) race condition. It results from code used to implement a valuable optimization described in Cahill's thesis: don't get or keep an SIREAD lock on a row which has a write lock. The write lock is stronger and will cause a write/write conflict with any overlapping transactions which would care about a read/write conflict. The pattern of reading a row and then updating or deleting it is so common that this optimization does a lot to avoid promotion of predicate locks to coarser granularity, and thereby helps avoid false positives. While the optimization is valuable, the code used to implement it was pretty horrid. (And that was me that wrote it.) It has already been the cause of several other fixes since the main patch went in. 
What Dan has done here is move the optimization out of the middle of the loop which is doing the conflict detection, and in doing so has reduced the number of lines of code needed, reduced the amount of fiddling with LW locks, and all around made the code more robust and more understandable. I've reviewed the patch and it looks good to me. Dan has beat up on it with the same DBT-2 run which exposed the race condition without seeing a problem. Although a much smaller patch could address the immediate problem, I strongly feel that Dan has taken the right approach by refactoring this bit to something fundamentally cleaner and less fragile. Why does this HASH_FIND the applicable hash table entries and then HASH_REMOVE it as a separate step, instead of just HASH_REMOVE-ing it in one go? Doesn't this fail to release the locks if rmpredlock == NULL? I believe it's project style to test (rmpredlock != NULL) rather than just (rmpredlock). -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] kill -KILL: What happens?
On Sat, Jan 15, 2011 at 10:44 AM, Florian Pflug f...@phlo.org wrote: On Jan14, 2011, at 17:45 , Robert Haas wrote: On Fri, Jan 14, 2011 at 11:28 AM, Florian Pflug f...@phlo.org wrote: I gather that the behaviour we want is for normal backends to exit once the postmaster is gone, and for utility processes (bgwriter, ...) to exit once all the backends are gone. The test program I posted in this thread proves that FIFOs and select() can be used to implement this, if we're ready to check for EOF on the socket in CHECK_FOR_INTERRUPTS() every few seconds. Is this a viable route to take? I don't think there's much point in getting excited about the order in which things exit. If we're agreed (and we seem to be, modulo Tom) that the backends should exit quickly if the postmaster dies, then worrying about whether the utility processes exit slightly before or slightly after that doesn't excite me very much. I've realized that POSIX actually *does* provide a way to receive a signal - the SIGIO machinery. I've modified my test case to do that. To simplify things, I've removed support for multiple life sign objects. The code now does the following: The parent creates a pipe, sets its reading fd to O_NONBLOCK and O_ASYNC, and registers a SIGIO handler. The SIGIO handler checks a global flag, and simply sends a SIGTERM to its own pid if the flag is set. Child processes close the pipe's writing end (called giving up ownership of the life sign in the code) and set the global flag if they want to receive a SIGTERM once the parent is gone. The parent's health state can additionally be checked at any time by trying to read() from the pipe. read() returns EAGAIN as long as the parent is still alive and EOF otherwise. I'm not sure how portable this is. It compiles and runs fine on both my linux machine (Ubuntu 10.04.01 LTS) and my laptop (OSX 10.6.6). 
In the EXEC_BACKEND case the pipe would need to be created with mkfifo() in the data directory, but otherwise things should work the same. Haven't tried that yet, though. Code attached. The output should be Launched backend 8636 Launched backend 8637 Launched backend 8638 Backend 8636 detected live parent Backend 8637 detected live parent Backend 8638 detected live parent Backend 8636 detected live parent Backend 8637 detected live parent Backend 8638 detected live parent Parent exiting Backend 8637 exiting after parent died Backend 8638 exiting after parent died Backend 8636 exiting after parent died if things work correctly. Are you planning to develop this into a patch for 9.2? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] (Better) support for cross compiled external modules
On Mon, May 2, 2011 at 10:41 AM, Johann 'Myrkraverk' Oskarsson joh...@2ndquadrant.com wrote: Hi all, Is it possible to add support for cross compiled PGXS modules to the build system? That is, when PG is cross compiled, a host-triplet-pg_config is also built for use with external modules? I'm not adverse to submit a patch for this myself, but would like a pointer in the general direction for it. Assuming it's not too invasive, I don't see why not. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] new AM, best way to obtain new block at end of index?
On Thu, Apr 28, 2011 at 8:03 AM, Yves Weißig weis...@rbg.informatik.tu-darmstadt.de wrote: Hi list, currently I am obtaining a new block at the end of an index with: buf = ReadBuffer(rel, P_NEW); but it throws: ERROR: unexpected data beyond EOF in block 0 of relation base/11874/156053 HINT: This has been seen to occur with buggy kernels; consider updating your system. system is up to date: $ uname -r 2.6.35-28-generic Is there another way to do it? What could be the source of the problem? Sounds like a bug either in your code, or in the kernel. That's the right way to do it. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] SSI non-serializable UPDATE performance
On Fri, Apr 29, 2011 at 3:23 AM, Dan Ports d...@csail.mit.edu wrote: On Thu, Apr 28, 2011 at 06:45:54PM +0200, Robert Haas wrote: Yeah, I think Dan's notes about memory ordering would be good to include. I left it out initially because I didn't want to make things more confusing. As far as memory ordering is concerned, this is the same story as anything else that uses lwlocks: the spinlock memory barrier prevents memory accesses from being reordered before the lock is acquired. The only unusual thing here is that the lock in question isn't the one that protects the variable we're reading. But I'm OK with adding a comment if you think it helps. Patch attached. Looks good. Committed. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] a bit more precise MaxOffsetNumber
2011/4/30 Tomas Vondra t...@fuzzy.cz: I've been digging in the sources, and I've noticed the MaxOffsetNumber is defined (in storage/off.h) like this (BLCKSZ / sizeof(ItemIdData)) I guess it might be made a bit more precise by subtracting the header like this ((BLCKSZ - offsetof(PageHeaderData, pd_linp)) / sizeof(ItemIdData)) although the difference is negligible (2048 vs 2042 for 8kB pages). I guess we could do that, but I'm not sure there's much point. It's also not entirely clear that this would actually work out to a win, because of the issues discussed in the When can/should we prune or defragment? section of src/backend/access/heap/README.HOT We could probably figure this out with some careful testing, but I'm not sure it's worth the effort. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Large Objects versus transactional behavior
On Sat, Apr 30, 2011 at 2:58 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote: This is related to the SIREAD lock versus ACCESS EXCLUSIVE lock thread, but seemed different enough to merit spinning off a new thread. Our shop hasn't used large objects so far because of the lack of security (until 9.1), so I never noticed the rather unusual transactional semantics of large objects. From the devel documentation: http://developer.postgresql.org/pgdocs/postgres/lo-interfaces.html#LO-OPEN | [...] with INV_READ you cannot write on the descriptor, and the | data read from it will reflect the contents of the large object at | the time of the transaction snapshot that was active when lo_open | was executed, regardless of later writes by this or other | transactions. Reading from a descriptor opened with INV_WRITE | returns data that reflects all writes of other committed | transactions as well as writes of the current transaction. This is | similar to the behavior of REPEATABLE READ versus READ COMMITTED | transaction modes for ordinary SQL SELECT commands. Since Serializable Snapshot Isolation can only serialize behavior which is working within the semantics of snapshot isolation, it doesn't seem like SSI has any chance of serializing access to the contents of a large object while the current behavior stands. Modifications to the *references* to large objects within the bodies of normal tables are properly tracked by SSI, but no predicate locks are taken on the large object contents themselves, nor would modifications to the contents be able to generate a rw-conflict between transactions. In other words, I don't think there is any action item here for SSI in terms of C code for 9.1, but we may want to mention the unusual transaction-related behavior of large objects within the Concurrency Control chapter of the docs. Comments? Well, in the long run, I think serializability ought to apply to large objects along with everything else.
But documenting it seems like a reasonable approach for now. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] switch UNLOGGED to LOGGED
On Fri, Apr 22, 2011 at 4:13 AM, Leonardo Francalanci m_li...@yahoo.it wrote: Maybe you should change xl_xact_commit to have a separate list of rels to drop the init fork for (instead of mixing those with the list of files to drop as a whole). I tried to follow your suggestion, thank you very much. I have to admit I don't like this approach very much. I can't see adding 4 bytes to every commit record for this feature. 3) Should we have a cascade option? I don't know if I have to handle inherited tables and other dependent objects. Look at the way ALTER TABLE [ONLY] works for other action types, and copy it. 4) During the check for dependencies problems, I stop as soon as I find an error; would it be enough? It's a bit awkwardly phrased the way you have it. I would suggest something like: ERROR: constraints on permanent tables may reference only permanent tables HINT: constraint %s -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] switch UNLOGGED to LOGGED
On Fri, May 6, 2011 at 10:25 PM, Robert Haas robertmh...@gmail.com wrote: ERROR: constraints on permanent tables may reference only permanent tables HINT: constraint %s Argh, hit send too soon. HINT: constraint %s references table %s -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] GSoC 2011 - New phpPgAdmin Plugin Architecture
On Tue, Apr 26, 2011 at 12:39 PM, Leonardo Sápiras sapi...@faccat.br wrote: A copy of my proposal can be found at [http://fit.faccat.br/~leonardo/gsoc_proposal.html]. But I will put a copy of this in another place. So, what would be better? Put on the PostgreSQL Wiki [http://wiki.postgresql.org/wiki] or the phpPgAdmin website [http://phppgadmin.sourceforge.net/doku.php]? I expect either one would be OK. -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Backpatching of Teach the regular expression functions to do case-insensitive matching
On Fri, May 6, 2011 at 9:22 AM, Andres Freund and...@anarazel.de wrote: On Friday, May 06, 2011 04:30:01 AM Robert Haas wrote: On Thu, May 5, 2011 at 5:21 AM, Andres Freund and...@anarazel.de wrote: In my opinion this is actually a bug in 9.0. As it's a (IMO) low-impact fix that's constrained to two files, it seems sensible to backpatch it now that the solution has proven itself in the field? The issue is hard to find and has come up several times in the field. And it has been slightly embarrassing more than once ;) Can you share some more details about your experiences? About the embarrassing or hard-to-find part? One of the hard-to-find parts involved a search (constraining word order after a tsearch search) where slightly fewer than usual search results were returned in production. Nobody had noticed during testing that case-insensitive search worked for most things except multibyte chars, as the tested case was something like: SELECT 'ÖFFENTLICHKEIT' ~* 'Öffentlichkeit' and the regex condition was only relevant when searching for multiple words. One of the embarrassing examples was that I suggested moving away from a solution using several ILIKE rules to one case-insensitive regular expression, totally forgetting that I knew this was only fixed in 9.0. This turned out to be faster. And it turned out to be wrong. In production :-(. Both examples show that the problem is often not noticed, as most of the people who realize that this case could be a problem don't have knowledge of the content and don't notice the problem until later... After mulling this over a bit more, I guess I'm a little skeptical of back-patching this because it is clearly a behavior change. It seems unlikely, but not impossible, that someone is relying on the current behavior, and changing it in a minor release might be considered unfriendly. On the flip side, the risk of it flat-out blowing up seems pretty small.
For someone to invent their own version of wchar_t that uses something other than Unicode code points would be pretty much pure masochism, wouldn't it? -- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
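The behavior under discussion — case-insensitive regex matching of multibyte characters — can be illustrated outside PostgreSQL. As a sketch only (Python's `re` module, not PostgreSQL's regex engine), the fixed 9.0 behavior corresponds to Unicode-aware case folding:

```python
import re

# Case-insensitive match on a multibyte (non-ASCII) character, mirroring
# the SQL example from the thread: 'ÖFFENTLICHKEIT' ~* 'Öffentlichkeit'.
# Pre-9.0, PostgreSQL's regex code failed cases like this.
match = re.search('Öffentlichkeit', 'ÖFFENTLICHKEIT', re.IGNORECASE)
print(match is not None)  # True

# An ASCII-only test pattern would have masked the bug entirely:
ascii_match = re.search('offen', 'OFFEN', re.IGNORECASE)
print(ascii_match is not None)  # True
```

This is why the bug was hard to spot: tests built from ASCII-only words, or from whole-word comparisons where `~*` happened to degrade to simpler matching, all passed.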
Re: [HACKERS] patch: fix race in SSI's CheckTargetForConflictsIn
On Fri, May 06, 2011 at 09:35:39PM -0400, Robert Haas wrote: Why does this HASH_FIND the applicable hash table entries and then HASH_REMOVE it as a separate step, instead of just HASH_REMOVE-ing it in one go? For PredicateLockHash, we need to find the lock entry first so that we can call SHMQueueDelete on its targetLink and xactLink fields. For LocalPredicateLockHash, we check the resulting entry to decide whether to remove it. Having looked at the code some more, however, we do always remove it because we only apply this optimization to heap tuple locks, which are not parents of other locks. So we can simplify this down to a HASH_REMOVE. Doesn't this fail to release the locks if rmpredlock == NULL? Yikes. Indeed it does. I believe it's project style to test (rmpredlock != NULL) rather than just (rmpredlock). That works for me (I prefer the != NULL myself). I believe I've seen both elsewhere, though... Will update the patch. Dan -- Dan R. K. Ports MIT CSAILhttp://drkp.net/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
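The find-then-remove versus remove-in-one-go distinction above hinges on whether the removal call hands back the entry, so that per-entry cleanup (here, the SHMQueueDelete calls on the entry's links) can still run. As an illustrative analogy only — Python dicts standing in for the hash tables, not PostgreSQL's dynahash API:

```python
# Two-step pattern: look the entry up, do cleanup that needs the entry,
# then remove it (the HASH_FIND + HASH_REMOVE shape being questioned).
locks = {'target1': {'links': ['targetLink', 'xactLink']}}
entry = locks.get('target1')          # HASH_FIND analogue
if entry is not None:
    entry['links'].clear()            # cleanup needing the entry
    del locks['target1']              # HASH_REMOVE analogue

# One-step pattern: pop() removes and returns the entry in one call,
# so cleanup can use the returned value -- the simplification the
# thread settles on for the heap-tuple-lock case.
locks = {'target1': {'links': ['targetLink', 'xactLink']}}
entry = locks.pop('target1', None)    # remove and return in one go
if entry is not None:
    entry['links'].clear()

print('target1' in locks)  # False
```

The one-step form is only safe when, as Dan observes for heap tuple locks, the entry is always removed regardless of what the lookup would have found.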
Re: [HACKERS] improvements to pgtune
Hi Greg, So my exams are over now and I am fully committed to the project in terms of time. I have started compiling a sort of personal todo list for myself. I agree with your advice to start the project with small steps first. (I have a copy of the code and am trying to glean as much from it as I can.) I would really appreciate your reply to Josh's thoughts. It would help me understand the variety of tasks and a possible ordering for me to attempt them. Josh's comments: "What would you list as the main things pgtune doesn't cover right now? I have my own list, but I suspect that yours is somewhat different. I do think that autotuning based on interrogating the database is possible. However, I think the way to make it not be a tar baby is to tackle it one setting at a time, and start with ones we have the most information for. One of the real challenges there is that some data can be gleaned from pg_* views, but a *lot* of useful performance data only shows up in the activity log, and then only if certain settings are enabled." Regards, Shiv On Thu, Apr 28, 2011 at 9:34 PM, Shiv rama.the...@gmail.com wrote: That's some great starting advice there. I have a couple of final exams in the next 36 hours. Will get to work almost immediately after that. I will definitely take small steps before going for some of the tougher tasks. I would of course like this conversation to go on, so I can see a more comprehensive TODO list. One of my first tasks on GSoC is to make sure I create a good project specification document, so there can be definite expectations and targets. This conversation helps me do that! Regards, Shiv On Thu, Apr 28, 2011 at 9:50 AM, Greg Smith g...@2ndquadrant.com wrote: Shiv wrote: On the program I hope to learn as much about professional software engineering principles as PostgreSQL. My project is aimed towards extending and hopefully improving upon pgtune. If any of you have some ideas or thoughts to share, I am all ears!
Well, first step on the software engineering side is to get a copy of the code in a form you can modify. I'd recommend grabbing it from https://github.com/gregs1104/pgtune ; while there is a copy of the program on git.postgresql.org, it's easier to work with the one on github instead. I can push updates over to the copy on postgresql.org easily enough, and that way you don't have to worry about getting an account on that server. There's a long list of suggested improvements to make at https://github.com/gregs1104/pgtune/blob/master/TODO Where I would recommend getting started is doing some of the small items on there, some of which I have already put comments into the code about but just not finished yet. Some examples: -Validate against min/max -Show original value in output -Limit shared memory use on Windows (see notes on shared_buffers at http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for more information) -Look for postgresql.conf file using PGDATA environment variable -Look for settings files based on path of the pgtune executable -Save a settings reference files for newer versions of PostgreSQL (right now I only target 8.4) and allow passing in the version you're configuring. A common mistake made by GSOC students is to dive right in to trying to make big changes. You'll be more successful if you get practice at things like preparing and sharing patches on smaller changes first. At the next level, there are a few larger features that I would consider valuable that are not really addressed by the program yet: -Estimate how much shared memory is used by the combination of settings. See Table 17-2 at http://www.postgresql.org/docs/9.0/static/kernel-resources.html ; those numbers aren't perfect, and improving that table is its own useful project. But it gives an idea how they fit together. I have some notes at the end of the TODO file on how I think the information needed to produce this needs to be passed around the inside of pgtune. 
-Use that estimate to produce a sysctl.conf file for one platform; Linux is the easiest one to start with. I've attached a prototype showing how to do that, written in bash. -Write a Python-TK or web-based front-end for the program. Now that I know someone is going to work on this program again, I'll see what I can do to clean some parts of it up. There are a couple of things it's easier for me to just fix rather than to describe, like the way I really want to change how it adds comments to the settings it changes. -- Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us #!/bin/bash # Output lines suitable for sysctl configuration based # on total amount of RAM on the system. The output # will allow up to 50% of physical memory to be allocated # into shared memory. # On Linux, you can use it as follows (as root): # # ./shmsetup
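The attached shmsetup prototype is truncated above, but its comments describe the idea: emit sysctl lines allowing up to 50% of physical RAM as shared memory. A minimal Python sketch of the same idea might look like this (the 50% policy comes from the script's comments; the 4 kB page size and the specific shmmax/shmall split are assumptions for illustration):

```python
def sysctl_lines(total_ram_bytes, page_size=4096):
    """Return Linux sysctl settings allowing up to half of physical RAM
    to be allocated as shared memory (a sketch of the shmsetup idea;
    the page size and split are assumed, not taken from the script)."""
    shmmax = total_ram_bytes // 2        # largest single segment, bytes
    shmall = shmmax // page_size         # system-wide total, in pages
    return ["kernel.shmmax = %d" % shmmax,
            "kernel.shmall = %d" % shmall]

# Example: a machine with 8 GB of RAM.
for line in sysctl_lines(8 * 1024**3):
    print(line)
# kernel.shmmax = 4294967296
# kernel.shmall = 1048576
```

A real version would read total RAM from /proc/meminfo rather than taking it as a parameter; hard-coding the input here keeps the sketch deterministic.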