Re: [HACKERS] triggers on prepare, commit, rollback... ?
On May 19, 2008, at 6:53 PM, Tom Lane wrote:
> Another response I've heard is "but I don't want to make inside-the-database changes, I want to propagate the state to someplace external". Of course that's completely broken too, because there is *absolutely no way* you will ever make such changes atomic with the inside-the-database transaction commit. We discourage people from making triggers cause outside-the-database side effects already --- it's not going to be better to do it in an "on commit" trigger.

Isn't this close to what NOTIFY is? An on-commit trigger that causes only outside-the-database effects.

Cheers, Steve

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
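Steve's analogy can be made concrete with the existing SQL-level commands; a minimal sketch (the channel and table names here are invented for illustration):

```sql
-- Session 1: subscribe to a channel.
LISTEN replication_hint;

-- Session 2: the notification is queued inside the transaction and
-- delivered to listeners only if the COMMIT actually succeeds.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- invented table
NOTIFY replication_hint;
COMMIT;
```

On ROLLBACK the queued notification is simply discarded, so listeners only ever observe committed state, which is the "on-commit, outside-effects-only" behaviour being discussed.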
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Tom Lane wrote:
> * Trigger on rollback: what's that supposed to do? The current transaction is already aborted, so the trigger has no hope of making any database changes that will ever be visible to anyone.

It can however affect state in the backend doing the rollback, which can be useful.

> * Trigger on commit: what do you do if the transaction fails after calling the trigger? The reductio ad absurdum for this is to consider having two on-commit triggers, where obviously the second one could fail.

Ditto - this is effectively at the point where messaging for NOTIFY happens, and if it fails then that's tough. If you need to implement a custom NOTIFY, this is where to do it.

> Another response I've heard is "but I don't want to make inside-the-database changes, I want to propagate the state to someplace external". Of course that's completely broken too, because there is

You really are being absurdly judgemental here. _You_ may not have a use case, but that does not mean that no-one else does. Some things are idempotent and are effectively hints - that they are not transacted can be well understood and accommodated. Is 'Tom doesn't need it' an adequate reason to take such a hard line?

James
Re: [HACKERS] [GSoC08]some detail plan of improving hash index
On Fri, 16 May 2008, Josh Berkus wrote:
> For a hard-core benchmark, I'd try EAStress (SpecJAppserver Lite)

This reminds me... Jignesh had some interesting EAStress results at the East conference that I was curious to try and replicate more publicly one day. Now that there are some initial benchmarking servers starting to become available, it strikes me that this would make a good test case to run on some of those periodically.

I don't have a spare $2K for a commercial license right now, but there's a cheap ($250) non-profit license for EAStress around. That might be a useful purchase for one of the PG non-profits to make one day though.

-- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [HACKERS] Link requirements creep
Andrew Dunstan <[EMAIL PROTECTED]> writes: > Tom Lane wrote: >> What I'm inclined to do is move the test for -Wl,--as-needed down till >> after we've determined the readline dependent-libraries situation, and >> then enable it only if we can still link readline with it on. > Works for me. I don't think we need anything heroic either. Applied, we'll see what happens ... regards, tom lane
Re: [HACKERS] Would like to sponsor implementation of MATERIALIZED VIEWS
On May 15, 2008, at 1:40 AM, [EMAIL PROTECTED] wrote:
> as I posted already in the general newsgroup our company has decided that we would like to sponsor the implementation of materialized views for Postgres. However at the moment we have no idea about the complexity of the implementation and therefore what the cost would be. Since the point is already on the TODO List, are there already any (rough) estimates?
>
> The TODO List reads: "Right now materialized views require the user to create triggers on the main table to keep the summary table current. SQL syntax should be able to manage the triggers and summary table automatically." And this is what we need.
>
> "A more sophisticated implementation would automatically retrieve from the summary table when the main table is referenced, if possible." If this means that e.g. a query would "know by itself" that it could get the data from the view instead of from the main table, then we don't need this feature at the moment. Otherwise: Could anyone

Has anyone contacted the OP about implementing this? Do we have a procedure in place for people to sponsor major features like this?

-- Decibel!, aka Jim C. Nasby, Database Architect [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Bruce Momjian <[EMAIL PROTECTED]> writes:
>>> trigger on "prepare", "commit", "rollback", "savepoint",
>>
>> This is a sufficiently frequently asked question that I wish someone would add an entry to the FAQ about it, or add it to the TODO list's "Features we don't want" section.
>
> OK, remind me why we don't want it again?

I'm sure I've ranted on this several times before, but a quick archive search doesn't find anything. So, here are a few points to chew on:

* Trigger on rollback: what's that supposed to do? The current transaction is already aborted, so the trigger has no hope of making any database changes that will ever be visible to anyone.

* Trigger on commit: what do you do if the transaction fails after calling the trigger? The reductio ad absurdum for this is to consider having two on-commit triggers, where obviously the second one could fail.

The basic problem is that the transaction commit sequence is very carefully designed to do things in a specific order, and there is a well-defined atomic point where the transaction is really "committed", and we cannot go injecting random user-written code into that area and still expect to have a working system.

These objections could be addressed to some extent by running the triggers in a separate transaction that's automatically executed after the "user" transaction commits or aborts. But that hardly seems like a usable basis for replication, since you're just plain out of luck if the secondary transaction fails.

Another response I've heard is "but I don't want to make inside-the-database changes, I want to propagate the state to someplace external". Of course that's completely broken too, because there is *absolutely no way* you will ever make such changes atomic with the inside-the-database transaction commit. We discourage people from making triggers cause outside-the-database side effects already --- it's not going to be better to do it in an "on commit" trigger.
regards, tom lane
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Andrew Dunstan wrote:
> Fabien COELHO wrote:
>> Dear Tom,
>>
>>>> trigger on "prepare", "commit", "rollback", "savepoint",
>>> Yup, and there won't be.
>>
>> That's a definite answer!
>>
>>> This has been suggested and rejected before. See the archives.
>>
>> I'll check into that.
>
> This is a sufficiently frequently asked question that I wish someone would add an entry to the FAQ about it, or add it to the TODO list's "Features we don't want" section.

OK, remind me why we don't want it again?

-- Bruce Momjian <[EMAIL PROTECTED]> http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Link requirements creep
Tom Lane wrote:
> I wrote:
>> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>>> It broke my FC6 box :-(
>> Yeah, I saw. I'm inclined to wait a day to get a handle on the scope of the problem before trying to choose an appropriate fix.
>
> So the returns are in, and buildfarm says: this is only broken on Red Hat-based systems. How embarrassing :-( It's evidently fixed in Fedora 7 and up, which means that only EOL'd Fedora distributions are affected, but since RHEL 5 is affected I'm still on the hook to fix it.
>
> What I'm inclined to do is move the test for -Wl,--as-needed down till after we've determined the readline dependent-libraries situation, and then enable it only if we can still link readline with it on. So a system with broken readline won't get the benefit. But the available evidence says that there are too few such systems to justify a more complicated solution.

Works for me. I don't think we need anything heroic either.

cheers

andrew
Re: [HACKERS] Link requirements creep
I wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> It broke my FC6 box :-(
> Yeah, I saw. I'm inclined to wait a day to get a handle on the scope of the problem before trying to choose an appropriate fix.

So the returns are in, and buildfarm says: this is only broken on Red Hat-based systems. How embarrassing :-( It's evidently fixed in Fedora 7 and up, which means that only EOL'd Fedora distributions are affected, but since RHEL 5 is affected I'm still on the hook to fix it.

What I'm inclined to do is move the test for -Wl,--as-needed down till after we've determined the readline dependent-libraries situation, and then enable it only if we can still link readline with it on. So a system with broken readline won't get the benefit. But the available evidence says that there are too few such systems to justify a more complicated solution.

Comments?

regards, tom lane
Re: [HACKERS] Installation of Postgres 32Bit on 64 bit machine
On Mon, May 19, 2008 at 1:40 PM, cinu <[EMAIL PROTECTED]> wrote: > Hi All, > > > > I am trying to install PostgreSQL(postgresql-8.2.4-1PGDG.i686.rpm) on a 64 > bit machine, when I try to install I get the following error message: > > > > :/home/dump/postgres32bit # rpm -ivh postgresql-server-8.2.4-1PGDG.i686.rpm > postgresql-8.2.4-1PGDG.i686.rpm postgresql-libs-8.2.4-1PGDG.i686.rpm > warning: postgresql-server-8.2.4-1PGDG.i686.rpm: Header V3 DSA signature: > NOKEY, key ID 20579f11 > error: Failed dependencies: > libcrypto.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686 > libreadline.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686 > libssl.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686 > initscripts is needed by postgresql-8.2.4-1PGDG.i686 > libcrypto.so.4 is needed by postgresql-8.2.4-1PGDG.i686 > libreadline.so.4 is needed by postgresql-8.2.4-1PGDG.i686 > libssl.so.4 is needed by postgresql-8.2.4-1PGDG.i686 > libcrypto.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686 > > libssl.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686 > > > > This installation is being done on SUSE linux 10, please let me know if > there is any alternative with which I can bypass these errors and make the > installation successful. You need to install 32-bit versions of the libraries PG depends on. They can coexist alongside the 64-bit versions. -Doug
[HACKERS] Installation of Postgres 32Bit on 64 bit machine
Hi All,

I am trying to install PostgreSQL (postgresql-8.2.4-1PGDG.i686.rpm) on a 64 bit machine; when I try to install I get the following error message:

:/home/dump/postgres32bit # rpm -ivh postgresql-server-8.2.4-1PGDG.i686.rpm postgresql-8.2.4-1PGDG.i686.rpm postgresql-libs-8.2.4-1PGDG.i686.rpm
warning: postgresql-server-8.2.4-1PGDG.i686.rpm: Header V3 DSA signature: NOKEY, key ID 20579f11
error: Failed dependencies:
        libcrypto.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
        libreadline.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
        libssl.so.4 is needed by postgresql-server-8.2.4-1PGDG.i686
        initscripts is needed by postgresql-8.2.4-1PGDG.i686
        libcrypto.so.4 is needed by postgresql-8.2.4-1PGDG.i686
        libreadline.so.4 is needed by postgresql-8.2.4-1PGDG.i686
        libssl.so.4 is needed by postgresql-8.2.4-1PGDG.i686
        libcrypto.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686
        libssl.so.4 is needed by postgresql-libs-8.2.4-1PGDG.i686

This installation is being done on SUSE Linux 10; please let me know if there is any alternative with which I can bypass these errors and make the installation successful.

Thanks in advance

Regards,
Cinu
Re: [HACKERS] WITH RECURSIVE patch V0.1
Gregory Stark wrote:
> "Martijn van Oosterhout" <[EMAIL PROTECTED]> writes:
>> From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO.
>
> I think it's not so simple. How do you reconcile that concept with the join plans like merge join or hash join which expect you to be able to process the records in a specific order? It sounds like you might have to keep around a stack of started executor nodes or something but hopefully we can avoid anything like that because, well, ick.

If I understand the code right, the recursion from level N to level N+1 goes like this: collect all records from level N and JOIN them with the recursive query. This way we get all level 1 records from the base query, then all records at the second level, etc. This is how it gets breadth-first ordering.

Depth-first ordering could go like this: get only one record from the current level, then go into recursion. Repeat until there are no records in the current level. The only difference would be more recursion steps. Instead of one per level, there would be N per level if there are N tuples in the current level. Definitely slower than the current implementation but comparable with the tablefunc.c connectby() code.

-- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH http://www.postgresql.at/
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Fabien COELHO wrote:
> Dear Tom,
>
>>> trigger on "prepare", "commit", "rollback", "savepoint",
>> Yup, and there won't be.
>
> That's a definite answer!
>
>> This has been suggested and rejected before. See the archives.
>
> I'll check into that.

This is a sufficiently frequently asked question that I wish someone would add an entry to the FAQ about it, or add it to the TODO list's "Features we don't want" section.

cheers

andrew
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Dear Tom,

>> trigger on "prepare", "commit", "rollback", "savepoint",
> Yup, and there won't be.

That's a definite answer!

> This has been suggested and rejected before. See the archives.

I'll check into that.

>> It seems to me that such triggers would be useful to help implement a "simple" (hmmm...) synchronous replication system,
> That argument has no credibility whatever.

If you say so.

> We have not even been able to get the replication projects to agree on a common set of custom hooks; the chance that they could get by with triggers on SQL-visible events is nil.

That is indeed an issue. On the other hand, there are several possible strategies to implement replication, but ISTM that all should require a hook (whether SQL visible or not) at the prepare/commit levels to play around with the 2PC.

Well, thanks for your answer anyway,

-- Fabien.
[HACKERS] DTrace probes.
Howdy,

I just saw Robert Lor's patch w.r.t. DTrace probes. It looks very similar to what we've done. We run a nice set of probes in production here that allow us to track the details of checkpointing and statement execution. I've given a few presentations around these probes and have had very positive feedback. They've been available for a while now, but I never got around to sending them to the list:

https://labs.omniti.com/trac/project-dtrace/browser/trunk/postgresql/8.3.1.patch?format=txt

Documentation is in wiki format, but I'd be happy to convert it to something else:

https://labs.omniti.com/trac/project-dtrace/wiki/Applications#PostgreSQL

Best regards,

Theo

-- Theo Schlossnagle Esoteric Curio -- http://lethargy.org/ OmniTI Computer Consulting, Inc. -- http://omniti.com/
Re: [HACKERS] WITH RECURSIVE patch V0.1
"Martijn van Oosterhout" <[EMAIL PROTECTED]> writes:
> From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO.

I think it's not so simple. How do you reconcile that concept with the join plans like merge join or hash join which expect you to be able to process the records in a specific order? It sounds like you might have to keep around a stack of started executor nodes or something but hopefully we can avoid anything like that because, well, ick.

-- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support!
Re: [HACKERS] triggers on prepare, commit, rollback... ?
Fabien COELHO <[EMAIL PROTECTED]> writes:
> I've played with triggers a bit, and I have noticed that there seems to be no way to add a trigger on events such as "prepare", "commit", "rollback", "savepoint",

Yup, and there won't be. This has been suggested and rejected before. See the archives.

> It seems to me that such triggers would be useful to help implement a "simple" (hmmm...) synchronous replication system,

That argument has no credibility whatever. We have not even been able to get the replication projects to agree on a common set of custom hooks; the chance that they could get by with triggers on SQL-visible events is nil.

regards, tom lane
Re: [HACKERS] notification information functions
Hannu Krosing <[EMAIL PROTECTED]> writes: > How will we know then that all listeners have received their events ? We won't, but we don't know that now. In both the current implementation and this proposed one, the most you can tell is whether a backend has absorbed an event notification, not whether it has passed it on to its client. ISTM the timing of the first event is an implementation artifact and not interesting for users. regards, tom lane
Re: [HACKERS] WITH RECURSIVE patch V0.1
On Sun, 2008-05-18 at 22:17 -0700, David Fetter wrote:
> On Mon, May 19, 2008 at 12:21:20AM -0400, Gregory Stark wrote:
> > "Zoltan Boszormenyi" <[EMAIL PROTECTED]> writes:
> > > Also, it seems there are no infinite recursion detection:
> > >
> > > # with recursive x(level, parent, child) as (
> > >    select 1::integer, * from test_connect_by where parent is null
> > >    union all
> > >    select x.level + 1, base.* from test_connect_by as base, x where
> > >    base.child = x.child
> > > ) select * from x;
> > > ... it waits and waits and waits ...
> >
> > Well, psql might wait and wait but it's actually receiving rows. A
> > cleverer client should be able to deal with infinite streams of
> > records.
>
> That would be a very good thing for libpq (and its descendants) to
> have :)
>
> > I think DB2 does produce a warning if there is no clause it can
> > determine will bound the results. But that's not actually reliable.
>
> I'd think not, as it's (in some sense) a Halting Problem.
>
> > It's quite possible to have clauses which will limit the output but
> > not in a way the database can determine. Consider for example a
> > tree-traversal for a binary tree stored in a recursive table
> > reference. The DBA might know that the data contains no loops but
> > the database doesn't.
>
> I seem to recall Oracle's implementation can do this traversal on
> write operations, but maybe that's just their marketing.

It may be possible to solve at least some of it by doing something similar to the hash version of DISTINCT by having a hashtable of tuples already returned and not descending branches where you have already been.

> Cheers,
> David.
> --
> David Fetter <[EMAIL PROTECTED]> http://fetter.org/
> Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
> Skype: davidfetter XMPP: [EMAIL PROTECTED]
>
> Remember to vote!
> Consider donating to Postgres: http://www.postgresql.org/about/donate
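Hannu's idea of "a hashtable of tuples already returned" is the classic visited-set guard; a small sketch (plain Python for illustration, not the patch's code, with invented names) showing that it keeps a traversal finite even when the data contains a loop:

```python
from collections import deque

def reachable(edges, start):
    """Breadth-first walk that never revisits a node.

    The 'seen' set plays the role of the hashtable of tuples already
    returned, so a cycle like a -> b -> c -> a cannot loop forever.
    """
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in edges.get(node, []):
            if nxt not in seen:       # skip branches already visited
                seen.add(nxt)
                queue.append(nxt)
    return order

cyclic = {"a": ["b"], "b": ["c"], "c": ["a"]}   # a -> b -> c -> a
print(reachable(cyclic, "a"))                   # terminates: ['a', 'b', 'c']
```

Note that this changes the query semantics to "distinct reachable rows", which is why it resembles the hash flavor of DISTINCT rather than a plain UNION ALL.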
Re: [HACKERS] ignore $PostgreSQL lines in regression tests?
* Tom Lane ([EMAIL PROTECTED]) wrote: > Andrew Dunstan <[EMAIL PROTECTED]> writes: > > Recently while adding $PostgreSQL markers to a bunch of .c and .h files > > I ran into trouble with the ecpg regression tests and had to revert the > > change for a handful of files. However, it occurred to me that we could > > have pg_regress tell diff to ignore such lines, by passing it the > > arguments "-I '\$PostgreSQL:' ", which would tell it to ignore > > additions or deletions of lines matching that regex. > > > Would this be a good thing to do? > > I'm inclined to think not. It's easy to think of scenarios where such > a switch would mask errors. I tend to agree with this, though if people decide they want it, you could almost certainly tighten up the regexp some to reduce the chance of it masking things (eg: I assume a starting anchor ('^') would be correct here, and you could almost certainly add the rest of the columns which are included in the $Id$ format using appropriately-typed wildcards, etc...). Thanks, Stephen
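For what it's worth, GNU diff's -I suppresses a hunk only when every changed line in that hunk matches the regex, so a hunk mixing a keyword change with a real change is still reported; only keyword-only hunks vanish. A quick demonstration (file names invented):

```shell
# Two result files that differ only in their $PostgreSQL$ keyword line.
printf '$PostgreSQL: old $\nSELECT 1;\n' > expected.out
printf '$PostgreSQL: new $\nSELECT 1;\n' > results.out

# Plain diff reports the difference ...
diff expected.out results.out || echo "files differ"

# ... but with -I the keyword-only hunk is suppressed (exit status 0).
diff -I '\$PostgreSQL:' expected.out results.out && echo "no differences seen"
```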
Re: [HACKERS] Link requirements creep
* Tom Lane ([EMAIL PROTECTED]) wrote: > Greg Smith <[EMAIL PROTECTED]> writes: > > When we noticed this recently, my digging suggested you'll be hard pressed > > to have a RedHat system now without those two installed. > > Indeed, I've not heard any squawks from the field yet. It's still > wrong though ... Unsurprisingly, half the world in Debian also depends on libxml2, but I agree 110% w/ Tom- it's wrong, and I feel it really ought to be fixed regardless. It's entirely likely that there will come a time when it's a less used library getting pulled in, too. I also personally hate useless clutter in dependencies as it can cause package management headaches. After poking around a bit I did find a box that only pulled in libxml2 for subversion, and we've been talking about moving to a different SCM (which doesn't appear to depend on libxml2), so it might eventually only be pulled in by psql for us. Not a show-stopper, but it's also not completely out of the question that it'll get pulled in unnecessarily. Thanks, Stephen
Re: [HACKERS] notification information functions
Hannu Krosing wrote:
> On Sun, 2008-05-18 at 16:00 -0400, Andrew Dunstan wrote:
>> I am working on moving the notification buffer into shared memory as previously discussed. Since pg_listener will no longer exist, I think we need to provide a couple of information functions. I suggest:
>>
>> pg_listened_events(out event name) returns setof record
>> pg_pending_events(out event name, out message text) returns setof record
>>
>> The first would show events being listened on by the current backend, while the second would show all pending events for the current db. Given that there will no longer be any central place where events will be registered to be listened on, it will not be possible to show all such events for the current db.
>
> Are you sure that there will be no central place? How will we know then that all listeners have received their events?

Yes, quite sure. See Tom's answer to more or less this question from a year ago:

http://archives.postgresql.org/pgsql-hackers/2007-03/msg01570.php

What we will have in shared memory is each backend's queue pointer (if any).

cheers

andrew
Re: [HACKERS] [PATCHES] WITH RECURSIVE patch V0.1
On Mon, May 19, 2008 at 05:57:17PM +0900, Yoshiyuki Asaba wrote: > Hi, > > > I think it's the other way around. The server should not emit > > infinite number of records. > > How about adding new GUC parameter "max_recursive_call"? Couldn't we just have it pay attention to the existing max_stack_depth? Cheers, David. -- David Fetter <[EMAIL PROTECTED]> http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: [EMAIL PROTECTED] Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [HACKERS] Link requirements creep
On 18/05 00.59, Tom Lane wrote: > Greg Smith <[EMAIL PROTECTED]> writes: > > On Sat, 17 May 2008, Tom Lane wrote: > >> I was displeased to discover just now that in a standard RPM build of > >> PG 8.3, psql and the other basic client programs pull in libxml2 and > >> libxslt; this creates a package dependency that should not be there > >> by any stretch of the imagination. > > > When we noticed this recently, my digging suggested you'll be hard pressed > > to have a RedHat system now without those two installed. > > Indeed, I've not heard any squawks from the field yet. It's still > wrong though ... I agree it's wrong, but I don't think this is likely to be a problem in practice on Solaris either. -- Bjorn Munch PostgreSQL Release Engineering Database Group, Sun Microsystems
Re: [HACKERS] WITH RECURSIVE patch V0.1
Martijn van Oosterhout wrote:
> On Mon, May 19, 2008 at 11:56:17AM +0200, Zoltan Boszormenyi wrote:
>>> From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO.
>>
>> Are you sure? I think a LIFO tuplestore would simply return reversed breadth-first order. Depth-first means for every new record descend into another recursion first then continue with the next record on the right.
>
> Say your tree looks like: Root->A,D A->B,C D->E,F
>
> LIFO pushes A and D. It then pops A and pushes B and C. B and C have no children and are returned. Then D is popped and E and F pushed. So the returned order is: A,B,C,D,E,F. You could also do B,C,A,E,F,D if you wanted.
>
> FIFO pushes A and D. It then pops A and puts B and C at *the end*. It then pops D and pushes E and F at the end. So you get the order A,D,B,C,E,F
>
> Hope this helps,

Thanks, I didn't consider popping elements off while processing. However, if the toplevel query returns tuples in A, D order, you need a positioned insert into the tuplestore, because the LIFO would pop D first. Say, a "treestore" would work this way:

1. setup: the treestore is empty, storage_position := 0
2. treestore_puttupleslot() adds the slot at the current position, storage_position++
3. treestore_gettupleslot() removes the slot from the beginning, storage_position := 0

This works easily with in-memory lists but it's not obvious to me how it may work with disk-backed temporary storage inside PG.

-- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH http://www.postgresql.at/
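Zoltán's three treestore rules amount to inserting new tuples at a moving position behind the read point. A toy in-memory sketch (invented names, plain Python; nothing like the actual tuplestore code) of those rules:

```python
class TreeStore:
    """Toy model of the proposed 'treestore':
    puts land at a moving insert position; gets always take the head
    and reset the insert position to the front (rule 3)."""

    def __init__(self):
        self.slots = []
        self.position = 0            # where the next put lands

    def put(self, slot):
        self.slots.insert(self.position, slot)
        self.position += 1           # rule 2

    def get(self):
        self.position = 0            # rule 3: reset to the beginning
        return self.slots.pop(0)

ts = TreeStore()
ts.put("A"); ts.put("D")             # toplevel rows arrive in A, D order
assert ts.get() == "A"               # unlike a plain LIFO, A pops first
ts.put("B"); ts.put("C")             # A's children land in front of D
print([ts.get(), ts.get(), ts.get()])  # ['B', 'C', 'D'] - depth-first
```

The positioned insert is trivial with a Python list; the open question in the thread is exactly how to do the same against spill-to-disk storage.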
Re: [HACKERS] WITH RECURSIVE patch V0.1
On Mon, May 19, 2008 at 11:56:17AM +0200, Zoltan Boszormenyi wrote:
>> From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO.
>
> Are you sure? I think a LIFO tuplestore would simply return reversed breadth-first order. Depth-first means for every new record descend into another recursion first then continue with the next record on the right.

Say your tree looks like:

Root->A, D
A->B,C
D->E,F

LIFO pushes A and D. It then pops A and pushes B and C. B and C have no children and are returned. Then D is popped and E and F pushed. So the returned order is: A,B,C,D,E,F. You could also do B,C,A,E,F,D if you wanted.

FIFO pushes A and D. It then pops A and puts B and C at *the end*. It then pops D and pushes E and F at the end. So you get the order A,D,B,C,E,F.

Hope this helps,

-- Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > Please line up in a tree and maintain the heap invariant while > boarding. Thank you for flying nlogn airlines.
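Martijn's orderings are easy to verify outside the backend. A toy sketch (plain Python with invented names, not the patch's tuplestore code) that walks his example tree, treating the pending-tuple store as either a FIFO queue or a LIFO stack:

```python
from collections import deque

def traverse(tree, root, lifo):
    """Return the nodes below 'root' in FIFO (breadth-first) or
    LIFO (depth-first) order, mirroring Martijn's example."""
    kids = tree.get(root, [])
    # For the LIFO case, push right-to-left so the leftmost child pops first.
    store = deque(reversed(kids) if lifo else kids)
    order = []
    while store:
        node = store.pop() if lifo else store.popleft()
        order.append(node)
        children = tree.get(node, [])
        store.extend(reversed(children) if lifo else children)
    return order

tree = {"Root": ["A", "D"], "A": ["B", "C"], "D": ["E", "F"]}
print(traverse(tree, "Root", lifo=False))  # ['A', 'D', 'B', 'C', 'E', 'F']
print(traverse(tree, "Root", lifo=True))   # ['A', 'B', 'C', 'D', 'E', 'F']
```

With lifo=False the sketch reproduces the breadth-first order A,D,B,C,E,F; with lifo=True it yields the depth-first order A,B,C,D,E,F, matching the two orderings in the message above.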
Re: [HACKERS] WITH RECURSIVE patch V0.1
Martijn van Oosterhout wrote:
> On Mon, May 19, 2008 at 08:19:17AM +0200, Zoltan Boszormenyi wrote:
>>>> The standard has a clause to specify depth-first order. However doing a depth-first traversal would necessitate quite a different looking plan and it's far less obvious (to me anyways) how to do it.
>>> That would be even cooler to have it implemented as well.
>
> From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO.

Are you sure? I think a LIFO tuplestore would simply return reversed breadth-first order. Depth-first means for every new record descend into another recursion first then continue with the next record on the right.

> However, just looking at the plan I don't know whether it could support that kind of usage. At the very least I don't think the standard tuplestore code can handle it.

-- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH http://www.postgresql.at/
[HACKERS] triggers on prepare, commit, rollback... ?
Dear pgdev,

I've played with triggers a bit, and I have noticed that there seems to be no way to add a trigger on events such as "prepare", "commit", "rollback", or "savepoint", if I'm not mistaken. Also, possible interesting events could be "create", "alter" and so on, but it may already be possible to catch these by having a trigger on "pg_class" or the like.

It seems to me that such triggers would be useful to help implement a "simple" (hmmm...) synchronous replication system, possibly by extending or modifying Slony, or for advanced logging.

Is there any special semantic issue with providing them in pg, or is it just a matter of implementing the parser, bookkeeping, callbacks... but with no other special "intrinsic" difficulty?

Thanks in advance,

-- Fabien.
Re: [HACKERS] WITH RECURSIVE patch V0.1
Martijn van Oosterhout wrote: On Mon, May 19, 2008 at 08:19:17AM +0200, Zoltan Boszormenyi wrote: The standard has a clause to specify depth-first order. However doing a depth-first traversal would necessitate quite a different-looking plan and it's far less obvious (to me anyways) how to do it. That would be even cooler to have implemented as well. From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO. However, just looking at the plan I don't know whether it could support that kind of usage. At the very least I don't think the standard tuplestore code can handle it. Well, psql might wait and wait but it's actually receiving rows. A cleverer client should be able to deal with infinite streams of records. I think it's the other way around. The server should not emit an infinite number of records. The server won't, the universe will end first. The universe is alive and well, thank you. :-) But the server won't emit an infinite number of records, you are right. Given that the implementation uses a tuplestore instead of producing the tuple slots on the fly, it will go OOM first, not the psql client; I watched them in 'top'. It just takes a bit of time. This is a nice example of the halting problem: http://en.wikipedia.org/wiki/Halting_problem Which was proved unsolvable a long time ago. Hmpf, yes, I have forgotten too much about Turing machines since university. :-( Have a nice day, -- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH http://www.postgresql.at/
Re: [HACKERS] [PATCHES] WITH RECURSIVE patch V0.1
Yoshiyuki Asaba wrote: Hi, From: Zoltan Boszormenyi <[EMAIL PROTECTED]> Subject: Re: [PATCHES] WITH RECURSIVE patch V0.1 Date: Mon, 19 May 2008 08:19:17 +0200 Also, it seems there is no infinite recursion detection: # with recursive x(level, parent, child) as ( select 1::integer, * from test_connect_by where parent is null union all select x.level + 1, base.* from test_connect_by as base, x where base.child = x.child ) select * from x; ... it waits and waits and waits ... Well, psql might wait and wait but it's actually receiving rows. A cleverer client should be able to deal with infinite streams of records. I think it's the other way around. The server should not emit an infinite number of records. How about adding a new GUC parameter "max_recursive_call"? Yes, why not? MSSQL has a similar MAXRECURSION hint for recursive WITH queries according to their docs: http://msdn.microsoft.com/en-us/library/ms186243.aspx Regards, -- Yoshiyuki Asaba [EMAIL PROTECTED] -- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH http://www.postgresql.at/
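A cap like the proposed "max_recursive_call" GUC (or MSSQL's MAXRECURSION hint, which defaults to 100 levels and raises an error when exceeded) could be modelled like this. This is a Python sketch of the UNION ALL iteration, not server code; the function and parameter names are hypothetical:

```python
def recursive_union(base, step, max_recursive_call=100):
    """Iterate a recursive UNION ALL, aborting past a recursion-depth cap."""
    out, work, level = [], list(base), 0
    while work:                                # one pass per recursion level
        level += 1
        if level > max_recursive_call:
            raise RuntimeError("recursion limit %d exceeded" % max_recursive_call)
        out.extend(work)
        work = [r for row in work for r in step(row)]
    return out

# A bounded recursion finishes normally...
assert recursive_union([1], lambda n: [n + 1] if n < 5 else []) == [1, 2, 3, 4, 5]
```

...while a step that keeps yielding rows forever, like the self-joining example above, would hit the cap and raise an error instead of spinning until OOM.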
Re: [HACKERS] [PATCHES] WITH RECURSIVE patch V0.1
Hi, From: Zoltan Boszormenyi <[EMAIL PROTECTED]> Subject: Re: [PATCHES] WITH RECURSIVE patch V0.1 Date: Mon, 19 May 2008 08:19:17 +0200 > >> Also, it seems there are no infinite recursion detection: > >> > >> # with recursive x(level, parent, child) as ( > >>select 1::integer, * from test_connect_by where parent is null > >>union all > >>select x.level + 1, base.* from test_connect_by as base, x where > >> base.child > >> = x.child > >> ) select * from x; > >> ... it waits and waits and waits ... > >> > > > > Well, psql might wait and wait but it's actually receiving rows. A cleverer > > client should be able to deal with infinite streams of records. > > > > I think it's the other way around. The server should not emit infinite > number of records. How about adding new GUC parameter "max_recursive_call"? Regards, -- Yoshiyuki Asaba [EMAIL PROTECTED]
Re: [HACKERS] WITH RECURSIVE patch V0.1
On Mon, May 19, 2008 at 08:19:17AM +0200, Zoltan Boszormenyi wrote: > >The standard has a clause to specify depth-first order. However doing a > >depth-first traversal would necessitate quite a different looking plan and > >it's far less obvious (to me anyways) how to do it. > > That would be even cooler to have it implemented as well. From an implementation point of view, the only difference between breadth-first and depth-first is that your tuplestore needs to be LIFO instead of FIFO. However, just looking at the plan I don't know whether it could support that kind of usage. At the very least I don't think the standard tuplestore code can handle it. > >Well, psql might wait and wait but it's actually receiving rows. A cleverer > >client should be able to deal with infinite streams of records. > > I think it's the other way around. The server should not emit infinite > number of records. The server won't, the universe will end first. This is a nice example of the halting problem: http://en.wikipedia.org/wiki/Halting_problem Which was proved unsolvable a long time ago. Have a nice day, -- Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > Please line up in a tree and maintain the heap invariant while > boarding. Thank you for flying nlogn airlines.
Re: [HACKERS] What in the world is happening on spoonbill?
On Sat, May 17, 2008 at 03:52:07PM -0400, Tom Lane wrote: > So I coded this up, and fortunately thought to try it with ecpg's tests > before committing: > ... > test preproc/whenever ... FAILED: test process exited with exit code 1 > ... > Apparently the exit(1) is intentional in that test. > .. > work than it's worth. Would it be all right to just remove the test of > "on error stop" mode? I'm fine with removing this test. Granted it leaves a very small code path untested but I think we can live with this. Michael -- Michael Meskes Email: Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org) ICQ: 179140304, AIM/Yahoo: michaelmeskes, Jabber: [EMAIL PROTECTED] Go VfL Borussia! Go SF 49ers! Use Debian GNU/Linux! Use PostgreSQL!
Re: [HACKERS] notification information functions
On Sun, 2008-05-18 at 16:00 -0400, Andrew Dunstan wrote: > I am working on moving the notification buffer into shared memory as > previously discussed. Since pg_listener will no longer exist, I think we > need to provide a couple of information functions. > > I suggest: > > pg_listened_events(out event name) returns setof record > pg_pending_events(out event name, out message text) returns setof record > > The first would show events being listened on by the current backend, > while the second would show all pending events for the current db. > > Given that there will no longer be any central place where events will > be registered to be listened on, it will not be possible to show all > such events for the current db. Are you sure that there will be no central place? How will we then know that all listeners have received their events? -- Hannu
Re: [HACKERS] WITH RECURSIVE patch V0.1
Gregory Stark wrote: This is indeed really cool. I'm sorry I haven't gotten to doing what I promised in this area but I'm glad it's happening anyways. "Zoltan Boszormenyi" <[EMAIL PROTECTED]> writes: Can we get the rows in tree order, please? ... After all, I didn't specify any ORDER BY clauses in the base, recursive or final queries. The standard has a clause to specify depth-first order. However doing a depth-first traversal would necessitate quite a different-looking plan and it's far less obvious (to me anyways) how to do it. That would be even cooler to have implemented as well. Also, it seems there is no infinite recursion detection: # with recursive x(level, parent, child) as ( select 1::integer, * from test_connect_by where parent is null union all select x.level + 1, base.* from test_connect_by as base, x where base.child = x.child ) select * from x; ... it waits and waits and waits ... Well, psql might wait and wait but it's actually receiving rows. A cleverer client should be able to deal with infinite streams of records. I think it's the other way around. The server should not emit an infinite number of records. I think DB2 does produce a warning if there is no clause it can determine will bound the results. But that's not actually reliable. It's quite possible to have clauses which will limit the output but not in a way the database can determine. Consider for example a tree traversal for a binary tree stored in a recursive table reference. The DBA might know that the data contains no loops but the database doesn't. Well, a maintenance resjunk could be used like the branch column in tablefunc::connectby(). -- Zoltán Böszörményi Cybertec Schönig & Schönig GmbH http://www.postgresql.at/
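For what it's worth, the reason the example query above never terminates is that its join condition, base.child = x.child, matches each working-table row back to itself, so every level regenerates the same (parent, child) pair at a higher level. A toy Python model (invented table contents, not the patch's code) shows the loop, and how a connectby()-style check on already-seen rows would stop it:

```python
# Invented test_connect_by contents: (parent, child) rows.
rows = [(None, 1), (1, 2), (2, 3)]

def evaluate(detect_cycles, max_rows=50):
    # non-recursive term: level 1, the row whose parent IS NULL
    work = [(1, p, c) for p, c in rows if p is None]
    out, seen = [], {r[1:] for r in work}
    while work and len(out) < max_rows:        # max_rows stands in for OOM
        out.extend(work)
        # recursive term: base.child = x.child joins every row to itself
        work = [(lvl + 1, p, c)
                for (lvl, _, xc) in work
                for (p, c) in rows if c == xc]
        if detect_cycles:                      # branch-column-style filter
            work = [r for r in work if r[1:] not in seen]
            seen.update(r[1:] for r in work)
    return out

unbounded = evaluate(detect_cycles=False)      # same row at levels 1, 2, 3, ...
bounded = evaluate(detect_cycles=True)         # stops after the first level
```

The key point is that the cycle check must ignore the computed level column (hence `r[1:]`), exactly the kind of bookkeeping a maintenance resjunk column could carry.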