Re: [HACKERS] pg_upgrade project status
Bruce Momjian wrote: Now that pg_migrator is BSD licensed, and already in C, I am going to spend my time trying to improve pg_migrator for 8.4: http://pgfoundry.org/projects/pg-migrator/ What is the plan now? Get pg_upgrade working, get pg_migrator working, ship pg_migrator in core or separately? Is there any essential functionality that we need to get into the server code before release? Should we try to get dropped columns working? It's quite late to be wondering about this, so unless we get a clear and definite plan this week, I say we stop kidding ourselves and drop it. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] A deprecation policy
We often discuss changing user-visible behavior of various kinds and are usually clueless on questions like "someone might rely on this" or "how many people are still using this". Still, it is clearly often useful to revise interfaces from time to time. I have been thinking that with a semi-formal deprecation policy we could make these decisions with more confidence. My proposed policy goes like this:

1. In release N, an interface is declared "obsolete", which means that we consider that it is no longer recommended to use the interface; better alternatives are usually available. An obsolete interface is marked as such in the documentation, and obsoleted interfaces are also listed in the release notes. (In certain cases, it may be permissible to skip this step if the interface was built as a workaround in the first place and the obsoletion is obvious.)

2. In release N+1, obsolete interfaces are declared "deprecated", which means that users really shouldn't use the interface and are urged to change their code as soon as possible. Through some configuration mechanism, where technically possible, using deprecated features draws a warning or an error (warning on by default). Deprecated features are also marked in the documentation and release notes.

3. In release N+2, if there were no protests in response to step 2, deprecated features are removed.

This approach gives users and developers the ability to clearly plan ahead and take the necessary actions. Usually, you'd have about two years to react. Also, consider that we want to get in-place upgrade working, so essential interfaces such as basic commands and configuration files should at least be able to limp along after being moved to version N+1.

Examples:

Removing implicit casts (with hindsight): Release N: Declare certain casts obsolete. Release N+1: Raise a deprecation warning when a cast function is invoked implicitly. Release N+2: Remove.

Removing the SQL_inheritance option: Release N: Declare obsolete. Release N+1: Deprecation warning. Release N+2: Remove.

Altering the semantics of log_filename without a placeholder (under discussion): Release 8.4: Declare current behavior obsolete. Release 8.5: Deprecation warning. Release 8.6: Implement whatever new behavior we like.

I would also extend this system to removed configuration settings, e.g., max_fsm_*. We should keep these as deprecated settings for one release, so that (1) configuration files can be upgraded without manual work (relevant to in-place upgrade), and (2) users are alerted that their carefully crafted configuration might need a review.

Comments? -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
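To make step 2 concrete, the warning could be as small as the sketch below. The "deprecation_warnings" GUC and the call site are invented for illustration; ereport(), format_type_be() and the deprecated_feature SQLSTATE are the pieces that already exist.

    /* Hypothetical sketch of step 2: a warning gated by an invented GUC. */
    #include "postgres.h"
    #include "utils/builtins.h"              /* format_type_be() */

    static bool deprecation_warnings = true; /* would be a real GUC, not shown here */

    static void
    warn_deprecated_implicit_cast(Oid sourcetype, Oid targettype)
    {
        if (!deprecation_warnings)
            return;

        ereport(WARNING,
                (errcode(ERRCODE_WARNING_DEPRECATED_FEATURE),
                 errmsg("implicit cast from %s to %s is deprecated",
                        format_type_be(sourcetype),
                        format_type_be(targettype)),
                 errhint("Add an explicit cast; the implicit cast will be removed in a future release.")));
    }

Removing the feature in release N+2 then only requires deleting the code path that the warning already marks, which is what makes the window predictable for users.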
Re: [HACKERS] PQinitSSL broken in some use cases
Robert Haas wrote: "I am not in love with the idea of using PQinitSSL(SOME_MAGIC_VALUE)." The issue I see is the inability to toggle crypto initialization. I think crypto init and ssl init are two different things. Thus, I proposed the below: http://archives.postgresql.org/pgsql-hackers/2009-02/msg00488.php Andrew Chernow eSilo, LLC -- every bit counts -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] GIN fast insert
On Tue, Feb 10, 2009 at 11:18 PM, Tom Lane wrote: > For queries that select only a single index entry, there might be some > speed degradation, but a quick test suggests it's in the > single-digit-percentage range if everything's cached; and of course if > you have to go to disk then the extra CPU cycles to push a bitmap around > are lost in the noise. > > In any case, as a wise man once said "I can make this code arbitrarily > fast, if it doesn't have to give the right answer". Blowing up in easily > foreseeable circumstances isn't my idea of giving the right answer. Sure, but dropping index-scan support is not the only way of fixing that problem. It has a certain elegance to it, but if it produces a perceptible slowdown, it may not be the best approach. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use cases
On Tue, Feb 10, 2009 at 11:09 PM, Bruce Momjian wrote: > Why not just call PQinitSSL(true) and do everything in your > application?; from the libpq manual: > > If you are using SSL inside your application (in addition > to inside libpq), you can use > PQinitSSL(int) to tell libpq > that the SSL library has already been initialized by your > application. I think this question has been answered about four times on this thread already. I thought the OP's explanation was pretty clear: http://archives.postgresql.org/pgsql-hackers/2009-01/msg02488.php It's also re-explained here, here, and here: http://archives.postgresql.org/message-id/b42b73150902100713mdbfd64ah706ced5170897...@mail.gmail.com http://archives.postgresql.org/message-id/603c8f070902100849n44034028p5423f18e6e1b9...@mail.gmail.com http://archives.postgresql.org/message-id/b42b73150902101420m6c263f7ayafc10090af841...@mail.gmail.com The point is that there are three sane things you might want/need PQinitSSL() to do, and we allow two of them. I think the request for a behavior change is totally reasonable, but I think Andrew or Merlin should be the ones to write the patch (and then others can review). I am not in love with the idea of using PQinitSSL(SOME_MAGIC_VALUE) for this purpose, but haven't read the code enough to determine the least-ugly API. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] GIN fast insert
Robert Haas writes: > On Tue, Feb 10, 2009 at 10:38 PM, Tom Lane wrote: >> It strikes me that part of the issue here is that the behavior of this >> code is much better adapted to the bitmap-scan API than the traditional >> indexscan API. Since GIN doesn't support ordered scan anyway, I wonder >> whether it wouldn't be more sensible to simply allow it to not offer >> the traditional API. It should be easy to make the planner ignore plain >> indexscan plans for an AM that didn't support them. > If that doesn't lead to a performance degradation, I think it would be > a good idea. For queries that select only a single index entry, there might be some speed degradation, but a quick test suggests it's in the single-digit-percentage range if everything's cached; and of course if you have to go to disk then the extra CPU cycles to push a bitmap around are lost in the noise. In any case, as a wise man once said "I can make this code arbitrarily fast, if it doesn't have to give the right answer". Blowing up in easily foreseeable circumstances isn't my idea of giving the right answer. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use cases
Bruce Momjian wrote: > Andrew Chernow wrote: > > > > > On Tue, Feb 10, 2009 at 5:02 PM, Bruce Momjian wrote: > > >> PQinitSSL(false) initializes crypto? Please point me to exact function > > >> calls that are the problem? Everything is very vague. > > > > File: src/interfaces/libpq/fe-secure.c > > Func: init_ssl_system > > Line: 823 > > > > Starting at around line 853, this function prepares a lock array for > > CRYPTO_set_locking_callback. This function is not part of libssl, its > > part of libcrypto. It also calls CRYPTO_set_id_callback. The rest of > > that function appears to only make libssl calls. > > > > There should be an "if (pq_initcryptolib)" around those libcrypto calls, > > serving the same purpose as the pq_initssllib variable. > > Why not just call PQinitSSL(true) and do everything in your Sorry, meant 'false'. Also, I read Merlin's comments and understand he doesn't want his application to have to set up SSL. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use cases
Andrew Chernow wrote: > > > On Tue, Feb 10, 2009 at 5:02 PM, Bruce Momjian wrote: > >> PQinitSSL(false) initializes crypto? Please point me to exact function > >> calls that are the problem? Everything is very vague. > > File: src/interfaces/libpq/fe-secure.c > Func: init_ssl_system > Line: 823 > > Starting at around line 853, this function prepares a lock array for > CRYPTO_set_locking_callback. This function is not part of libssl, its > part of libcrypto. It also calls CRYPTO_set_id_callback. The rest of > that function appears to only make libssl calls. > > There should be an "if (pq_initcryptolib)" around those libcrypto calls, > serving the same purpose as the pq_initssllib variable. Why not just call PQinitSSL(true) and do everything in your application?; from the libpq manual: If you are using SSL inside your application (in addition to inside libpq), you can use PQinitSSL(int) to tell libpq that the SSL library has already been initialized by your application. Actually, that wording doesn't say what the parameter means so I updated the documentation: If you are using SSL inside your application (in addition !to inside libpq), you can call !PQinitSSL(int) with 0 to tell !libpq that the SSL library !has already been initialized by your application. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] The testing of multi-batch hash joins with skewed data sets patch
> -Original Message- > From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers- > ow...@postgresql.org] On Behalf Of Tom Lane > But really there are two different performance regimes here, one where > the hash data is large enough to spill to disk and one where it isn't. > Reducing work_mem will cause data to spill into kernel disk cache, but > if the total problem fits in RAM then very possibly that data won't ever > really go to disk. So I suspect such a test case will act more like the > small-data case than the big-data case. You probably actually need more > data than RAM to be sure you're testing the big-data case. Is there a way to limit the kernel disk cache? (We are running SUSE Linux.) We have been testing hybrid hash join performance and have seen that the performance varies considerably less than expected even for dramatic changes in work_mem and the I/Os that appear to be performed. -- Ramon Lawrence -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] GIN fast insert
On Tue, Feb 10, 2009 at 10:38 PM, Tom Lane wrote: >> I think this code needs to be somehow rewritten to make things degrade >> gracefully when the pending list is long - I'm not sure what the best >> way to do that is. Inventing a new data structure to store TIDs that >> is never lossy seems like it might work, but you'd have to think about >> what to do if it got too big. > > What would be wrong with letting it degrade to lossy? I suppose the > reason it's trying to avoid that is to avoid having to recheck all the > rows on that page when it comes time to do the index insertion; but > surely having to do that is better than having arbitrary, unpredictable > failure conditions. No, I don't think that's it. See here, beginning with "the problem with lossy tbm has two aspects": http://archives.postgresql.org/message-id/4974b002.3040...@sigaev.ru > It strikes me that part of the issue here is that the behavior of this > code is much better adapted to the bitmap-scan API than the traditional > indexscan API. Since GIN doesn't support ordered scan anyway, I wonder > whether it wouldn't be more sensible to simply allow it to not offer > the traditional API. It should be easy to make the planner ignore plain > indexscan plans for an AM that didn't support them. If that doesn't lead to a performance degradation, I think it would be a good idea. It certainly seems like it would allow this patch to be a LOT simpler, cleaner, and more robust. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] The testing of multi-batch hash joins with skewed data sets patch
> The idea I came up with for benchmarking was a little similar to what I > remember from the original tests. I have a sales orders table and a > products > table. My version of the sales orders table contains a customer column. > Data > for 10 customers is populated into the sales orders table, customer 1 has > a > totally non-skewed set of orders, where customer 10 has the most skew. > I've > done this by creating 10,000 products each with a product code that has > been > cast into a varchar and padded up to 5 chars in length with '0's. Each > customer has the same number of rows in the sales orders table, customer > 10 > mostly orders products that when cast as INT are evenly divisible by 10, > where customer 2 mostly orders products that are evenly divisible by 2. > You > get the idea. > Currently I'm unsure the best way to ensure that the hash join goes into > more than one batch apart from just making the dataset very large. > > Does anyone have any thoughts about the way I plan to go about > benchmarking?

Thank you for testing the patch - it is very much appreciated. If you use the test version of the patch, it will print out statistics that will be helpful. I think your approach should work. I have two comments:

1) You will need to scale the data set larger to go multi-batch. Even a minimum work_mem of 1 MB may be enough to keep the product table in memory unless each tuple is large. For the TPC-H tests, the size of product was 200,000 for 1 GB tests and 2 million tuples for 10 GB tests.

2) The current formula may not generate the skew you expect on sales.productcode. To simplify the discussion, I will only consider customer 1 (c1) and customer 10 (c10) and a total of 100,000 sales (50,000 for each customer). If I look at product 10 for instance, it will be ordered 50,000/1,000 = 50 times by c10 and 50,000/10,000 = 5 times by c1 for a total of 55 times. Product 10 represents only 0.055% of all sales. For all mod 10 products combined, they represent 55% of sales, which is significant BUT requires us to store 10% of product in memory (1000 tuples all of which need to be in the stats record).

This two-customer test would be interesting. There should be no benefit for customer 1. In fact, you would see the worst case as you would plan for skew but not get any benefit. For customer 10 you should see a benefit if your stats have 1000 tuples. The issue is that you cannot scale this test easily. Increasing by a factor of 10 would require stats of 10,000, and increasing by a factor of 100 is not possible. The Zipfian distribution used in the previous experiments causes the top few values to be exponentially better than the average value. For instance, the top 100 products may represent 10 to 50% of total sales even for 1 million products. In the previous case, the top 100 products represent only 0.0055% of total sales for 1 million products. This level of skew would be ignored by the algorithm, which has a cutoff value that at least 1% of the probe relation must match with the skew values buffered in memory.

To test higher values of skew, you could set up the experiment like this (you may want to scale down by a factor of 10 depending on your hardware):

products - 1 million
sales - 10 million
customers - 5

- Each customer has 2 million orders.
- Customer 1 orders each product equally.
- Customer 2 orders each product mod 10^2 equally.
- Customer 5 orders each product mod 10^5 equally.
It is customer 5's orders that result in most of the skew as every 100,000th product will be ordered 200,000 times (customer 5 only orders 10 products). Then, there is a huge benefit for customer 5 for keeping these 10 products in memory during the join. The benefit decreases for each customer all the way down to customer 1 which will see no benefit. -- Ramon Lawrence -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] GIN fast insert
Robert Haas writes: > I think this is related to the problems with gincostestimate() that > Tom Lane was complaining about here: > http://archives.postgresql.org/message-id/20441.1234209...@sss.pgh.pa.us > I am not 100% sure I'm understanding this correctly, but I think the > reason why gincostestimate() is so desperate to avoid index scans when > the pending list is long is because it knows that scanFastInsert() > will blow up if an index scan is actually attempted because of the > aforementioned TIDBitmap problem. This seems unacceptably fragile. Yipes. If that's really the reason then I agree, it's a nonstarter. > I think this code needs to be somehow rewritten to make things degrade > gracefully when the pending list is long - I'm not sure what the best > way to do that is. Inventing a new data structure to store TIDs that > is never lossy seems like it might work, but you'd have to think about > what to do if it got too big. What would be wrong with letting it degrade to lossy? I suppose the reason it's trying to avoid that is to avoid having to recheck all the rows on that page when it comes time to do the index insertion; but surely having to do that is better than having arbitrary, unpredictable failure conditions. It strikes me that part of the issue here is that the behavior of this code is much better adapted to the bitmap-scan API than the traditional indexscan API. Since GIN doesn't support ordered scan anyway, I wonder whether it wouldn't be more sensible to simply allow it to not offer the traditional API. It should be easy to make the planner ignore plain indexscan plans for an AM that didn't support them. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] GIN fast insert
Jeff Davis asked me if I'd be willing to do a review of the GIN fast insert patch about two weeks ago, but I haven't actually had a chance to read through it in detail until tonight. I can't say I really know anything about GIN (though I did take this opportunity to RTM), so apologies in advance if my comments here are totally off base. My basic impression of this code is that it's trying unnaturally hard to avoid creating a lossy TIDBitmap, which seems pretty odd, considering that the whole point of TIDBitmap is, AFAICS, to degrade to lossy mode when the alternative is eating too much memory. I'm particularly dismayed by this hunk:

+   if ( ntids == NULL && tbm && tbm_has_lossy(tbm) )
+       ereport(ERROR,
+               (errcode(ERRCODE_OUT_OF_MEMORY),
+                errmsg("not enough memory to store result of pending list or VACUUM table"),
+                errhint("Increase the \"work_mem\" parameter.")));

The definition of work_mem is the amount of memory that a hash table can use before spilling to disk, NOT the amount of memory a hash table can consume before arbitrarily failing. It's intended to be a soft limit which people can tune to get the best performance out of their system, not a hard limit that interrupts work. Using the limit this way will encourage people to set work_mem too high so that things don't crash, but that in turn will potentially lead to other bad behavior (like swapping). I see that there's already one other place in the GIN code that uses work_mem this way, but I don't think that's a good reason to add another one - I don't see any other place in the system that behaves this way, outside of GIN. I think this is related to the problems with gincostestimate() that Tom Lane was complaining about here: http://archives.postgresql.org/message-id/20441.1234209...@sss.pgh.pa.us I am not 100% sure I'm understanding this correctly, but I think the reason why gincostestimate() is so desperate to avoid index scans when the pending list is long is because it knows that scanFastInsert() will blow up if an index scan is actually attempted because of the aforementioned TIDBitmap problem. This seems unacceptably fragile. Faster insert performance is nice, but not if it means that my indices suddenly start switching themselves off (which is bad) or, in the presence of cached plans, crashing (which is worse). I think this code needs to be somehow rewritten to make things degrade gracefully when the pending list is long - I'm not sure what the best way to do that is. Inventing a new data structure to store TIDs that is never lossy seems like it might work, but you'd have to think about what to do if it got too big. I think it's probably best to just go ahead and let it get arbitrarily long (since you have no other option besides crashing) and leave work_mem out of it. Instead, put in a reloption that controls the maximum length of the pending list. This is easy to tune: if you make it bigger, insert performance improves, but queries eat more memory. If you make it smaller, insert performance gets worse, but you bound the memory that queries use more tightly. The other problem with using work_mem is that it can vary between sessions, so one session happily stuffs a lot of data into the pending list and then another session can't scan the index because it has a lower work_mem setting. Using a reloption avoids that problem by making the whole thing symmetric, plus it gives you fine-grained control over the behavior on an index-by-index basis. 
...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
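At the SQL level, the reloption Robert proposes might look like the sketch below. The option name is invented purely for illustration (GIN has no such storage parameter today); the WITH (...) and ALTER INDEX ... SET (...) syntax is the existing reloptions machinery already used for things like fillfactor.

    -- hypothetical storage parameter; the value is imagined to be in kB
    CREATE TABLE docs (id serial PRIMARY KEY, body text);

    CREATE INDEX docs_body_gin ON docs
        USING gin (to_tsvector('english', body))
        WITH (pending_list_limit = 4096);

    -- per-index tuning afterwards, independent of any session's work_mem
    ALTER INDEX docs_body_gin SET (pending_list_limit = 256);

Because the knob lives on the index rather than in a session GUC, every session scanning the index sees the same bound, which is exactly the symmetry argument above.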
Re: [HACKERS] Optimization rules for semi and anti joins
On Tue, Feb 10, 2009 at 8:41 PM, Tom Lane wrote: > "Jonah H. Harris" writes: > > Cripes! I just had an idea and it looks like the buggers beat me to it > :( > > http://www.google.com/patents?id=4bqBEBAJ&dq=null+aware+anti-join > > I wonder if the USPTO is really clueless enough to accept this? > Claim 1 would give Oracle ownership of the definition of NOT IN, > and few of the other claims seem exactly non-obvious either. Yeah, I just looked up semi and anti-join optimization patents and Oracle/IBM have a ton. What an obvious exploitation of math for business gain. I doubt they'd be enforceable. I wish they'd just do away with software patents altogether :( -- Jonah H. Harris, Senior DBA myYearbook.com
Re: [HACKERS] Optimization rules for semi and anti joins
"Jonah H. Harris" writes: > Cripes! I just had an idea and it looks like the buggers beat me to it :( > http://www.google.com/patents?id=4bqBEBAJ&dq=null+aware+anti-join I wonder if the USPTO is really clueless enough to accept this? Claim 1 would give Oracle ownership of the definition of NOT IN, and few of the other claims seem exactly non-obvious either. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Optimization rules for semi and anti joins
On Tue, Feb 10, 2009 at 8:09 PM, Jonah H. Harris wrote: > On Tue, Feb 10, 2009 at 3:10 PM, Tom Lane wrote: > >> I wrote (in response to Kevin Grittner's recent issues): >> > Reflecting on this further, I suspect there are also some bugs in the >> > planner's rules about when semi/antijoins can commute with other joins; >> >> After doing some math I've concluded this is in fact the case. Anyone >> want to check my work? > > > FWIW, the logic looks correct to me. Cripes! I just had an idea and it looks like the buggers beat me to it :( http://www.google.com/patents?id=4bqBEBAJ&dq=null+aware+anti-join -- Jonah H. Harris, Senior DBA myYearbook.com
Re: [HACKERS] Optimization rules for semi and anti joins
On Tue, Feb 10, 2009 at 3:10 PM, Tom Lane wrote: > I wrote (in response to Kevin Grittner's recent issues): > > Reflecting on this further, I suspect there are also some bugs in the > > planner's rules about when semi/antijoins can commute with other joins; > > After doing some math I've concluded this is in fact the case. Anyone > want to check my work? FWIW, the logic looks correct to me. -- Jonah H. Harris, Senior DBA myYearbook.com
Re: [HACKERS] PQinitSSL broken in some use cases
On Tue, Feb 10, 2009 at 5:02 PM, Bruce Momjian wrote: PQinitSSL(false) initializes crypto? Please point me to exact function calls that are the problem? Everything is very vague.

File: src/interfaces/libpq/fe-secure.c
Func: init_ssl_system
Line: 823

Starting at around line 853, this function prepares a lock array for CRYPTO_set_locking_callback. That function is not part of libssl, it's part of libcrypto. It also calls CRYPTO_set_id_callback. The rest of init_ssl_system appears to only make libssl calls. There should be an "if (pq_initcryptolib)" around those libcrypto calls, serving the same purpose as the pq_initssllib variable. -- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
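To make the proposal concrete, the guard might look roughly like this inside fe-secure.c. The pq_init_crypto_lib flag (and whatever API call would set it) is hypothetical; pq_initssllib, the pq_* callbacks and the SSL_* calls are the parts that are already there, with the surrounding lock-array setup elided.

    static bool pq_init_crypto_lib = true;  /* hypothetical new flag */

    static void
    init_ssl_system(void)
    {
        if (pq_init_crypto_lib)
        {
            /* take over libcrypto's callbacks only if the application
             * has not already initialized libcrypto itself */
            CRYPTO_set_id_callback(pq_threadidcallback);
            CRYPTO_set_locking_callback(pq_lockingcallback);
        }

        if (pq_initssllib)
        {
            SSL_library_init();
            SSL_load_error_strings();
        }
    }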
Re: [HACKERS] Bug #4284
> From: Tom Lane [mailto:t...@sss.pgh.pa.us] > Sent: 10 February 2009 22:30 > To: David Rowley > Cc: pgsql-hackers@postgresql.org > Subject: Re: [HACKERS] Bug #4284 > > "David Rowley" writes: > > My report contained a full re-creation script to reproduce the problem > and > > tonight I'm having the same problem with CVS Head. To my untrained eye > it > > looks like the planner is not properly pushing down the row count. > > It looks more like a multicolumn selectivity issue to me. The planner > is supposing that the join condition > > ON t1.productiondate = t2.productiondate AND t1.lineid = t2.lineid >AND t1.partcode = t2.partcode > > is going to eliminate some fair-size fraction of t1 rows, whereas in > fact the construction of t2 is such that it won't eliminate any of them. > This is less obviously true for the join to t4, but I imagine from the > rowcounts that it's also true there. So you get an unreasonably small > rowcount for whichever join gets done first, and then the nestloop plan > looks like a good idea for the second join. At work I've been (unwillingly) working with MS SQL Server 2008. I notice that when I request a query plan it's an "Estimated" query plan. Perhaps this is to get around problems with bad row estimates. In theory at least it should be possible to revert back to another plan after the inner nested queries have processed and before the join takes place... I know that's a major change, I'm just theorising. Apart from that, do you think that this could only be fixed with stats that span multiple columns? I know this was talked about not so long ago. David. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] The testing of multi-batch hash joins with skewed data sets patch
"David Rowley" writes: > Currently I'm unsure the best way to ensure that the hash join goes into > more than one batch apart from just making the dataset very large. Make work_mem very small? But really there are two different performance regimes here, one where the hash data is large enough to spill to disk and one where it isn't. Reducing work_mem will cause data to spill into kernel disk cache, but if the total problem fits in RAM then very possibly that data won't ever really go to disk. So I suspect such a test case will act more like the small-data case than the big-data case. You probably actually need more data than RAM to be sure you're testing the big-data case. Regardless, I'd like to see some performance results from both regimes. It's also important to be sure there is not a penalty for single-batch cases. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Bug #4284
"David Rowley" writes: > My report contained a full re-creation script to reproduce the problem and > tonight I'm having the same problem with CVS Head. To my untrained eye it > looks like the planner is not properly pushing down the row count. It looks more like a multicolumn selectivity issue to me. The planner is supposing that the join condition ON t1.productiondate = t2.productiondate AND t1.lineid = t2.lineid AND t1.partcode = t2.partcode is going to eliminate some fair-size fraction of t1 rows, whereas in fact the construction of t2 is such that it won't eliminate any of them. This is less obviously true for the join to t4, but I imagine from the rowcounts that it's also true there. So you get an unreasonably small rowcount for whichever join gets done first, and then the nestloop plan looks like a good idea for the second join. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Optimization rules for semi and anti joins
On Tue, Feb 10, 2009 at 5:03 PM, Tom Lane wrote: > Robert Haas writes: >> I don't understand why antijoins need to null-extend the tuple at all. > > Well, we are talking theoretical definition here, not implementation. > But if you need an example where the column values can be referenced: > >select * from a left join b on (a.id = b.id) >where b.id is null > > 8.4 does recognize this as an antijoin, if the join operator is strict. Oh, I see. Hmm. >> In the case of a semijoin, it's theoretically possible that there >> could be syntax which allows access to the attributes of the outer >> side of the relation, though IN and EXISTS do not. > > Actually, that makes less sense than the antijoin case. For antijoin > there is a well-defined value for the extended columns, ie null. For > a semijoin the RHS values might come from any of the rows that happen > to join to the current LHS row, so I'm just as happy that it's > syntactically impossible to reference them. You might some day want to optimize this case as a semijoin, or something similar to a semijoin: SELECT foo.a, (SELECT bar.b FROM bar WHERE bar.a = foo.a) FROM foo; ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use cases
On Tue, Feb 10, 2009 at 5:02 PM, Bruce Momjian wrote: > Merlin Moncure wrote: >> > PQinitSSL(0) was specifically designed to allow applications to set up >> > SSL on their own. How does this not work properly? >> >> this has nothing to do with who initializes ssl. this is all about >> *crypto*. remember, crypto and ssl are two separate libraries. The >> application or library in question may not even link with ssl or use >> ssl headers. >> >> The problem is PQinitSSL (re-) initializes crypto without asking if that's >> ok. > > PQinitSSL(false) initializes crypto? Please point me to exact function > calls that are the problem? Everything is very vague. nooo, you are not listening :-)

PQinitSSL(0) initializes libpq for ssl but leaves crypto and ssl initialization to the app.
PQinitSSL(1) initializes libpq, crypto, and ssl libraries.

Now, consider an app that uses libcrypto for its own requirements *but not libssl*. It initializes libcrypto, passing its own lock vector, etc. It cannot, however, initialize ssl, because it does not link with ssl or include ssl headers. There are no ssl functions to call, and it wouldn't make sense to expect the app to do this even if there were. Now, if this app also has a libpq dependency, it needs a way to tell libpq: 'I have already initialized the crypto library, but could you please set up libssl'. Otherwise you end up re-initializing libcrypto with a different lock vector, which is very bad if there are any locks already in use, which is quite likely. There is no way to do that with libpq, so you see that no matter how you call PQinitSSL, the application is broken in some way. Passing 0 breaks because ssl never ends up getting set up, and passing 1 breaks because libcrypto's locks get messed up. The main problem is that libpq's PQinitSSL makes the broad (and extremely dangerous) assumption that it is the only one interested in libcrypto's lock vector. In short, it's broken. merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
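For anyone following along, this is roughly what the application side looks like with OpenSSL's callback-based locking (standard pre-1.1.0 OpenSSL and pthreads usage; nothing libpq-specific is assumed). The point is that libpq currently has no mode that leaves these callbacks alone while still initializing libssl for its own use:

    #include <stdlib.h>
    #include <pthread.h>
    #include <openssl/crypto.h>

    static pthread_mutex_t *app_locks;

    static void
    app_locking_callback(int mode, int n, const char *file, int line)
    {
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&app_locks[n]);
        else
            pthread_mutex_unlock(&app_locks[n]);
    }

    void
    app_init_libcrypto(void)
    {
        int     i;
        int     nlocks = CRYPTO_num_locks();

        app_locks = malloc(nlocks * sizeof(pthread_mutex_t));
        for (i = 0; i < nlocks; i++)
            pthread_mutex_init(&app_locks[i], NULL);

        CRYPTO_set_locking_callback(app_locking_callback);
        /* a CRYPTO_set_id_callback() would normally be installed here too */
    }

After app_init_libcrypto() has run, PQinitSSL(1) silently replaces the locking callback installed above, while PQinitSSL(0) leaves libssl entirely uninitialized for an application that does not link against it, which is exactly the bind Merlin describes.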
[HACKERS] The testing of multi-batch hash joins with skewed data sets patch
I've been putting a little bit of thought into how to go about testing the performance of this patch. From reading the previous threads, quite a bit of testing was done with a certain data set, and everyone who tested it found it to be a big winner, with staggering performance gains on the skewed dataset. Still, the wiki page states that it needs performance testing. I'm guessing what we really need to ask now is: are non-skewed sets any slower now? Where do we start seeing the gains? So I've been working a little on a set of data that can be created simply by running a few SQL statements. I've yet to run the tests, as I'm having a hardware problem with my laptop. In the meantime I thought I'd share what I was going to test with the community to see if I'm going about things the right way. The idea I came up with for benchmarking is a little similar to what I remember from the original tests. I have a sales orders table and a products table. My version of the sales orders table contains a customer column. Data for 10 customers is populated into the sales orders table; customer 1 has a totally non-skewed set of orders, whereas customer 10 has the most skew. I've done this by creating 10,000 products, each with a product code that has been cast into a varchar and padded up to 5 chars in length with '0's. Each customer has the same number of rows in the sales orders table; customer 10 mostly orders products that when cast as INT are evenly divisible by 10, while customer 2 mostly orders products that are evenly divisible by 2. You get the idea. Once I get this laptop sorted out or get access to some better hardware, my plan is to benchmark and chart the results for customers 1 to 10, both with and without the patch. What I hope to show is that customer 1 performs almost the same with the patch as without it, and hopefully to see an even rise in performance as the customer number increases. Currently I'm unsure of the best way to ensure that the hash join goes into more than one batch, apart from just making the dataset very large. Does anyone have any thoughts about the way I plan to go about benchmarking? Please see the attached document for the benchmark script. David. mbhj_patch_tests.sql Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
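Since the attachment itself isn't reproduced here, a rough sketch of the generator David describes could look like this; the table and column names are guesses, not taken from the attached script, and customer c is made to order only (rather than mostly) products whose code is divisible by c:

    CREATE TABLE products (
        productcode varchar(5) PRIMARY KEY,
        description text
    );

    INSERT INTO products
    SELECT lpad(g::text, 5, '0'), 'product ' || g
    FROM generate_series(1, 10000) g;

    CREATE TABLE sales (
        customerid  int,
        productcode varchar(5) REFERENCES products
    );

    -- customer 1 is unskewed; customer 10 concentrates on one product in ten
    INSERT INTO sales
    SELECT c,
           lpad(((floor(random() * (10000 / c))::int + 1) * c)::text, 5, '0')
    FROM generate_series(1, 10) AS c,
         generate_series(1, 100000) AS s;

    ANALYZE products;
    ANALYZE sales;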
Re: [HACKERS] Optimization rules for semi and anti joins
Robert Haas writes: > I don't understand why antijoins need to null-extend the tuple at all. Well, we are talking theoretical definition here, not implementation. But if you need an example where the column values can be referenced: select * from a left join b on (a.id = b.id) where b.id is null 8.4 does recognize this as an antijoin, if the join operator is strict. > In the case of a semijoin, it's theoretically possible that there > could be syntax which allows access to the attributes of the outer > side of the relation, though IN and EXISTS do not. Actually, that makes less sense than the antijoin case. For antijoin there is a well-defined value for the extended columns, ie null. For a semijoin the RHS values might come from any of the rows that happen to join to the current LHS row, so I'm just as happy that it's syntactically impossible to reference them. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
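A quick way to watch that recognition happen, using throwaway tables and the (strict) integer equality operator:

    CREATE TABLE a (id int);
    CREATE TABLE b (id int);

    EXPLAIN
    SELECT *
    FROM a LEFT JOIN b ON a.id = b.id
    WHERE b.id IS NULL;

    -- On 8.4 the plan should show an Anti Join node (e.g. "Hash Anti Join")
    -- rather than a Left Join followed by a filter on b.id IS NULL.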
Re: [HACKERS] Optimization rules for semi and anti joins
> A6. (A antijoin B on (Pab)) leftjoin C on (Pbc) >= A antijoin (B leftjoin C on (Pbc)) on (Pab) > > The second form is in fact equivalent to null-extending the A/B antijoin > --- the actual contents of C cannot affect the result. So we could just I don't understand why antijoins need to null-extend the tuple at all. It seems that it would be cheaper and all-around simpler to just pass through the left-hand tuple unchanged. In the case of a semijoin, it's theoretically possible that there could be syntax which allows access to the attributes of the outer side of the relation, though IN and EXISTS do not. But with an antijoin that's just nonsense, so I don't quite understand why we're handling it as we are. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Bug #4284
I had an email today about an old bug that I reported back in July 2008. http://archives.postgresql.org/pgsql-bugs/2008-07/msg00026.php I didn't receive any response at the time and I didn't really follow it up. My report contained a full re-creation script to reproduce the problem, and tonight I'm seeing the same problem with CVS HEAD. To my untrained eye it looks like the planner is not properly pushing down the row count. I know this is a busy time for all, but this seems to be distressing a few people. At the time I didn't find any workaround apart from disabling nested loops. Would anyone gifted in the art of the query planner be able to look at this? David.
Re: [HACKERS] advance local xmin more aggressively
Alvaro Herrera writes: > For example, maybe we could keep track of counts of snapshots removed > since the last xmin calculation, and only run this routine if the number > is different from zero (or some small positive integer). I think most of the callers of SnapshotResetXmin already know they removed something. It might be interesting for FreeSnapshot or something nearby to note whether the snapshot being killed has xmin = proc's xmin, and only do the update calculation if so. I still dislike the assumption that all resource owners are children of a known owner. I suspect in fact that it's demonstrably wrong right now, let alone in future (cf comments in PortalRun). If we're going to do this then snapmgr.c needs to track the snapshots for itself. Of course that's going to make the "is it worth it" question even more pressing. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] advance local xmin more aggressively
Tom Lane wrote: > Jeff Davis writes: > > With the new snapshot maintenance code, it looks like we can advance the > > xmin more aggressively. > > The original design for that contemplated having snapmgr.c track > all the snapshots (cf the comment for RegisteredSnapshots). I don't > particularly care for having it assume that it can find all the resource > owners. > > But really the more important problem is to demonstrate that you > actually get a benefit commensurate with the additional cycles spent. > IIRC the reason the code is the way it is is that we concluded that for > typical usage patterns there wouldn't be any win from tracking things > more aggressively. As somebody pointed out recently, SnapshotResetXmin > is called quite a lot; if it's expensive it's going to be a problem. I think Jeff is coming from the Truviso point of view: they have very long running transactions, and they normally have a number of snapshots that are always being updated, but it's rare that there are no snapshots at all. So this optimization actually buys a chance to update Xmin at all; with the current code, they keep the same xmin all the time because there's always some snapshot. I'm not sure if the best answer is to just state that Truviso should keep maintaining this patch privately. It would be, of course, much better to come up with a way to keep track of this in a cheaper way. For example, maybe we could keep track of counts of snapshots removed since the last xmin calculation, and only run this routine if the number is different from zero (or some small positive integer). -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
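A very rough sketch of that bookkeeping, with invented variable names (this is not part of Jeff's patch, and the recomputation itself is elided):

    /* bumped wherever a snapshot is unregistered or freed */
    static int  snapshots_released_since_reset = 0;

    static void
    SnapshotResetXmin(void)
    {
        if (snapshots_released_since_reset == 0)
            return;             /* nothing went away, so xmin cannot advance */
        snapshots_released_since_reset = 0;

        if (RegisteredSnapshots == 0 && ActiveSnapshot == NULL)
        {
            MyProc->xmin = InvalidTransactionId;
            return;
        }

        /* ... otherwise recompute the minimum xmin as in Jeff's patch ... */
    }

Tom's variant, noting whether the snapshot being killed actually carried the proc's current xmin, would skip even more recomputations, at the cost of one extra comparison per release.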
Re: [HACKERS] Re: [COMMITTERS] pgsql: Update autovacuum to use reloptions instead of a system catalog,
Alvaro Herrera writes: > Note that it introduces a LEFT JOIN on pg_class to itself that's always > present, even for server versions that do not support reloptions. Personally I'd be more worried about the unnest(). Also, please schema-qualify that function name; you can't assume anything about the search path here. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
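For reference, the schema-qualified spelling would presumably be along these lines; written here as a standalone query (inspecting pg_class itself, just to have something to run it against) rather than as the describe.c printf fragment:

    SELECT c.relname,
           pg_catalog.array_to_string(
               c.reloptions ||
               ARRAY(SELECT 'toast.' || x
                     FROM pg_catalog.unnest(tc.reloptions) AS x), ', ') AS reloptions
    FROM pg_catalog.pg_class c
         LEFT JOIN pg_catalog.pg_class tc ON c.reltoastrelid = tc.oid
    WHERE c.relname = 'pg_class';

The unnest() call is of course only available on an 8.4 server, which is presumably part of the concern about using it here.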
Re: [HACKERS] advance local xmin more aggressively
Jeff Davis writes: > With the new snapshot maintenance code, it looks like we can advance the > xmin more aggressively. The original design for that contemplated having snapmgr.c track all the snapshots (cf the comment for RegisteredSnapshots). I don't particularly care for having it assume that it can find all the resource owners. But really the more important problem is to demonstrate that you actually get a benefit commensurate with the additional cycles spent. IIRC the reason the code is the way it is is that we concluded that for typical usage patterns there wouldn't be any win from tracking things more aggressively. As somebody pointed out recently, SnapshotResetXmin is called quite a lot; if it's expensive it's going to be a problem. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Optimization rules for semi and anti joins
Hi, On 10 Feb 2009, at 21:10, Tom Lane wrote: I wrote (in response to Kevin Grittner's recent issues): Reflecting on this further, I suspect there are also some bugs in the planner's rules about when semi/antijoins can commute with other joins; After doing some math I've concluded this is in fact the case. Anyone want to check my work? I don't know how easy it would be to do, but maybe the Coq formal proof management system could help us here: http://coq.inria.fr/ The harder part in using Coq might well be to specify the problem the way you just did, so... HTH, -- dim -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pgsql: Update autovacuum to use reloptions instead of a system catalog,
ITAGAKI Takahiro wrote: > 2. psql's \d+ doesn't show toast storage parameters. > > Neither \d+ for base tables nor toast relations show toast.* parameters > though there are some values in pg_class.reloptions. > I think we should show toast.* parameters in \d+ for base tables > because it has consistency; we set them at ALTER TABLE for base tables. This patch seems to fix this problem. Note that it introduces a LEFT JOIN on pg_class to itself that's always present, even for server versions that do not support reloptions. I'm not sure that this is a problem; I can't measure any difference on \d with and without the patch on a test database with 1000 tables. Index: src/bin/psql/describe.c === RCS file: /home/alvherre/cvs/pgsql/src/bin/psql/describe.c,v retrieving revision 1.198 diff -c -p -r1.198 describe.c *** src/bin/psql/describe.c 22 Jan 2009 20:16:08 - 1.198 --- src/bin/psql/describe.c 10 Feb 2009 18:13:36 - *** *** 8,14 * * Copyright (c) 2000-2009, PostgreSQL Global Development Group * ! * $PostgreSQL: pgsql/src/bin/psql/describe.c,v 1.198 2009-01-22 20:16:08 tgl Exp $ */ #include "postgres_fe.h" --- 8,14 * * Copyright (c) 2000-2009, PostgreSQL Global Development Group * ! * $PostgreSQL: pgsql/src/bin/psql/describe.c,v 1.197 2009/01/20 02:13:42 momjian Exp $ */ #include "postgres_fe.h" *** describeOneTableDetails(const char *sche *** 910,923 /* Get general table info */ printfPQExpBuffer(&buf, ! "SELECT relchecks, relkind, relhasindex, relhasrules, %s, " ! "relhasoids" "%s%s\n" ! "FROM pg_catalog.pg_class WHERE oid = '%s'", ! (pset.sversion >= 80400 ? "relhastriggers" : "reltriggers <> 0"), (pset.sversion >= 80200 && verbose ? ! ", pg_catalog.array_to_string(reloptions, E', ')" : ",''"), ! (pset.sversion >= 8 ? ", reltablespace" : ""), oid); res = PSQLexec(buf.data, false); if (!res) --- 910,927 /* Get general table info */ printfPQExpBuffer(&buf, ! "SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules, %s, " ! "c.relhasoids" "%s%s\n" ! "FROM pg_catalog.pg_class c " ! "LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid) " ! "WHERE c.oid = '%s'", ! (pset.sversion >= 80400 ? "c.relhastriggers" : "c.reltriggers <> 0"), (pset.sversion >= 80200 && verbose ? ! ", pg_catalog.array_to_string(c.reloptions || " ! "array(select 'toast.' || x from unnest(tc.reloptions) x), ', ')" ! : ",''"), ! (pset.sversion >= 8 ? ", c.reltablespace" : ""), oid); res = PSQLexec(buf.data, false); if (!res) -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] advance local xmin more aggressively
With the new snapshot maintenance code, it looks like we can advance the xmin more aggressively. For instance: S1: INSERT INTO foo VALUES(1); S2: BEGIN; DECLARE c1 CURSOR FOR SELECT i FROM foo; S1: DELETE FROM foo; S2: DECLARE c2 CURSOR FOR SELECT i FROM foo; CLOSE c1; S1: VACUUM VERBOSE foo; The VACUUM should be able to clean up the deleted tuple, because it's no longer visible to anyone. Attached a small patch to accomplish this. I don't expect it to be put in 8.4, but it's small enough that I thought I should at least send it in just in case. Regards, Jeff Davis diff --git a/src/backend/utils/resowner/resowner.c b/src/backend/utils/resowner/resowner.c index 7b6e15b..06bf425 100644 --- a/src/backend/utils/resowner/resowner.c +++ b/src/backend/utils/resowner/resowner.c @@ -21,6 +21,7 @@ #include "postgres.h" #include "access/hash.h" +#include "access/transam.h" #include "storage/bufmgr.h" #include "storage/proc.h" #include "utils/memutils.h" @@ -1026,6 +1027,47 @@ ResourceOwnerForgetSnapshot(ResourceOwner owner, Snapshot snapshot) } /* + * Find the smallest xmin among all ResourceOwners under owner. + */ +TransactionId +ResourceOwnerSnapshotsMinXmin(ResourceOwner owner) +{ + ResourceOwner child; + Snapshot *snapshots = owner->snapshots; + int ns1 = owner->nsnapshots - 1; + int i; + TransactionId min_xmin = InvalidTransactionId; + + for (i = ns1; i >= 0; i--) + { + TransactionId xmin = snapshots[i]->xmin; + + if (!TransactionIdIsValid(xmin)) + continue; + + if (!TransactionIdIsValid(min_xmin) || + TransactionIdPrecedes(xmin, min_xmin)) + min_xmin = xmin; + } + + /* Recurse to handle descendants */ + for (child = owner->firstchild; child != NULL; child = child->nextchild) + { + TransactionId xmin = ResourceOwnerSnapshotsMinXmin(child); + + if (!TransactionIdIsValid(xmin)) + continue; + + if (!TransactionIdIsValid(min_xmin) || + TransactionIdPrecedes(xmin, min_xmin)) + min_xmin = xmin; + } + + return min_xmin; +} + + +/* * Debugging subroutine */ static void diff --git a/src/backend/utils/time/snapmgr.c b/src/backend/utils/time/snapmgr.c index 9992895..b7b0506 100644 --- a/src/backend/utils/time/snapmgr.c +++ b/src/backend/utils/time/snapmgr.c @@ -107,6 +107,7 @@ static boolregistered_serializable = false; static Snapshot CopySnapshot(Snapshot snapshot); static void FreeSnapshot(Snapshot snapshot); static void SnapshotResetXmin(void); +static TransactionId GetTrueLocalXmin(void); /* @@ -432,8 +433,53 @@ UnregisterSnapshotFromOwner(Snapshot snapshot, ResourceOwner owner) static void SnapshotResetXmin(void) { + TransactionId local_xmin; + if (RegisteredSnapshots == 0 && ActiveSnapshot == NULL) + { MyProc->xmin = InvalidTransactionId; + return; + } + + /* + * The transaction may have a snapshot but no xmin during abort + * when the transaction has a registered snapshot that is not + * active. + */ + if (!TransactionIdIsValid(MyProc->xmin)) + return; + + local_xmin = GetTrueLocalXmin(); + if (!TransactionIdIsValid(local_xmin) || + TransactionIdPrecedes(MyProc->xmin, local_xmin)) + MyProc->xmin = local_xmin; +} + +/* + * Returns the smallest xmin value in use by any of the active + * snapshots in the current process. 
+ */ +static TransactionId +GetTrueLocalXmin(void) +{ + TransactionId min_xmin = InvalidTransactionId; + ActiveSnapshotElt *active_elt; + + min_xmin = ResourceOwnerSnapshotsMinXmin(TopTransactionResourceOwner); + + for (active_elt = ActiveSnapshot; active_elt != NULL; active_elt = active_elt->as_next) + { + TransactionId xmin = active_elt->as_snap->xmin; + + if (!TransactionIdIsValid(xmin)) + continue; + + if (!TransactionIdIsValid(min_xmin) || + TransactionIdPrecedes(xmin, min_xmin)) + min_xmin = xmin; + } + + return min_xmin; } /* @@ -458,7 +504,7 @@ AtSubCommit_Snapshot(int level) /* * AtSubAbort_Snapshot - * Clean up snapshots after a subtransaction abort + * Clean up snapshots after a subtransaction abort */ void AtSubAbort_Snapshot(int level) diff --git a/src/include/utils/resowner.h b/src/include/utils/resowner.h index 3f05bf4..f4e051e 100644 --- a/src/include/utils/resowner.h +++ b/src/include/utils/resowner.h @@ -128,5 +128,6 @@ extern void ResourceOwnerRememberSnapshot(ResourceOwner owner, Snapshot snapshot); extern void ResourceOwnerForgetSnapshot(ResourceOwner owner, Snapshot snapshot); +extern TransactionId ResourceOwnerSnapshotsMinXmin(ResourceOwner owner); #endif /* RESOWNER_H */ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Optimization rules for semi and anti joins
I wrote (in response to Kevin Grittner's recent issues): > Reflecting on this further, I suspect there are also some bugs in the > planner's rules about when semi/antijoins can commute with other joins;

After doing some math I've concluded this is in fact the case. Anyone want to check my work?

regards, tom lane

---

The basic outer-join identities used up through 8.3 are (as quoted from optimizer/README):

1. (A leftjoin B on (Pab)) innerjoin C on (Pac)
   = (A innerjoin C on (Pac)) leftjoin B on (Pab)

where Pac is a predicate referencing A and C, etc (in this case, clearly Pac cannot reference B, or the transformation is nonsensical).

2. (A leftjoin B on (Pab)) leftjoin C on (Pac)
   = (A leftjoin C on (Pac)) leftjoin B on (Pab)

3. (A leftjoin B on (Pab)) leftjoin C on (Pbc)
   = A leftjoin (B leftjoin C on (Pbc)) on (Pab)

Identity 3 only holds if predicate Pbc must fail for all-null B rows (that is, Pbc is strict for at least one column of B).

How do these relate to semi/antijoins?

Semijoin can be rearranged as freely as inner join

We have these properties:

S1. (A semijoin B on (Pab)) semijoin C on (Pac)
    = (A semijoin C on (Pac)) semijoin B on (Pab)

The two semijoins amount to independent filter conditions for each A row, so we can test them in either order.

S2. (A semijoin B on (Pab)) innerjoin C on (Pac)
    = (A innerjoin C on (Pac)) semijoin B on (Pab)

This is only really safe if Pab and Pac are nonvolatile, since we might evaluate them different numbers of times in the second form, but that is true for innerjoin associativity as well. Currently, since we only create semijoins from EXISTS subqueries with nonvolatile WHERE clauses, we know a priori that Pab is nonvolatile; and we have never bothered to worry about the volatility of innerjoin clauses, so I don't see that there's a reason to be extra careful with Pac.

We also have to consider whether semijoin re-associates with outer joins in the same ways as an inner join does.

S3. (A leftjoin B on (Pab)) semijoin C on (Pac)
    = (A semijoin C on (Pac)) leftjoin B on (Pab)

This also works as long as the quals are nonvolatile.

S4. (A antijoin B on (Pab)) semijoin C on (Pac)
    = (A semijoin C on (Pac)) antijoin B on (Pab)

Again, these are independent conditions on each A row.

Hence semijoins can be rearranged just as freely as inner joins.

Antijoin is a tad more strict than left join

We have these properties:

A1. (A antijoin B on (Pab)) innerjoin C on (Pac)
    = (A innerjoin C on (Pac)) antijoin B on (Pab)

True given nonvolatile quals.

A2. (A antijoin B on (Pab)) antijoin C on (Pac)
    = (A antijoin C on (Pac)) antijoin B on (Pab)

Again, these are independent conditions on each A row.

A3? (A antijoin B on (Pab)) antijoin C on (Pbc)
    = A antijoin (B antijoin C on (Pbc)) on (Pab)

This one unfortunately fails for antijoins. For example assume that all three are one-column relations with equality join conditions, and

A = (1), (2)
B = (1)
C = (1)

The antijoin of A and B is (2,NULL), and antijoining that to C gives (2,NULL,NULL). But the antijoin of B and C is empty, so the second form produces output (1,NULL,NULL), (2,NULL,NULL).

We also have to consider mixed antijoin/leftjoin cases in identities 2 and 3.

A4. (A antijoin B on (Pab)) leftjoin C on (Pac)
    = (A leftjoin C on (Pac)) antijoin B on (Pab)

True given nonvolatile quals.

A5. (A leftjoin B on (Pab)) antijoin C on (Pac)
    = (A antijoin C on (Pac)) leftjoin B on (Pab)

This is just a restatement of A4.

A6. (A antijoin B on (Pab)) leftjoin C on (Pbc)
    = A antijoin (B leftjoin C on (Pbc)) on (Pab)

The second form is in fact equivalent to null-extending the A/B antijoin --- the actual contents of C cannot affect the result. So we could just drop C altogether. (I'm not going to do anything about that now, but it's something to consider for the planned join-elimination optimization.) In the first form, if Pbc is strict on B then it must fail for all rows of the antijoin result so we get the null-extended A/B result. If Pbc is not strict then the first form might duplicate some rows in the antijoin result, or produce non-null-extended rows. So in this case the identity holds only if Pbc is strict, which is the same as for left joins.

A7? (A leftjoin B on (Pab)) antijoin C on (Pbc)
    = A leftjoin (B antijoin C on (Pbc)) on (Pab)

This identity fails even if both qual clauses are strict (try it in the same example as above).

So: identity 3 fails if C is an antijoin, but otherwise antijoins associate like leftjoins.

The bottom line is that the current code isn't strict enough for antijoins but is too strict for semijoins. Also, if we fix the second part
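The A3 counterexample is easy to check with real queries, writing each antijoin with NOT EXISTS (table names invented):

    CREATE TABLE a (x int);  INSERT INTO a VALUES (1), (2);
    CREATE TABLE b (x int);  INSERT INTO b VALUES (1);
    CREATE TABLE c (x int);  INSERT INTO c VALUES (1);

    -- First form: (A antijoin B on a.x = b.x) antijoin C on b.x = c.x
    SELECT ab.ax, ab.bx
    FROM (SELECT a.x AS ax, b.x AS bx
          FROM a LEFT JOIN b ON a.x = b.x
          WHERE b.x IS NULL) ab
    WHERE NOT EXISTS (SELECT 1 FROM c WHERE c.x = ab.bx);
    -- one row: (2, NULL)

    -- Second form: A antijoin (B antijoin C on b.x = c.x) on a.x = b.x
    SELECT a.x
    FROM a
    WHERE NOT EXISTS (
        SELECT 1 FROM b
        WHERE b.x = a.x
          AND NOT EXISTS (SELECT 1 FROM c WHERE c.x = b.x));
    -- two rows: (1) and (2), so the two orderings are not interchangeable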
Re: [HACKERS] HotStandby-Patch and WAL_DEBUG
On Tue, 2009-02-10 at 15:17 +0100, Bernd Helmle wrote: > I'm currently testing HotStandby v9g. > > Seems like this patch version misses to update guc.c, which still > references XLOG_DEBUG when compiled with WAL_DEBUG. This got replaced with > XLOG_DEBUG_FLUSH, *_BGFLUSH and *_REDO, resulting in a compile error. Maybe > we want to reflect those changes with new developer guc's? Thanks, will fix. I was setting those directly inside the patch. I'll just have separate ones for flush and redo. The distinction between flush and bgflush is too narrow to be important. -- Simon Riggs www.2ndQuadrant.com PostgreSQL Training, Services and Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 1:02 PM, Magnus Hagander wrote: > Merlin Moncure wrote: >> On Tue, Feb 10, 2009 at 12:03 PM, Dave Page wrote: >>> On Tue, Feb 10, 2009 at 4:57 PM, Merlin Moncure wrote: >>> PQinitSSL is *broken*. It's always been broken. Since it already takes a parameter, I say add a special switch...the backwards compatibility danger doesn't seem too bad. >>> Add a switch to what? I get very nervous for our Windows users when >>> people start talking about changing the libpq API (for those that >>> don't know, Windows doesn't have DLL versioning like Unix - so any >>> non-backwards compatible API change really needs a corresponding >>> filename change to avoid pain and suffering). >> >> PQinitSSL(SSL_ONLY) or something, where the constant is carefully >> chosen to not be accidentally passed in by older libpq users. > > So how are you planinng to deal with it when your application passes > that to a version of libpq that doesn't support it? well, either nothing, which is no worse off than we are now, or backpatch the fix. probably nothing :-) merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Merlin Moncure wrote: > On Tue, Feb 10, 2009 at 12:03 PM, Dave Page wrote: >> On Tue, Feb 10, 2009 at 4:57 PM, Merlin Moncure wrote: >> >>> PQinitSSL is *broken*. It's always been broken. Since it already >>> takes a parameter, I say add a special switch...the backwards >>> compatibility danger doesn't seem too bad. >> Add a switch to what? I get very nervous for our Windows users when >> people start talking about changing the libpq API (for those that >> don't know, Windows doesn't have DLL versioning like Unix - so any >> non-backwards compatible API change really needs a corresponding >> filename change to avoid pain and suffering). > > PQinitSSL(SSL_ONLY) or something, where the constant is carefully > chosen to not be accidentally passed in by older libpq users. So how are you planning to deal with it when your application passes that to a version of libpq that doesn't support it? //Magnus -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [COMMITTERS] pg-migrator - src: Add support for specifying port numbers.
Tom Lane wrote: > Heikki Linnakangas writes: > > I was thinking that we could sidestep the whole port number question if > > we didn't try to start up postmaster, and used a stand-alone backend ( > > postgres --single) instead. > > That would be a good place to get to eventually, but I think it'd be > a serious error to be expending development effort on the point right > now. We need a *working* migrator; simplifying its use can come later. And I will have it working for 8.4, if it doesn't already work (I am not sure). -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 5:05 PM, Merlin Moncure wrote: > PQinitSSL(SSL_ONLY) or something, where the constant is carefully > chosen to not be accidentally passed in by older libpq users. Ahh, OK. That would be painless. -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Re: [COMMITTERS] pg-migrator - src: Add support for specifying port numbers.
Heikki Linnakangas writes: > I was thinking that we could sidestep the whole port number question if > we didn't try to start up postmaster, and used a stand-alone backend ( > postgres --single) instead. That would be a good place to get to eventually, but I think it'd be a serious error to be expending development effort on the point right now. We need a *working* migrator; simplifying its use can come later. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 12:03 PM, Merlin Moncure wrote: > On Tue, Feb 10, 2009 at 11:54 AM, Bruce Momjian wrote: >> Merlin Moncure wrote: >>> On Tue, Feb 10, 2009 at 11:52 AM, Bruce Momjian wrote: >>> > Robert Haas wrote: >>> >> > Would someone remind me why turning off ssl initialization in libpq >>> >> > does >>> >> > not work for this case? >>> >> >>> >> That initializes both libcrypto and libssl. The problem arises when >>> >> libcrypto has been initialized but libssl has not. >>> > >>> > So initialize ssl in your application? What is the problem? >>> >>> then libpq doesn't work. > > PQinitSSL is required if you want to make any ssl calls (it does some > libpq setup beyond the ssl library initialization). that was worded badly. Rather, PQinitSSL is required if you need to use ssl features within libpq. merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pg-migrator - src: Add support for specifying port numbers.
Heikki Linnakangas wrote: > Bruce Momjian wrote: > > Heikki Linnakangas wrote: > >> User Bmomjian wrote: > >>> Log Message: > >>> --- > >>> Add support for specifying port numbers. > >> Hmm, I suppose we can't readily run pg_dump against a stand-alone backend? > > > > I am confused by the question; we used to default to the 5432 port > > numbers. > > I was thinking that we could sidestep the whole port number question if > we didn't try to start up postmaster, and used a stand-alone backend ( > postgres --single) instead. Libpq can't connect to a stand-alone backend and we have many libpq queries. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 12:03 PM, Dave Page wrote: > On Tue, Feb 10, 2009 at 4:57 PM, Merlin Moncure wrote: > >> PQinitSSL is *broken*. It's always been broken. Since it already >> takes a parameter, I say add a special switch...the backwards >> compatibility danger doesn't seem too bad. > > Add a switch to what? I get very nervous for our Windows users when > people start talking about changing the libpq API (for those that > don't know, Windows doesn't have DLL versioning like Unix - so any > non-backwards compatible API change really needs a corresponding > filename change to avoid pain and suffering). PQinitSSL(SSL_ONLY) or something, where the constant is carefully chosen to not be accidentally passed in by older libpq users. merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Bruce Momjian wrote: Andrew Chernow wrote: Tom Lane wrote: If that's all you want, then PQinitSSLExtended is a perfectly good answer. How about PQinitCrypto(bool do_init), which would default to TRUE if never called. PQinitSSL already handles the SSL part, just need control over initializing crypto. Folks, we need a lot more demand before we add functions to libpq. Honestly, I'm not suggesting that a function is added. If others decide to do that, I think the function's purpose should be to init crypto. We don't need another way to init ssl. -- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 4:57 PM, Merlin Moncure wrote: > PQinitSSL is *broken*. It's always been broken. Since it already > takes a parameter, I say add a special switch...the backwards > compatibility danger doesn't seem too bad. Add a switch to what? I get very nervous for our Windows users when people start talking about changing the libpq API (for those that don't know, Windows doesn't have DLL versioning like Unix - so any non-backwards compatible API change really needs a corresponding filename change to avoid pain and suffering). -- Dave Page EnterpriseDB UK: http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 11:54 AM, Bruce Momjian wrote: > Merlin Moncure wrote: >> On Tue, Feb 10, 2009 at 11:52 AM, Bruce Momjian wrote: >> > Robert Haas wrote: >> >> > Would someone remind me why turning off ssl initialization in libpq does >> >> > not work for this case? >> >> >> >> That initializes both libcrypto and libssl. The problem arises when >> >> libcrypto has been initialized but libssl has not. >> > >> > So initialize ssl in your application? What is the problem? >> >> then libpq doesn't work. PQinitSSL is required if you want to make any ssl calls (it does some libpq setup beyond the ssl library initialization). merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pg-migrator - src: Add support for specifying port numbers.
Bruce Momjian wrote: Heikki Linnakangas wrote: User Bmomjian wrote: Log Message: --- Add support for specifying port numbers. Hmm, I suppose we can't readily run pg_dump against a stand-alone backend? I am confused by the question; we used to default to the 5432 port numbers. I was thinking that we could sidestep the whole port number question if we didn't try to start up postmaster, and used a stand-alone backend ( postgres --single) instead. -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 11:14 AM, Tom Lane wrote: > Robert Haas writes: Well, you could create PQinitSSLExtended, but, as you say, the use case is pretty narrow... It would help if there were a PQgetLibraryVersion() function. >>> >>> Help how? There is nothing an app can do to work around the problem >>> AFAICS. Or if there were, we should just document it and not change >>> the code --- the use case for this is evidently too narrow to justify >>> complicating libpq's API even more. > >> It would let you assert that you were running against a version of >> libpq that has the functionality that you are attempting to use, thus >> eliminating the risk of silent failure. > > If that's all you want, then PQinitSSLExtended is a perfectly good > answer. Your app will fail to link if you try to use a library > version that hasn't got it. > > I think documenting the workaround is a sufficient answer though. I don't think you can get way with that this time. wsa cleanup was a mainly harmless side effect. This is a nasty 'maybe it works, maybe it doesn't' virtually impossible to debug problem. We caught it on a particular platform (windows, iirc) when deep in our code a mutex call deadlocked when it shouldn't have, after weeks of working ok. debugging nightmare. PQinitSSL is *broken*. It's always been broken. Since it already takes a parameter, I say add a special switch...the backwards compatibility danger doesn't seem too bad. merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Merlin Moncure wrote: > On Tue, Feb 10, 2009 at 11:52 AM, Bruce Momjian wrote: > > Robert Haas wrote: > >> > Would someone remind me why turning off ssl initialization in libpq does > >> > not work for this case? > >> > >> That initializes both libcrypto and libssl. The problem arises when > >> libcrypto has been initialized but libssl has not. > > > > So initialize ssl in your application? What is the problem? > > then libpq doesn't work. Why? -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pg-migrator - src: Add support for specifying port numbers.
Heikki Linnakangas wrote: > User Bmomjian wrote: > > Log Message: > > --- > > Add support for specifying port numbers. > > Hmm, I suppose we can't readily run pg_dump against a stand-alone backend? I am confused by the question; we used to default to the 5432 port numbers. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 11:52 AM, Bruce Momjian wrote: > Robert Haas wrote: >> > Would someone remind me why turning off ssl initialization in libpq does >> > not work for this case? >> >> That initializes both libcrypto and libssl. The problem arises when >> libcrypto has been initialized but libssl has not. > > So initialize ssl in your application? What is the problem? then libpq doesn't work. merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Robert Haas wrote: > > Would someone remind me why turning off ssl initialization in libpq does > > not work for this case? > > That initializes both libcrypto and libssl. The problem arises when > libcrypto has been initialized but libssl has not. So initialize ssl in your application? What is the problem? -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pg-migrator - src: Add support for specifying port numbers.
User Bmomjian wrote: Log Message: --- Add support for specifying port numbers. Hmm, I suppose we can't readily run pg_dump against a stand-alone backend? -- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Andrew Chernow wrote: > Tom Lane wrote: > > > > If that's all you want, then PQinitSSLExtended is a perfectly good > > answer. > > > > How about PQinitCrypto(bool do_init), which would default to TRUE if > never called. PQinitSSL already handles the SSL part, just need control > over initializing crypto. Folks, we need a lot more demand before we add functions to libpq. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Tom Lane wrote: If that's all you want, then PQinitSSLExtended is a perfectly good answer. How about PQinitCrypto(bool do_init), which would default to TRUE if never called. PQinitSSL already handles the SSL part, just need control over initializing crypto. -- Andrew Chernow eSilo, LLC every bit counts http://www.esilo.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
> Would someone remind me why turning off ssl initialization in libpq does > not work for this case? That initializes both libcrypto and libssl. The problem arises when libcrypto has been initialized but libssl has not. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 11:14 AM, Tom Lane wrote: > Robert Haas writes: Well, you could create PQinitSSLExtended, but, as you say, the use case is pretty narrow... It would help if there were a PQgetLibraryVersion() function. >>> >>> Help how? There is nothing an app can do to work around the problem >>> AFAICS. Or if there were, we should just document it and not change >>> the code --- the use case for this is evidently too narrow to justify >>> complicating libpq's API even more. > >> It would let you assert that you were running against a version of >> libpq that has the functionality that you are attempting to use, thus >> eliminating the risk of silent failure. > > If that's all you want, then PQinitSSLExtended is a perfectly good > answer. Your app will fail to link if you try to use a library > version that hasn't got it. I agree. I was thinking that there might not be enough interest in this feature to add an API call just to support it, but I thought PQgetVersion() might be a more general solution. > I think documenting the workaround is a sufficient answer though. I don't have a strong opinion on that one way or the other, but Andrew seemed to be concerned that he was cut-and-pasting a fair amount of code. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Magnus Hagander wrote: > Merlin Moncure wrote: > > On Tue, Feb 10, 2009 at 9:32 AM, Magnus Hagander > > wrote: > >>> How we worked around it: > >>> We solved it by copying the SSL init sequence from fe-secure.c. Doesn't > >>> seem like something that would change very often. So we > >>> init_our_library(), PQinitSSL(0) and then do a few lines of SSL init > >>> stuff. > >> Seems unusual, but certainly not "nearly impossible". But we're back to > >> the discussions around the WSA code - our API provides no really good > >> place to do this, so perhaps we should just clearly document how it's > >> done and how to work around it? > > > > I'm not so sure that's appropriate in this case. I think the existing > > libpq behavior is simply wrong...crypto and ssl are two separate > > libraries and PQinitSSL does not expose the necessary detail. This is > > going to break apps in isolated but spectacular fashion when they link > > to both pq and crypto for different reasons. > > They could, but nobody has reported it yet, so it's not a common scenario. Agreed. Would someone remind me why turning off ssl initialization in libpq does not work for this case? -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Robert Haas writes: >>> Well, you could create PQinitSSLExtended, but, as you say, the use >>> case is pretty narrow... >>> It would help if there were a PQgetLibraryVersion() function. >> >> Help how? There is nothing an app can do to work around the problem >> AFAICS. Or if there were, we should just document it and not change >> the code --- the use case for this is evidently too narrow to justify >> complicating libpq's API even more. > It would let you assert that you were running against a version of > libpq that has the functionality that you are attempting to use, thus > eliminating the risk of silent failure. If that's all you want, then PQinitSSLExtended is a perfectly good answer. Your app will fail to link if you try to use a library version that hasn't got it. I think documenting the workaround is a sufficient answer though. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 10:54 AM, Tom Lane wrote: > Robert Haas writes: >>> We could do that, I guess. However, if an application passes this in to >>> an old version of libpq, there is no way to know that it didn't know >>> about it. > >> Well, you could create PQinitSSLExtended, but, as you say, the use >> case is pretty narrow... > >> It would help if there were a PQgetLibraryVersion() function. > > Help how? There is nothing an app can do to work around the problem > AFAICS. Or if there were, we should just document it and not change > the code --- the use case for this is evidently too narrow to justify > complicating libpq's API even more. It would let you assert that you were running against a version of libpq that has the functionality that you are attempting to use, thus eliminating the risk of silent failure. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Tom Lane wrote: > Robert Haas writes: >>> We could do that, I guess. However, if an application passes this in to >>> an old version of libpq, there is no way to know that it didn't know >>> about it. > >> Well, you could create PQinitSSLExtended, but, as you say, the use >> case is pretty narrow... > >> It would help if there were a PQgetLibraryVersion() function. > > Help how? There is nothing an app can do to work around the problem > AFAICS. Or if there were, we should just document it and not change > the code --- the use case for this is evidently too narrow to justify > complicating libpq's API even more. Sure, there is a way to work around it in this case. By manually initializing ssl+crypto even though you only need crypto, and then tell libpq you have taken care of the initialization. //Magnus -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
Robert Haas writes: >> We could do that, I guess. However, if an application passes this in to >> an old version of libpq, there is no way to know that it didn't know >> about it. > Well, you could create PQinitSSLExtended, but, as you say, the use > case is pretty narrow... > It would help if there were a PQgetLibraryVersion() function. Help how? There is nothing an app can do to work around the problem AFAICS. Or if there were, we should just document it and not change the code --- the use case for this is evidently too narrow to justify complicating libpq's API even more. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
>> maybe invent a special value to PQinitSSL for ssl only init? > > We could do that, I guess. However, if an application passes this in to > an old version of libpq, there is no way to know that it didn't know > about it. Well, you could create PQinitSSLExtended, but, as you say, the use case is pretty narrow... It would help if there were a PQgetLibraryVersion() function. ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] WIP: fix SET WITHOUT OIDS, add SET WITH OIDS
Simon Riggs writes: > But the ability to turn this on/off is not an important one, since even > the people who use OIDs seldom use this. They have CTAS; let them use > it. Well, CTAS is a vastly inferior solution because you'd have to manually move indexes, constraints, FKs, etc to the new table. Plus it's just as slow if not slower than the proposed rewriting code. I think that Andrew's complaint about not putting barriers in the way of removing OIDs would apply pretty strongly to that approach. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
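To make the comparison concrete, here is a rough sketch of what the CTAS route involves instead of the proposed ALTER TABLE ... SET WITH OIDS. The table, column, and constraint names are hypothetical, and the list of objects to recreate is deliberately abbreviated.
[SQL]
-- Adding OIDs via CREATE TABLE AS:
CREATE TABLE foo_oids WITH OIDS AS SELECT * FROM foo;

-- Nothing else follows automatically: every index, constraint, foreign key,
-- default, and trigger must be recreated by hand, for example:
ALTER TABLE foo_oids ADD PRIMARY KEY (id);
ALTER TABLE foo_oids ADD FOREIGN KEY (bar_id) REFERENCES bar (id);
-- ... and so on, plus re-pointing any foreign keys that referenced foo ...

BEGIN;
DROP TABLE foo;
ALTER TABLE foo_oids RENAME TO foo;
COMMIT;
[/SQL]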
Re: [HACKERS] PQinitSSL broken in some use casesf
Merlin Moncure wrote: > On Tue, Feb 10, 2009 at 9:32 AM, Magnus Hagander wrote: >>> How we worked around it: >>> We solved it by copying the SSL init sequence from fe-secure.c. Doesn't >>> seem like something that would change very often. So we >>> init_our_library(), PQinitSSL(0) and then do a few lines of SSL init stuff. >> Seems unusual, but certainly not "nearly impossible". But we're back to >> the discussions around the WSA code - our API provides no really good >> place to do this, so perhaps we should just clearly document how it's >> done and how to work around it? > > I'm not so sure that's appropriate in this case. I think the existing > libpq behavior is simply wrong...crypto and ssl are two separate > libraries and PQinitSSL does not expose the necessary detail. This is > going to break apps in isolated but spectacular fashion when they link > to both pq and crypto for different reasons. They could, but nobody has reported it yet, so it's not a common scenario. > maybe invent a special value to PQinitSSL for ssl only init? We could do that, I guess. However, if an application passes this in to an old version of libpq, there is no way to know that it didn't know about it. //Magnus -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] PQinitSSL broken in some use casesf
On Tue, Feb 10, 2009 at 9:32 AM, Magnus Hagander wrote: >> How we worked around it: >> We solved it by copying the SSL init sequence from fe-secure.c. Doesn't >> seem like something that would change very often. So we >> init_our_library(), PQinitSSL(0) and then do a few lines of SSL init stuff. > > Seems unusual, but certainly not "nearly impossible". But we're back to > the discussions around the WSA code - our API provides no really good > place to do this, so perhaps we should just clearly document how it's > done and how to work around it? I'm not so sure that's appropriate in this case. I think the existing libpq behavior is simply wrong...crypto and ssl are two separate libraries and PQinitSSL does not expose the necessary detail. This is going to break apps in isolated but spectacular fashion when they link to both pq and crypto for different reasons. maybe invent a special value to PQinitSSL for ssl only init? merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Table Partitioning Feature
Hi Robert, > I am a little fuzzy on what you're proposing here, but I think you're > saying that you're only going to support range partitioning on > integers or dates and that you plan to use the text type to store the > integer or date values. FWIW, those don't seem like very good > decisions to me. I think you should aim to support range partitioning > on any combination of a datatype and a less-than operator, similar to > what pg_statistic does for statistics. pg_statistic uses anyarray to > store the datums. > We don't have any strong reason for not using anyarray datatypes. We will consider your suggestion. > I am also somewhat skeptical about the idea of using triggers for > this. I haven't scrutinized the issue in detail, so I may be all > wet... but ISTM that the concerns raised elsewhere about the order in > which triggers can be expected to fire may bite you fairly hard. ISTM > the right semantics are something like this: > > - fire all of the row-level BEFORE triggers on the parent table > (giving up if any return NULL) > - determine the correct child table based on the resulting tuple > - fire all of the row-level BEFORE triggers on the child table (giving > up if any return NULL) > - insert the tuple into the child table > - fire all of the row-level AFTER triggers on the child table... and > possibly also the parent table... not sure about the order > The child tables will just have update triggers to take care of row movements. Invalid Inserts on child tables will be taken care by the constraints. Parent table will have all the triggers (insert/update/del) to redirect the rows to appropriate child tables. The order of execution of triggers can create problems. However, triggers are called in order of there names. So we can use a prefix starting with "large" string value for partition names to make sure that they are called last. > You will also need to fire statement-level triggers on the appropriate > tables, which is a little tricky. Presumably you want the tables on > which the AFTER triggers fire to be the same ones as those on which > the BEFORE triggers fire, but you don't know which child tables you're > actually going to hit until you actually perform the action. Maybe > the right thing to do is fire both sets of triggers on the parent > table and those child tables not excluded by constraint exclusion...? > But I'm not sure about that. I am not sure i understood the problem. But our triggers will know which partitions (child tables) will be subjected to insert/update/del. Thanks, Amit Persitent Systems > > Anyway, getting these types of behavior via triggers may be tricky. > But then again maybe not: I haven't read the code. > > > ...Robert > -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
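To illustrate the trigger-based routing and the name-ordering trick mentioned above, here is a minimal plpgsql sketch. The parent/child tables, the date range, and the "zzz_" prefix are illustrative assumptions only, not part of the patch under discussion.
[SQL]
CREATE TABLE measurement (logdate date, value int);
CREATE TABLE measurement_y2009m02 (
    CHECK (logdate >= DATE '2009-02-01' AND logdate < DATE '2009-03-01')
) INHERITS (measurement);

CREATE OR REPLACE FUNCTION measurement_insert_route() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2009-02-01' AND NEW.logdate < DATE '2009-03-01' THEN
        INSERT INTO measurement_y2009m02 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;  -- the row goes to the child table, not the parent
END;
$$ LANGUAGE plpgsql;

-- Row-level BEFORE triggers fire in name order, so a late-sorting name makes
-- the routing trigger run after any other BEFORE INSERT triggers on the parent.
CREATE TRIGGER zzz_route_measurement
    BEFORE INSERT ON measurement
    FOR EACH ROW EXECUTE PROCEDURE measurement_insert_route();
[/SQL]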
Re: [HACKERS] PQinitSSL broken in some use casesf
Andrew Chernow wrote: > Bruce Momjian wrote: >> Andrew Chernow wrote: >>> I am using a library that links with and initializes libcrypto (ie. >>> CRYPTO_set_locking_callback) but not SSL. This causes problems even >>> when using PQinitSSL(FALSE) because things like SSL_library_init(); >>> are not called (unless I manually call them, copy and paste code from >>> fe-secure.c which may change). If libpq does init ssl, it overwrites >>> (and breaks) the other library's crypto. >>> >>> Shouldn't crypto and ssl init be treated as two different things? If >>> not, how does one determine a version portable way of initializing >>> SSL in a manner required by libpq? Lots of apps using encryption but >>> don't necessarily use ssl, so they need to know how to init ssl for >>> libpq. >> >> I didn't realize they were could be initialized separately, so we really >> don't have an answer for you. This is the first time I have heard of >> this requirement. >> > > Just bringing it to everyones attention. I have no idea how common this > use case is or if it deserves a patch. From your comments, it sounds > uncommon. > > How we came across this: > We have an internal library that links with libcrypto.so but not > libssl.so. The library uses digests and ciphers from libcrypto. It > initializes libcrypto for thread safety and seeds the PRNG. So, one of > our applications is linking with both libpq and this library; which > caused the conflict. > > How we worked around it: > We solved it by copying the SSL init sequence from fe-secure.c. Doesn't > seem like something that would change very often. So we > init_our_library(), PQinitSSL(0) and then do a few lines of SSL init stuff. Seems unusual, but certainly not "nearly impossible". But we're back to the discussions around the WSA code - our API provides no really good place to do this, so perhaps we should just clearly document how it's done and how to work around it? //Magnus -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] HotStandby-Patch and WAL_DEBUG
I'm currently testing HotStandby v9g. Seems like this patch version misses to update guc.c, which still references XLOG_DEBUG when compiled with WAL_DEBUG. This got replaced with XLOG_DEBUG_FLUSH, *_BGFLUSH and *_REDO, resulting in a compile error. Maybe we want to reflect those changes with new developer guc's? -- Thanks Bernd -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] Re: [COMMITTERS] pgsql: Update autovacuum to use reloptions instead of a system catalog,
ITAGAKI Takahiro wrote: > I tested this changes and found two issues: > > 1. fillfactor.* options are silently ignored when the table doesn't have >toast relation. Should we notice the behabior to users? >ex. NOTICE: toast storage parameters are ignored >because the table doesn't have toast relations. You mean "toast.* options"? If so, yes, they are silently ignored. Maybe issuing a warning is not a bad idea. Care to propose a patch? > 2. psql's \d+ doesn't show toast storage parameters. > > Neither \d+ for base tables nor toast relations show toast.* parameters > though there are some values in pg_class.reloptions. Yeah, this is a bug in psql. I neglected to update \d+ when I committed the namespace patch. I'll investigate. -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
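For reference, a short illustration of the behaviour described above; the table name is hypothetical. Until \d+ is fixed, pg_class.reloptions can be inspected directly.
[SQL]
-- Setting a toast.* option; if some_table has no TOAST relation, the
-- setting is currently ignored without any message.
ALTER TABLE some_table SET (toast.autovacuum_enabled = false);

-- \d+ does not show toast.* parameters yet, but they are visible here:
SELECT c.relname, c.reloptions, t.reloptions AS toast_reloptions
FROM pg_class c
LEFT JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'some_table';
[/SQL]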
Re: [HACKERS] temporarily stop autovacuum
Tatsuo Ishii wrote: > Hi, > > Is there any way to stop autovacuum temporarily? (other than edit > postgresql.conf and reload it) Hmm, no, that's the only way. I'm not sure that this calls for a change in autovacuum itself; it seems to be that what we really need is the ability to change postgresql.conf settings from the SQL interface. This has been discussed at length elsewhere, and I think we need to bite the bullet eventually.
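For what it is worth, two partial workarounds with what exists today (the table name is illustrative): reloading after editing postgresql.conf, and the per-table reloption from the adjacent reloptions thread, which is not a global stop but does keep autovacuum away from one table.
[SQL]
-- After setting autovacuum = off in postgresql.conf, the reload can be
-- triggered from SQL (as superuser):
SELECT pg_reload_conf();

-- Per-table only: disable autovacuum for a single table via reloptions,
-- and restore the default afterwards.
ALTER TABLE busy_table SET (autovacuum_enabled = false);
ALTER TABLE busy_table RESET (autovacuum_enabled);
[/SQL]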
Re: [HACKERS] [COMMITTERS] pgsql: Update autovacuum to use reloptions instead of a system catalog,
ITAGAKI Takahiro escreveu: > 1. fillfactor.* options are silently ignored when the table doesn't have >toast relation. Should we notice the behabior to users? >ex. NOTICE: toast storage parameters are ignored >because the table doesn't have toast relations. > It was discussed and rejected [1]. > I think we should show toast.* parameters in \d+ for base tables > because it has consistency; we set them at ALTER TABLE for base tables. > +1. That's because the psql patch was applied _before_ the namespace patch. It seems we will need to hardcode the namespace notion at psql code. [1] http://archives.postgresql.org/pgsql-hackers/2009-02/msg00042.php -- Euler Taveira de Oliveira http://www.timbira.com/ -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Table Partitioning Feature
Thanks for your feedback, Emmanuel. Here are my comments: On 2/10/09, Emmanuel Cecchet wrote: > Hi Amit, > > I will be traveling until next Tuesday and will have no access to email so > don't be surprised if I don't follow up this week. > The overall approach seems sound. The metadata table should help also for > DDL to find out overlapping ranges or duplicate list entries. We are checking for overlaps in the partition keys before creating partitions. > So far, I have not tried to use the SPI interface from a C trigger so I > don't see any disadvantage yet. We would have to assess the performance to > make sure it's not going to be a show stopper. > I think that the main issue of the trigger approach is that other triggers > might interfere. The 'partition trigger' must be the last of the 'before > insert' triggers and if the destination (child) table has a trigger, we must > ensure that this trigger is not going to require a new routing. > Another issue is the result that will be returned by insert/copy commands > if all tuples are moved to other tables, the result will be 0. We might want > to have stats that would collect where tuples where moved for a particular > command (I don't know yet what would be the best place to collect these > stats but they could probably be updated by the trigger). Row movements will be done by firing deletes and inserts. We will investigte on how these stats can be maintained and displayed. > Also would the trigger be attached to all tables in the hierarchy or only > to the top parent? > What kind of query would you use with more than 1 level of inheritance > (e.g. parent=year, child=month, grand-child=day)? It looks like we have to > parse the leaves of the graph but intermediate nodes would help accelerating > the search. > We haven't yet planned for supporting multi-level partitioning. However, the pg_partition table can be extented to store "partlevel" column (to represent depth of partition from the root), and we should just select the leaf level partitions in the SQL that finds target partition. (This is with the assumption that only leaf level partitions will have the data.) > An alternative approach (I haven't assessed the feasibility yet) would be > to try to call the query planner. If we ask to select the partition value of > the tuple, the query planner should return the table it is going to scan (as > in EXPLAIN SELECT * FROM t WHERE key=$1). > That's a good idea. We will have to anyway write this code for planner module to find relevant partitions for 'SELECT' queries. Another question i have is - should we create a separate C file and shared library for the partition trigger functions, or can we bundle it with one of the existing libraries? Thanks, Amit > Let me know what you think, > > Emmanuel > > > > We are considering to following approach: > > 1. metadata table pg_partitions is defined as follows: > > CATALOG(pg_partitions,2336) BKI_WITHOUT_OIDS > > { > >Oid partrelid; // partition table Oid > >Oid parentrelid; // Parent table Oid > >int4parttype; // Type of partition, list, hash, range > >Oidpartkey;// partition key Oid > >Oidkeytype; /// type of partition key. > >int4keyorder /// order of the key in multi-key partitions. > >textmin; > >textmax; // min and max for range parti > >text[] list; > >inthash; // hash value > > } FormData_pg_partitions; > > > > > > 2. C triggers will fire a query on this table to get the relevant > > partition of the inserted/updated data using SPI interface. 
The query > > will look something like (for range partitioning) > > > > select min(partrelid) > > from pg_partitions > > where parentrelid = 2934 // we know this value > > and ( > > ( $1 between to_int(min ) and to_int(max) and > > keyorder = 1) OR > > ($2 between to_date (min) and to_date (max) and > > keyorder =2 ) > > > > ) > > group by > > parentrelid > > having > > count(*) = > > > > $1, $2, ... are the placeholders of the actual partition key values of > > trigger tuple. > > > > Since we know the type of partition keys, and the parentrelid, this > > kind of query string can be saved in another table say, pg_part_map. > > And its plan can be parsed once and saved in cache to be reused. > > Do you see any issue with using SPI interface within triggers? > > > > The advantage of this kind of approah is that trigger code can be made > > genric for any kind of partition table. > > > > Thanks, > > Amit > > Persistent Systems, > > www.persistentsys.com > > > > > > > > > > > > On 1/23/09, Emmanuel Cecchet wrote: > > > > > > > Amit, > > > > > > You might want to put this on the > > > http://wiki.postgresql.org/wiki/Table_partitioning wiki > > > page. > > > How does your timeline look like for this implementation? > > > I would be happy to contribute C triggers to your implementation. From > what > > > I unders
Re: [HACKERS] SE-PostgreSQL and row level security
BogDan, Thanks for your interesting. At first, I would like to confirm whether you know the row-level security feature is postponed to v8.5, or not. Thus, the latest patch set (toward v8.4 development cycle) does not contain the row-level one. Please note that the following my comments assume the separated feature. BogDan Vatra wrote: Hi, I need SE-PostgreSQL *ONLY* for row level security, but AFAIK SE-PostgreSQL works only on SELinux. This, for me, is unacceptable, because I want to use row level security on windows too. I don't need all that fancy security stuffs. In my understanding, the row-level ACLs feature (plus a bit enhancement) can help your requirements. I developed it with SE-PostgreSQL in parallel, but also postponed to v8.5 series. It enables to assign database ACLs on individual tuples, and filter out violated tupled from the result set of SELECT, UPDATE and DELETE. So, it is not very hard. At least, we already have an implementation. :) > -real cross platform row level security, this seems to be very hard to do. I want to share with you my "security" experience, my English is not so good so, to be more clear, I give you an example and show you what I do instead row level security. In this example I need row level security because I want an user who is logged in to see only a portion of a table (some filtered rows). Let's say we have 2 tables: 1. customers 2. customers_products and I want to let the user to select,insert, update or delete only what they operate. I guess you concerned about: - It is necessary to set up many trigger functions for each tables, which provide similar functionality. - Users have to specify different names between reference and modification. And, you want to make clear how the row-level access control resolves it. Is it OK? Your requirement is a simple separation between different users. Thus, what we have to do is: - When a tuple is inserted, the backend automatically assigns an ACL which allows anything for the current user, but nothing for others. - So, when user tries to select, update and delete this table, tuples which inserted by others to be filtered out from the result set or affected rows. - Normal users are disallowed to change automatically assigned ACLs. (I don't think you want to restrict superuser's operations.) The row-level ACLs have a functionality named as "default acl". It enables table's owner to specify ACLs to be assigned to newly inserted tuple, like: CREATE TABLE customer_products ( id serial, : ) WITH (default_row_acl='{rwd=kaigai}'); Currently, it does not allow replacement rules like "{rwd=%current_user}", but it is not a hard enhancement. If such an ACL is assigned, the tuple is not visible from other users without any triggers. For example, please consider when a user "kaigai" insert a tuple into "customer_products", the "{rwd=kaigai}" is assigned to the tuple, but the "{rwd=bogdan}" is assigned when a user "bogdan" do same thing. In this case, any users must not be an owner of the table, because owner of the table is allowed to change the ACLs. This is an aside. If you want different access controls, like read-only for other's tuples but read-writable for own tuples, it will be possible with different default acl configuration. Does it help you to understand about the row-level security currently we are in development? 
Thanks, [SQL] CREATE TABLE customers -- this is my "customers" table ( id serial, curstomer_name text, login_user name DEFAULT session_user, -- the user who have the permission to see this row PRIMARY KEY (id) ) ; ALTER TABLE customers OWNER TO postgres; -- the table is "invisible" to normal users GRANT UPDATE, INSERT, DELETE, TRIGGER ON TABLE customers TO public; -- but they can UPDATE, INSERT, DELETE and TRIGGER the table. GRANT USAGE ON TABLE customers_id_seq TO public; -- this is my solution to "row level security", user can query this view only, the table is "invisible" to them. CREATE OR REPLACE VIEW view_customers AS select * from customers where login_user=session_user; CREATE TABLE customers_products ( id serial, id_customer integer NOT NULL, -- the customer id product_name text NOT NULL, login_user name DEFAULT session_user, -- the user who have the permission to see this row PRIMARY KEY (id), FOREIGN KEY (id_customer) REFERENCES customers (id)ON UPDATE CASCADE ON DELETE CASCADE --here it will be more useful if I can REFERENCE the view. ); ALTER TABLE customers_products OWNER TO postgres; -- the table is "invisible" to normal users GRANT UPDATE, INSERT, DELETE, TRIGGER ON TABLE customers_products TO public; -- but they can only UPDATE, INSERT, DELETE and TRIGGER the table. GRANT USAGE ON TABLE customers_products_id_seq TO public; -- this is my solution to "row level security", user can query this view only, the table is "invisible" to them. CREATE OR REPLACE VIEW view_customers_products AS
Re: [HACKERS] WIP: fix SET WITHOUT OIDS, add SET WITH OIDS
On Sun, 2009-02-08 at 11:51 -0500, Tom Lane wrote: > Now, if you want to argue that we should get rid of SET WITHOUT OIDS > altogether, I'm not sure I could dispute it. But if we have the > ability > to do that ISTM we should offer the reverse too. We should keep the ability to have OIDs. Some people use it, though not many. But the ability to turn this on/off is not an important one, since even the people who use OIDs seldom use this. They have CTAS; let them use it. So I say let's drop support now for ALTER TABLE SET WITHOUT OIDS and don't bother to implement SET WITH OIDS. Less weird corners in the software means fewer bugs. -- Simon Riggs www.2ndQuadrant.com PostgreSQL Training, Services and Support -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] SE-PostgreSQL and row level security
Hi, I need SE-PostgreSQL *ONLY* for row level security, but AFAIK SE-PostgreSQL works only on SELinux. This, for me, is unacceptable, because I want to use row level security on windows too. I don't need all that fancy security stuffs. I want to share with you my "security" experience, my English is not so good so, to be more clear, I give you an example and show you what I do instead row level security. In this example I need row level security because I want an user who is logged in to see only a portion of a table (some filtered rows). Let's say we have 2 tables: 1. customers 2. customers_products and I want to let the user to select,insert, update or delete only what they operate. [SQL] CREATE TABLE customers -- this is my "customers" table ( id serial, curstomer_name text, login_user name DEFAULT session_user, -- the user who have the permission to see this row PRIMARY KEY (id) ) ; ALTER TABLE customers OWNER TO postgres; -- the table is "invisible" to normal users GRANT UPDATE, INSERT, DELETE, TRIGGER ON TABLE customers TO public; -- but they can UPDATE, INSERT, DELETE and TRIGGER the table. GRANT USAGE ON TABLE customers_id_seq TO public; -- this is my solution to "row level security", user can query this view only, the table is "invisible" to them. CREATE OR REPLACE VIEW view_customers AS select * from customers where login_user=session_user; CREATE TABLE customers_products ( id serial, id_customer integer NOT NULL, -- the customer id product_name text NOT NULL, login_user name DEFAULT session_user, -- the user who have the permission to see this row PRIMARY KEY (id), FOREIGN KEY (id_customer) REFERENCES customers (id)ON UPDATE CASCADE ON DELETE CASCADE --here it will be more useful if I can REFERENCE the view. ); ALTER TABLE customers_products OWNER TO postgres; -- the table is "invisible" to normal users GRANT UPDATE, INSERT, DELETE, TRIGGER ON TABLE customers_products TO public; -- but they can only UPDATE, INSERT, DELETE and TRIGGER the table. GRANT USAGE ON TABLE customers_products_id_seq TO public; -- this is my solution to "row level security", user can query this view only, the table is "invisible" to them. CREATE OR REPLACE VIEW view_customers_products AS select * from customers_products where login_user=session_user; -- This trigger is executed every time you insert,update or delete from table. CREATE OR REPLACE FUNCTION customers_products_row_security() RETURNS trigger AS $BODY$ BEGIN IF (TG_OP = 'DELETE') THEN if OLD.id_customer NOT IN (SELECT id from view_customers)THEN RETURN NULL; END IF; RETURN OLD; END IF; IF NEW.id_customer NOT IN (SELECT id from view_customers) THEN RETURN NULL; END IF; NEW.login_user:=session_user; RETURN NEW; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER; ALTER FUNCTION customers_products_row_security() OWNER TO postgres; GRANT EXECUTE ON FUNCTION customers_products_row_security() TO public; CREATE TRIGGER customers_products_row_security_trigger BEFORE INSERT OR UPDATE OR DELETE ON customers_products FOR EACH ROW EXECUTE PROCEDURE public.customers_products_row_security(); [/SQL] Another trigger should be created on customers table but you've got the point. As you can see there is a lot of code and possibility to make many mistakes. What I my humble wish? I wish I can make this more simple and elegant. Here I see 2 solutions. -real cross platform row level security, this seems to be very hard to do. 
- the possibility to create "FOREIGN KEY"s that reference views, or the possibility to "CHECK" a cell of a row with a subquery, in our example something like this: "CHECK (id_customer IN (select id from view_customers))". If I have this feature, I won't have to create those triggers anymore. I hope this is simpler for you to create. Yours, BogDan Vatra, -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Synch Replication
Hi, Thanks. Now it works. Few questions: 1) Do you have an idea by when the Hot standby patch and Sync replication patch will be integrated? 2) I have used 1 physical machine to try current patch of synchronous replication. Is it OK for me to try with 2 separate machines for Primary & Standby servers 3) Do you have test programs that can used for synchronous replication testing? 4) I'am thinking of trying load/performance tests as well. What do you feel? Will it be too early to do this test? regards, Niranjan > -Original Message- > From: ext Fujii Masao [mailto:masao.fu...@gmail.com] > Sent: Monday, February 09, 2009 7:47 PM > To: K, Niranjan (NSN - IN/Bangalore) > Cc: PostgreSQL-development > Subject: Re: Synch Replication > > Hi Niranjan, > > On Mon, Feb 9, 2009 at 10:39 PM, K, Niranjan (NSN - > IN/Bangalore) wrote: > > But after I login to replication database (note the active I had > > brought it down earlier & created a finish.trigger), I still cannot > > see the table that was created on the primary. > > Also please note that the LSN had changed after replication > in the ps > > command. > > Did you create the table in 'replication' database? If not, > please connect to the correct database which includes the table. > In log-shipping, the database objects are basically identical > between the primary and the standby server. > > Regards, > > -- > Fujii Masao > NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source > Software Center > -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers