Re: [HACKERS] BugTracker
[EMAIL PROTECTED] (Peter Eisentraut) writes:
> On Wednesday, 16 August 2006 at 14:10, Robert Treat wrote:
>> I'm not sure I follow this, since currently anyone can email the bugs list or use the bug-report email form on the website. Are you looking to increase the barrier for bug reporting?
> Only a small fraction of the new posts on pgsql-bugs are actually bugs. Most are confused or misdirected users. I don't want to raise that barrier. But I want a higher barrier before something is recorded in the bug tracking system.
Seems to me that a *bit* of a barrier might not be a bad thing... If purported bugs had to be acknowledged before going into the bug tracker, that wouldn't seem a bad thing. It would mean that the frequent "I don't understand what I'm doing and didn't read the documentation" reports could be quickly triaged away, which strikes me as an important prerequisite for further automating things.
-- 
select 'cbbrowne' || '@' || 'ntlug.org';
http://cbbrowne.com/info/sap.html
FLORIDA: Relax, Retire, Re Vote.
---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] news server does not respond
Marc G. Fournier wrote:
> Fixed, sorry for delay ...
Good, thank you. But I've already switched back to IMAP, with subfolders and automatic filtering. It has the advantage of being available from any IMAP-capable client _and_ of saving the flags. Looks like the news server is not used that much, if I was the only one complaining...
Regards, Markus
Re: [HACKERS] Unable to post to -patches (was: Visual C++ build files)
>> a.hub.org[200.46.208.251], delay=1, status=sent (250 2.7.1 Ok, discarded, id=258 35-09 - BANNED: P=p003,L=1,M=multipart/mixed | P=p002,L=1/2,M=application/x-gzip ,T=gz,N=vcbuild.tar.gz | P=p...)
>> Seems -patches is rejecting any mail with attached .tar.gz files, if I read that correctly?
> Hm, I just managed to send a patch labeled application/octet-stream without any problem. Not sure what the point is in banning application/x-gzip, unless that's a common virus signature? I doubt it would be, and even if it is, it's still not a very smart thing to do IMHO :) Anyway, try the other MIME type.
Hmm. I can't really control the MIME type out of my system (remember, running Exchange here..). But I guess I can rename the file ;-) Attempting here to get it into the archives at least..
//Magnus
[note: the file is a .tar.gz even though it doesn't look that way]
vcbuild.tar.gz.bin Description: vcbuild.tar.gz.bin
Re: [HACKERS] news server does not respond
Markus Schiltknecht wrote:
> Marc G. Fournier wrote:
>> Fixed, sorry for delay ...
> Good, thank you. But I've already switched back to IMAP, with subfolders and automatic filtering. It has the advantage of being available from any IMAP-capable client _and_ of saving the flags. Looks like the news server is not used that much, if I was the only one complaining...
Uhm, I am using it as well, and I noticed a few others on IRC complaining too. All the better that it is back in action :)
regards, Lukas
Re: [HACKERS] [PATCHES] Use of backslash in tsearch2
> Teodor, are the new attached regression results correct? If so, I will apply the patch and update the expected file.
The patch isn't complete; here is a simple test (values taken from regression.diffs):

# create table tt (tv tsvector, tq tsquery);
# insert into tt values (E'''1 \\''2''', NULL);
# insert into tt values (E'''1 \\''2''3', NULL);
# insert into tt values ( E'''1 \\''2'' 3', NULL);
# insert into tt values ( E'''1 \\''2'' '' 3'' 4 ', NULL);
# insert into tt values ( NULL, E'''1 \\''2''');
# insert into tt values ( NULL, E'''1 \\''2''');
# insert into tt values ( NULL, E'1(''2''('' 4''(\\|5 | ''6 \\'' !|'')))');
# insert into tt values ( NULL, E'1(''2''('' 4''(\\|5 | ''6 \\'' !|'')))');

Then try to dump the table and restore it:

ERROR: syntax error
CONTEXT: COPY tt, line 5, column tq: '1 ''2'

PS: I'm not subscribed to -patches, so I post to -hackers.
-- 
Teodor Sigaev E-mail: [EMAIL PROTECTED]
WWW: http://www.sigaev.ru/
Re: [HACKERS] BugTracker (Was: Re: 8.2 features status)
Kenneth Marshall wrote:
> RT is easy to set up/configure/use and works well with PostgreSQL as the backend.
RT works with Postgres, but I wouldn't say well. All the queries in RT are generated by a query generator, due to a naive obsession with database independence. They've achieved database independence at the cost of all the queries being brain-dead. Fixing the query generator would be a pretty big job.
Drew
Re: [HACKERS] pgstattuple extension for indexes
On Aug 17, 2006, at 4:10 PM, Martijn van Oosterhout wrote:
> On Thu, Aug 17, 2006 at 02:54:20PM -0500, Jim C. Nasby wrote:
>> On Thu, Aug 17, 2006 at 02:23:48PM +0200, Martijn van Oosterhout wrote:
>>> On Thu, Aug 17, 2006 at 12:55:28PM +0900, ITAGAKI Takahiro wrote:
>>>> But the method has the above problem. So I suggest testing whether or not the right link points to the next adjacent page: if (opaque->btpo_next != P_NONE && opaque->btpo_next != blkno + 1) stat->fragments++; Which method do you think is better? Or do you have other ideas?
>> Ok, fine... expand the example out to an index that's not trivial in size. Even with read-ahead, once you get to a few megs (which is obviously not that big), you're seeking.
> Well, mostly I'm just saying that only matching on the next block number is going to give unrealistically low numbers. We can't ignore OS-level caching; the way Postgres works relies on it in many ways.
While I agree that *users* must take caching into account, I don't think we should be fudging the fragmentation numbers. For starters, we have absolutely no idea how much caching is actually happening. We should just report the raw numbers and let users draw their own conclusions. Doing otherwise makes the stat far less useful.
-- 
Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED]
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Re: BugTracker (Was: Re: [HACKERS] 8.2 features status)
On Aug 17, 2006, at 3:40 PM, Alvaro Herrera wrote:
>> The searching capabilities in debbugs are, well, non-existent, which is a real problem in my mind.
> Well, we can set up our own indexing, like Oleg and Teodor have done in http://www.pgsql.ru/
That seems like quite a hack for something that should be built in... it also severely limits searchability. For example, it's very important to be able to do things like ignore closed bugs when you're searching.
-- 
Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED]
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
[HACKERS] seahorse again failing
seahorse just failed again with one of the dreaded "permission denied" errors that seem to get reported sporadically on the lists:
http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=seahorse&dt=2006-08-22%2002:30:01
We tend to attribute those to AV and other security-related software - except that seahorse does not have (and never had) anything like that installed. seahorse is just a stock Windows XP box (with all patches and service packs applied) and msys/mingw. There is no other software installed, nor ever was - so maybe there really is an underlying issue causing those sporadic permission denied errors?
Stefan
Re: BugTracker (Was: Re: [HACKERS] 8.2 features status)
On 8/17/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
> Alvaro Herrera wrote:
>> Have you tried to use debbugs?
> If you can find up-to-date source code for debbugs, we might continue that line of thought.
http://www.mail-archive.com/debian-debbugs@lists.debian.org/msg01266.html ( bzr get http://bugs.debian.org/debbugs-source/mainline/ )
> The searching capabilities in debbugs are, well, non-existent, which is a real problem in my mind.
As it's mail-based, it delegates searching to mail archive search tools.
-- 
marko
Re: [HACKERS] Open 8.2 items
Alvaro Herrera wrote:
> Bruce Momjian wrote:
>> I will try to generate a list of open 8.2 items in 7-10 days so we can start focusing on beta.
> FYI, you have three emails about updatable views in the queue, but you are missing the one I sent today, which contains an updated patch that is substantially better than those in the queue. If somebody is going to work on that item, I suggest he start from that one. My patch is at http://archives.postgresql.org/pgsql-patches/2006-08/msg00255.php
OK, I moved them over.
-- 
Bruce Momjian [EMAIL PROTECTED]
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [PATCHES] COPY view
Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
>> OK, based on this feedback, I am adding COPY VIEW to the patches queue.
> I think we have other things that demand our attention more than a half-baked feature.
Well, the patch was submitted in time, and it is a desired feature. If we want to hold it for 8.3 due to lack of time, we can, but I don't think we can decide now that it must wait.
-- 
Bruce Momjian [EMAIL PROTECTED]
EnterpriseDB http://www.enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] [PATCHES] COPY view
Bruce Momjian wrote:
> Tom Lane wrote:
>> Bruce Momjian [EMAIL PROTECTED] writes:
>>> OK, based on this feedback, I am adding COPY VIEW to the patches queue.
>> I think we have other things that demand our attention more than a half-baked feature.
> Well, the patch was submitted in time, and it is a desired feature. If we want to hold it for 8.3 due to lack of time, we can, but I don't think we can decide now that it must wait.
Well, I thought the agreed approach was to allow COPY from arbitrary expressions, without the need to go through the extra CREATE VIEW step?
Stefan
Re: [HACKERS] [PATCHES] COPY view
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
> Bruce Momjian wrote:
>> Well, the patch was submitted in time, and it is a desired feature. If we want to hold it for 8.3 due to lack of time, we can, but I don't think we can decide now that it must wait.
> Well, I thought the agreed approach was to allow COPY from arbitrary expressions, without the need to go through the extra CREATE VIEW step?
Exactly. This is not the feature that was agreed to. Just because we have a patch for it doesn't mean that we have to put it in. If we do put it in, we'll be stuck carrying that feature forever, even after someone gets around to doing it right.
regards, tom lane
Re: [HACKERS] [PATCHES] COPY view
Stefan Kaltenbrunner wrote:
> Bruce Momjian wrote:
>> Tom Lane wrote:
>>> Bruce Momjian [EMAIL PROTECTED] writes:
>>>> OK, based on this feedback, I am adding COPY VIEW to the patches queue.
>>> I think we have other things that demand our attention more than a half-baked feature.
>> Well, the patch was submitted in time, and it is a desired feature. If we want to hold it for 8.3 due to lack of time, we can, but I don't think we can decide now that it must wait.
> Well, I thought the agreed approach was to allow COPY from arbitrary expressions, without the need to go through the extra CREATE VIEW step?
Well, it's been a bit of a mess, unfortunately, and I can understand people feeling aggrieved. I think there is general agreement that we want to be able to do:
COPY (SELECT ... ) TO ...
When we have that, it would not be unreasonable to have a special case for views which would transparently rewrite COPY viewname TO as COPY (SELECT * FROM viewname) TO. So, from a user-visibility angle, we would not necessarily be adopting a feature we don't want in the long run. The issue seems to be that in adopting the present patch we would be incorporating some code we will essentially have to abandon when we get the feature we all really want, which we hope will be available for 8.3. On that basis I can certainly appreciate Tom's reluctance to adopt the patch. It's a close call. On balance I'd be inclined to accept the patch if it reviews OK, even though we will throw the code away soon (we hope).
cheers, andrew
Re: BugTracker (Was: Re: [HACKERS] 8.2 features status)
Marko Kreen wrote:
> On 8/17/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
>> Alvaro Herrera wrote:
>>> Have you tried to use debbugs?
>> If you can find up-to-date source code for debbugs, we might continue that line of thought.
> http://www.mail-archive.com/debian-debbugs@lists.debian.org/msg01266.html ( bzr get http://bugs.debian.org/debbugs-source/mainline/ )
>> The searching capabilities in debbugs are, well, non-existent, which is a real problem in my mind.
> As it's mail-based, it delegates searching to mail archive search tools.
Why are we even debating a system when it has been reported that the authors believe it is completely unsuitable for use by the PostgreSQL project?
cheers, andrew
Re: [HACKERS] seahorse again failing
Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
> seahorse just failed again with one of the dreaded "permission denied" errors that seem to get reported sporadically on the lists: seahorse is just a stock Windows XP box (with all patches and service packs applied) and msys/mingw. There is no other software installed, nor ever was - maybe there really is an underlying issue causing those sporadic permission denied errors?
How repeatable is it? It would be interesting to know the actual underlying Windows error code --- I see that win32error.c maps several different codes to EACCES.
regards, tom lane
Re: [HACKERS] seahorse again failing
Tom Lane wrote:
> Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
>> seahorse just failed again with one of the dreaded "permission denied" errors that seem to get reported sporadically on the lists: seahorse is just a stock Windows XP box (with all patches and service packs applied) and msys/mingw. There is no other software installed, nor ever was - maybe there really is an underlying issue causing those sporadic permission denied errors?
> How repeatable is it? It would be interesting to know the actual underlying Windows error code --- I see that win32error.c maps several different codes to EACCES.
It may be a good idea to put an elog(LOG) with the error code in the failure path of AllocateFile. This particular problem must be coming from FindMyDatabase (or maybe from RebuildFlatFiles when called from PostgresMain?)
-- 
Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] seahorse again failing
Tom Lane wrote:
> Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
>> seahorse just failed again with one of the dreaded "permission denied" errors that seem to get reported sporadically on the lists: seahorse is just a stock Windows XP box (with all patches and service packs applied) and msys/mingw. There is no other software installed, nor ever was - maybe there really is an underlying issue causing those sporadic permission denied errors?
> How repeatable is it?
This seems to be the second time seahorse has managed to trigger it (the first was a manual build a while ago) - so unfortunately not very repeatable :-(
> It would be interesting to know the actual underlying Windows error code --- I see that win32error.c maps several different codes to EACCES.
Yeah - is there a way to log the actual Windows error code too?
Stefan
Re: [HACKERS] ISBN/ISSN/ISMN/EAN13 module
Michael Glaesemann wrote:
> On Aug 22, 2006, at 2:52, Bruce Momjian wrote:
>> Do we want to replace our /contrib/isbn with this, or have it on pgfoundry?
> If contrib/isbn is made obsolete by the pgfoundry ISBN/ISSN/ISMN/EAN13 code, then unless there's a compelling reason for the pgfoundry code to be part of the distribution, I recommend removing contrib/isbn from the core distribution and pointing people to pgfoundry.
AFAIK there is no code currently on pgfoundry. We had a proposed project a little while back, but the proposer got impatient and decided to withdraw it. I think that having this or a similar module in contrib has some advantages, as an example of how to do user-defined types, and as a check that pgxs is working in such cases.
cheers, andrew
Re: [HACKERS] BugTracker (Was: Re: 8.2 features status)
> Kenneth Marshall wrote:
>> RT is easy to set up/configure/use and works well with PostgreSQL as the backend.
> RT works with Postgres, but I wouldn't say well. All queries in RT are generated by a query generator, due to a naive obsession with database independence. They've achieved database independence at the cost of all the queries being brain-dead. Fixing the query generator would be a pretty big job.
We use RT with PostgreSQL for all internal IT requests and for development/support/doc tasks on a couple of products, and there has never been a problem. Are the queries optimal? No. But the alternative might have been MySQL-only, and that would be worse. I can't really give a fair estimate on performance, because I'm running it on a PIII at 800MHz alongside several other things. But it's fast enough that I'm not screaming for a hardware upgrade.
Regards, Paul Bort
Re: [HACKERS] seahorse again failing
Alvaro Herrera [EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> It would be interesting to know the actual underlying Windows error code --- I see that win32error.c maps several different codes to EACCES.
> It may be a good idea to put an elog(LOG) with the error code in the failure path of AllocateFile.
That seems like a plan to me. I had been thinking of making win32error.c itself log the conversions, but that would not provide any context information. AllocateFile could log the file name along with the code, which should be enough info to associate a particular log entry with the actual failure. Note you should probably save and restore errno around the elog call, just to be safe. Could someone with access to Windows code and test this?
regards, tom lane
Re: [HACKERS] Autovacuum on by default?
Going back on-list...
On Tue, Aug 22, 2006 at 08:47:04AM -0400, Alvaro Herrera wrote:
> Jim Nasby wrote:
>> On Aug 17, 2006, at 3:19 PM, Alvaro Herrera wrote:
>>> Nevermind -- it's just that if you vacuum a table which you haven't touched (insert, update, delete) since the last stats reset, then the vacuum info isn't recorded, because we refuse to create the pgstat entry for the table.
>> Have you changed this?
> No ...
>> ISTM that it should go ahead and create the pgstat entry...
> What for?
While on the surface it makes sense not to have a stat entry for a table with no activity (since no activity means no need to vacuum), there are two problems: 1) this doesn't exactly meet the test of "least surprise" - if the table is vacuumed for any reason (even manually), we should record the info; and 2) if there's a bunch of activity on a table but the stats are reset before a vacuum is run on it, then after the vacuum the user will still be left thinking that the table needs to be vacuumed.
-- 
Jim C. Nasby, Database Architect [EMAIL PROTECTED]
512.569.9461 (cell) http://jim.nasby.net
Re: [HACKERS] seahorse again failing
Tom Lane wrote:
> Alvaro Herrera [EMAIL PROTECTED] writes:
>> Tom Lane wrote:
>>> It would be interesting to know the actual underlying Windows error code --- I see that win32error.c maps several different codes to EACCES.
>> It may be a good idea to put an elog(LOG) with the error code in the failure path of AllocateFile.
> That seems like a plan to me. I had been thinking of making win32error.c itself log the conversions, but that would not provide any context information. AllocateFile could log the file name along with the code, which should be enough info to associate a particular log entry with the actual failure. Note you should probably save and restore errno around the elog call, just to be safe. Could someone with access to Windows code and test this?
All this seems good and sensible. I am just a little suspicious of seahorse, though, as it is running on a Xen VM. I wonder if we should add a VM column to the buildfarm machine specs.
cheers, andrew
Re: [HACKERS] seahorse again failing
Andrew Dunstan wrote:
> Tom Lane wrote:
>> Alvaro Herrera [EMAIL PROTECTED] writes:
>>> Tom Lane wrote:
>>>> It would be interesting to know the actual underlying Windows error code --- I see that win32error.c maps several different codes to EACCES.
>>> It may be a good idea to put an elog(LOG) with the error code in the failure path of AllocateFile.
>> That seems like a plan to me. I had been thinking of making win32error.c itself log the conversions, but that would not provide any context information. AllocateFile could log the file name along with the code, which should be enough info to associate a particular log entry with the actual failure. Note you should probably save and restore errno around the elog call, just to be safe. Could someone with access to Windows code and test this?
> All this seems good and sensible. I am just a little suspicious of seahorse, though, as it is running on a Xen VM.
Indeed, seahorse is running under Xen - though I have no reason to believe that Xen is at fault: the event log shows absolutely no sign of any trouble, nor does the hypervisor. The only thing I would wonder about is that the VM might cause some subtle timing differences wrt disk access or scheduling (Xen is not exceptionally bright about CPU scheduling, so it may starve some guests at times). Other than that, I do seem to recall that we have had a number of weird-looking permission denied errors on win32 - improving the error reporting might help to find out whether there is a pattern involved somewhere.
> I wonder if we should add a VM column to the buildfarm machine specs.
That would be fine with me - maybe we could add an LDAP column too, since we just had somebody failing after the ldap-on-windows fix?
Stefan
Re: [HACKERS] seahorse again failing
On Tue, Aug 22, 2006 at 10:19:38AM -0400, Tom Lane wrote:
>> It may be a good idea to put an elog(LOG) with the error code in the failure path of AllocateFile.
> That seems like a plan to me. I had been thinking of making win32error.c itself log the conversions, but that would not provide any context information. AllocateFile could log the file name along with the code, which should be enough info to associate a particular log entry with the actual failure.
Would it be possible to get errcode_for_file_access() to report the results of GetLastError() for Windows, or would that produce spurious results? At DEBUG level maybe?
Have a nice day,
-- 
Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] seahorse again failing
Martijn van Oosterhout kleptog@svana.org writes:
> Would it be possible to get errcode_for_file_access() to report the results of GetLastError() for Windows, or would that produce spurious results? At DEBUG level maybe?
It would have to be at LOG level, because otherwise it wouldn't get logged at all with the default settings that the buildfarm is using. Also, I think that errcode_for_file_access() may run too late, ie, we couldn't be sure that we were looking at the same value of GetLastError. This could be dealt with by saving GetLastError into the error data structure at the same place we save errno, but that's starting to get a bit invasive for a temporary-investigation kluge. BTW, whoever writes this needs to check that it doesn't change the default regression test results ...
regards, tom lane
Re: [HACKERS] Autovacuum on by default?
Jim C. Nasby [EMAIL PROTECTED] writes:
> If there's a bunch of activity on a table but the stats are reset before a vacuum is run on it, and then a vacuum is run, the user will still be left thinking that the table needs to be vacuumed.
Except that autovac *won't* vacuum it if the stats have been reset. So I'm not seeing that there's really a problem in practice.
regards, tom lane
Re: [HACKERS] [PATCHES] COPY view
Andrew Dunstan [EMAIL PROTECTED] writes:
> It's a close call. On balance I'd be inclined to accept the patch if it reviews OK, even though we will throw the code away soon (we hope).
Well, the patch seems pretty ugly code-wise as well. I'd be willing to clean it up if I thought it wouldn't ultimately get yanked out again, but I'm not sure that I see the point if we think it's a dead end. It doesn't come close to applying to CVS HEAD, either :-(
regards, tom lane
[HACKERS] Where is hstore?
Hi, sorry if I'm on the wrong list (tell me so!). I'd like to know whether hstore isn't supposed to be in contrib any more, and where it is supposed to live nowadays (I was close to filing a complaint with my distributor until I discovered that it isn't in the 8.1 upstream source).
Thanks
-- 
tomás
Re: [HACKERS] Where is hstore?
[EMAIL PROTECTED] wrote:
> Hi, sorry if I'm on the wrong list (tell me so!). I'd like to know whether hstore isn't supposed to be in contrib any more, and where it is supposed to live nowadays (I was close to filing a complaint with my distributor until I discovered that it isn't in the 8.1 upstream source).
When has it ever been?
cheers, andrew
Re: [HACKERS] Where is hstore?
[EMAIL PROTECTED] wrote:
> Hi, sorry if I'm on the wrong list (tell me so!). I'd like to know whether hstore isn't supposed to be in contrib any more, and where it is supposed to live nowadays (I was close to filing a complaint with my distributor until I discovered that it isn't in the 8.1 upstream source).
hstore can be found here: http://www.sai.msu.su/~megera/postgres/gist/ but it has never actually been part of contrib, so I'm not sure what you are complaining about?
Stefan
Re: [HACKERS] Where is hstore?
On Tue, Aug 22, 2006 at 04:06:22PM +0000, [EMAIL PROTECTED] wrote:
> Hi, sorry if I'm on the wrong list (tell me so!). I'd like to know whether hstore [...]
I've been set right. Seems hstore was never in contrib. Sorry for the noise. BTW: any reason it isn't? It is very cool...
Regards
-- 
tomás
Re: [HACKERS] [PATCHES] COPY view
Hans-Juergen Schoenig wrote:
> Tom Lane wrote:
>> Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
>>> Bruce Momjian wrote:
>>>> Well, the patch was submitted in time, and it is a desired feature. If we want to hold it for 8.3 due to lack of time, we can, but I don't think we can decide now that it must wait.
>>> Well, I thought the agreed approach was to allow COPY from arbitrary expressions without the need to go through the extra CREATE VIEW step?
>> Exactly. This is not the feature that was agreed to. Just because we have a patch for it doesn't mean that we have to put it in. If we do put it in, we'll be stuck carrying that feature forever, even after someone gets around to doing it right.
> It was made COPY FROM/TO view because people wanted it done that way. My original proposal was in favour of arbitrary SELECTs (just as proposed by the TODO list), but that was rejected. So we did it that way (and had to explain to the customer why views are better). Now everybody wants the original SELECT that was proposed. I can understand if things are not committed because of bad code quality or whatever, but to be honest, it is more or less frustrating when things are done differently because of community wish and then rejected because they are not done the original way... Things were submitted months ago and now we are short of time. I think everybody on the list is doing a superior job, but after 6 years I still have no idea how patches are treated ;).
There's nothing hidden (unless it's also hidden from me ;-) ). I take it that when you talk about what "we did" you are referring to the patch from Karel Zak. I have had a quick look at that patch, and apart from not applying cleanly to the current CVS tree (which isn't your fault, as the patch has been sitting around for so long), it is also missing regression tests and docs. That's without even looking at code quality. So, how quickly can you fix those three things?
cheers, andrew
[HACKERS] Tricky bugs in concurrent index build
I see fairly nasty problems in the concurrent-index patch when it's trying to build a unique index. I think it's solvable but want some more eyeballs on my reasoning.

Look at the code in IndexBuildHeapScan where we are deciding whether or not to include a tuple in the index (indexIt) and also whether or not to include it in the uniqueness check (tupleIsAlive).

In a normal non-concurrent build, we have to include recently-dead tuples in the index because transactions started before ours might try to use the index after we finish it. (Remember system catalogs generally operate on SnapshotNow, so a query could use a newly created index even though it will be run with a serializable snapshot much older than the index.) So we have to put tuples into the index if any active transaction might wish to see those tuples.

OTOH, we should exclude dead tuples from the uniqueness check: the uniqueness constraint ought to be across currently-valid tuples only. In particular, for tuples previously created or deleted by our own transaction, we certainly must include created ones and not include deleted ones in the uniqueness check.

In the past, the only way we could see HEAPTUPLE_INSERT_IN_PROGRESS or HEAPTUPLE_DELETE_IN_PROGRESS was for tuples created/deleted by our own transaction, and so the actions taken by IndexBuildHeapScan are to include in the index in both cases, but exclude DELETE_IN_PROGRESS tuples from the uniqueness check.

This does not work for a concurrent build, though, because if the in-progress delete is from another transaction, it could roll back after we look. In that case we have an entry that is in the index and has escaped the uniqueness check. If it conflicts with another tuple also entered into the index in the first pass, we'll never notice that.

I think we can solve this by having IndexBuildHeapScan not index DELETE_IN_PROGRESS tuples if it's doing a concurrent build. The problem of old transactions trying to use the index does not exist, because we'll wait 'em out before marking the index valid, so we need not worry about preserving validity for old snapshots. And if the deletion does in fact roll back, we'll insert the tuple during the second pass, and catch any uniqueness violation at that point.

But wait, there's more: in the patch as it stands, the second pass over the table ignores DELETE_IN_PROGRESS tuples, which is wrong. It's entirely possible for a tuple that is RECENTLY_DEAD or DELETE_IN_PROGRESS to have no entry in the index, if it was inserted during the first pass, and then the deletion occurred after the first pass (and either has or hasn't committed yet). If we ignore DELETE_IN_PROGRESS and then the deleter rolls back, the tuple never gets into the index at all.

Furthermore, the patch also tries to insert RECENTLY_DEAD tuples, which is good for MVCC coverage, but wrong for uniqueness checking --- keep in mind that in the second pass, we are just doing normal index insertions, and so anything we insert into the index will be uniqueness-checked as though still alive. We could get a uniqueness failure that should not occur, eg from trying to insert the old version of a recently-updated row.

What I think we can do about this is to include DELETE_IN_PROGRESS tuples in the set of candidate tuples to insert in the second pass. During the merge step that verifies whether the tuple is already in the index, if we find that it's not, then we must wait for the deleter to commit or roll back. If the deleter commits then we ignore the tuple. If the deleter rolls back then we have to insert the tuple in the index. (I think we have to actually take a FOR UPDATE or possibly FOR SHARE lock on the tuple while we do this, else we have race conditions against someone else starting a new deletion attempt on the tuple.) In the commit case we've essentially waited long enough to transform DELETE_IN_PROGRESS into RECENTLY_DEAD, and for both of those statuses we are not going to insert into the index for fear of causing a false uniqueness violation.

What that means is that after we finish the second pass, we need *another* wait-for-everyone-to-die before we can mark the index valid. Otherwise we have the same MVCC problem that someone might try to use the index for a query with a snapshot old enough that it should be able to see the not-inserted tuple.

Have I missed anything? This is tricky stuff. In any case it's clear that the patch still needs major work. Greg, do you have cycles to spare now? regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
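The per-tuple decisions described above can be sketched as a decision table. This is illustrative Python only, not the actual IndexBuildHeapScan code; the state names mirror the HEAPTUPLE_* results, and the `own_xact` flag (marking changes made by the indexing transaction itself) is this sketch's own assumption:

```python
# Illustrative decision table for the first heap pass, as implied by the
# discussion above. NOT the real IndexBuildHeapScan logic.
def first_pass(state, concurrent, own_xact=True):
    """Return (index_it, check_unique) for one heap tuple."""
    if state == "LIVE":
        return (True, True)
    if state == "DEAD":
        return (False, False)
    if state == "RECENTLY_DEAD":
        # Needed so old snapshots can still see it, but excluded from
        # the uniqueness check: the constraint covers live tuples only.
        return (True, False)
    if state == "INSERT_IN_PROGRESS":
        return (True, True)
    if state == "DELETE_IN_PROGRESS":
        if concurrent and not own_xact:
            # Proposed fix: skip it now; if the deleter later aborts,
            # the second pass inserts it and rechecks uniqueness then.
            return (False, False)
        # Non-concurrent case: the deleter is our own transaction.
        return (True, False)
    raise ValueError(state)
```

The interesting row is DELETE_IN_PROGRESS from another transaction: indexing it would let a rolled-back delete leave an entry that escaped the uniqueness check.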
Re: BugTracker (Was: Re: [HACKERS] 8.2 features status)
Andrew, Why are we even debating a system when it has been reported that the authors believe it is completely unsuitable for use by the PostgreSQL project? Not *completely*. More that it would take a couple dozen hours of work to make it good for us, and the resulting version then couldn't be synched with the Debian version. Mind you, it would take an equal amount of time to add an e-mail-comment interface to Bugzilla, but BZ would then probably accept the patch. -- --Josh Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] BugTracker (Was: Re: 8.2 features status)
Did you look at http://www.atlassian.com/software/jira/
- can use PostgreSQL as its database
- free to open source projects; used by Apache, Hibernate, OpenSymphony
- bugs may be submitted via email/web
- built-in configurable workflow
- runs as a J2EE webapp on a number of OSes
- lots of other features
I am not associated with the company, just a user. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] PostgreSQL on 64 bit Linux
On 21 Aug 2006, at 10:48, [EMAIL PROTECTED] wrote: On Mon, Aug 21, 2006 at 12:00:58PM +0300, Alexander Kirpa wrote:

WRT 64-bit and Postgres, it depends on the CPU as to whether you see a simple performance benefit. On the Opteron you will see a benefit when doing CPU-bound work. When doing the CPU portion, the additional registers of the Opteron running in 64-bit mode are used by the compiler to produce a 20-30% boost in performance. On the Xeon in 64-bit mode, the same regions of execution will slow down by about 5%. Postgres benefits automatically from the larger memory addressing of the 64-bit kernel by using the larger I/O cache of Linux.

The main benefit of 64-bit mode for Postgres shows up only on a dedicated database server with more than about 3GB of RAM, where most of the RAM can be given to shared buffers to avoid constantly shuffling buffers between the OS cache and shared memory. On a system with less than 2-3GB of RAM it is difficult to find a serious performance gain.

This is the main difference between PostgreSQL today - designed for 32-bit - when recompiled with a 64-bit compiler. The additional registers are barely enough to counter the increased cost of processing in 64-bits. Cheers, mark

Even the current, 32-bit-oriented Postgres architecture allows the main benefit of a 64-bit OS - huge memory for shared buffers - to be used: up to 2G x 8KB = 16TB can be addressed as shared memory. For that case it would be worth using O_DIRECT to bypass the OS cache, especially when the database fits entirely into shared buffers. Best regards, Alexander Kirpa ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
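The shared-buffers arithmetic in that last paragraph checks out, assuming binary units (2G buffer slots of 8KB each):

```python
# Sanity check of the "2G x 8KB = 16TB" claim, using binary G/K/T.
buffers = 2 * 2**30       # 2G buffer slots (a 32-bit signed buffer count)
block_size = 8 * 2**10    # 8KB PostgreSQL block size
total = buffers * block_size
print(total == 16 * 2**40)  # True: 16TB of addressable shared memory
```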
Re: [HACKERS] Tricky bugs in concurrent index build
Tom Lane [EMAIL PROTECTED] writes: I think we can solve this by having IndexBuildHeapScan not index DELETE_IN_PROGRESS tuples if it's doing a concurrent build.

Sure.

It's entirely possible for a tuple that is RECENTLY_DEAD or DELETE_IN_PROGRESS to have no entry in the index, if it was inserted during the first pass, and then the deletion occurred after the first pass (and either has or hasn't committed yet).

Egads. That's nasty indeed.

Furthermore, the patch also tries to insert RECENTLY_DEAD tuples, which is good for MVCC coverage, but wrong for uniqueness checking --- keep in mind that in the second pass, we are just doing normal index insertions, and so anything we insert into the index will be uniqueness-checked as though still alive. We could get a uniqueness failure that should not occur, eg from trying to insert the old version of a recently-updated row.

Hm, I hadn't absorbed the purpose of isAlive and the distinction between live for uniqueness checks and live for index build purposes. Is it not possible to brute-force this by adding an AM method to insert without the uniqueness check? That would mean the index build would fail even if the transaction eventually aborts though. (Or even if it has already aborted?)

[ extended description of complex footwork involving more waiting while holding locks ] Have I missed anything? This is tricky stuff.

Wow, that seems pretty unsatisfactory; all the waiting and locking sounds awful. If you have a lot of update transactions starting continuously you could keep bumping into this situation and repeatedly have to wait for new transactions to end. It also seems like a lot of code :(

In any case it's clear that the patch still needs major work. Greg, do you have cycles to spare now?

I do. But I'll have to spend some time just rereading the code and your comments to convince myself that all this waiting and locking is the best solution. 
-- greg ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] An Idea for planner hints
Peter Eisentraut wrote: Jim C. Nasby wrote: Meet EXPLAIN ANALYZE. Which does no good for apps that you don't control the code on. Even if you do control the code, you have to find a way to stick EXPLAIN ANALYZE in front of every query, and figure out how to deal with what's coming back.

It would not be hard to create an auto explain analyze mode that implicitly runs EXPLAIN ANALYZE along with every query and logs the result.

On its face, it sounds like an obviously great idea. I just don't see how you would put that to actual use, unless you want to read server logs all day long. Grepping for query duration and using the statistics views are much more manageable tuning methods. In my view anyway.

Going back to the original discussion though, there's no reason this needs to involve EXPLAIN ANALYZE. All we want to know is what columns the planner is dealing with as a set rather than individually.

This would log a whole bunch of column groups, since every moderately interesting query uses a column in combination with some other column, but you still won't know which ones you want the planner to optimize. To get that piece of information, you'd need to do something like principal component analysis over the column groups thus identified. Which might be a fun thing to do. But for the moment I think it's better to stick to declaring the interesting pairs/groups manually.

If the system logs which cross-table join statistics it didn't have for cross-table joins that it actually performed, it won't log the really interesting stuff. What is interesting are the plans that it didn't choose on account of guessing that they were too expensive, when in reality the cross-table statistics were such that they were not too expensive. This case might not be the common case, but it is the interesting case. We are trying to get the planner to notice cheap plans that don't look cheap unless you have the cross-table statistics.

So you have a chicken-and-egg problem here unless the system attempts (or outputs without actually attempting) what appear to be sub-optimal plans in order to determine how bad they really are.

I proposed something like this quite a bit up-thread. I was hoping we could have a mode in which the system would run the second, third, fourth, ... best plans rather than just the best-looking one, and then determine from actual runtime statistics which was best. (The proposal also included the ability to output the best plan and read that in at a later time in lieu of a SQL query, but that part of it can be ignored if you like.) The posting didn't generate much response, so I'm not sure what people thought of it.

The only major problem I see is getting the planner to keep track of alternate plans. I don't know the internals of it very well, but I think the genetic query optimizer doesn't have a concept of runner-up #1, runner-up #2, etc., which it would need to have. mark ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
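The run-the-runners-up proposal can be sketched in a few lines. Everything here is hypothetical (the plan list, the `run` timing hook, the cutoff `k`); it only illustrates the idea of choosing among the top-k estimated plans by measured runtime rather than by estimate alone:

```python
def best_by_measurement(plans, run, k=3):
    """plans: iterable of (estimated_cost, plan) pairs.
    run(plan): hypothetical hook that executes a plan and returns
    its elapsed time in seconds.
    Try the k cheapest-looking plans and return the one that was
    actually fastest, regardless of what the estimates said."""
    candidates = sorted(plans, key=lambda p: p[0])[:k]
    timed = [(run(plan), plan) for _, plan in candidates]
    return min(timed)[1]
```

For example, with estimates `[(10, "A"), (12, "B"), (50, "C")]` but measured runtimes where B is actually fastest, the function returns "B" even though A looked cheapest on paper.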
Re: [HACKERS] [GENERAL] Queries joining views
Alban Hertroys [EMAIL PROTECTED] writes: I'm confused too. Would it be possible for you to send me a dump of your database? Attached is a cleaned-out database; the full schema is included, but only the relevant tables contain any data.

Thanks. After digging through it a bit, I understand what's happening, but I'm not seeing any simple fix. The problem is basically that you've got

create or replace view mm_product as SELECT ... FROM mm_product_table JOIN mm_object mmop USING (number);

and then the problem query has

WHERE mm_product.number = insrel.snumber

which causes the planner to conclude that mm_product_table.number, mm_object.number, and mm_insrel_table.snumber are all basically interchangeable. In particular it ends up performing the join between mm_product_table.number and mm_object.number as though mm_product_table.number were being joined to mm_insrel_table.snumber. Which is fine, except that it's thinking that the statistics for mm_object.number are applicable in this context, and they're completely misleading. After the join to mm_insrel_table, the statistics of the variable are really like mm_insrel_table.number --- in particular the fraction of the table that has to be visited is much larger than it would've been for mm_object as a whole.

This is a problem we've understood in a generic form for a while: a join or selection might change the statistics of a variable, and so the info stored in the catalogs ought to be modified somehow to predict what will happen at upper join levels. We've not seen it in this particular form before, though.

I'm not sure if there's a whole lot you can do about it in the near term other than refactor your schema to avoid having different tables joining to different subranges of mm_object.number. (You don't necessarily have to get rid of mm_object --- just try assigning its keys from a serial, or something, so that there's no correlation to the ranges of keys in other tables.) 
We might be able to do something about actually solving the statistical problem in 8.3, but I fear it's too late to think about it for 8.2. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
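A toy model of the skew Tom describes, with entirely made-up numbers (these are not the planner's actual formulas): whole-table statistics on mm_object.number describe all rows, but the join through mm_insrel_table.snumber only ever touches the mm_product slice of those keys, so the effective fraction of qualifying rows is far higher than the stored stats suggest:

```python
# Hypothetical sizes: mm_object holds keys for many entity tables, but
# only a small slice of them are mm_product rows, and those are the
# only ones the insrel join can match.
n_object = 1_000_000      # total mm_object rows (made up)
product_slice = 10_000    # mm_object rows belonging to mm_product (made up)

# Whole-table stats suggest a matching key is this rare among all rows:
est_fraction = product_slice / n_object    # 1% of the table
# After the join restricts us to the slice, every fetched row qualifies,
# so the stats understate the visited fraction by this factor:
underestimate = n_object // product_slice
print(underestimate)  # 100
```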
Re: [HACKERS] [GENERAL] Queries joining views
I wrote: Thanks. After digging through it a bit, I understand what's happening, but I'm not seeing any simple fix. I forgot to mention that although I could reproduce your bad plan in 8.1, CVS HEAD doesn't fall into the trap. I don't believe we've done anything to fix the fundamental problem however --- it may just be a side effect of the changes in the indexscan cost model that cause it to not go for the bogus plan. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] [GENERAL] Queries joining views
Tom Lane wrote: We might be able to do something about actually solving the statistical problem in 8.3, but I fear it's too late to think about it for 8.2. I take it you mean you already have a very concrete idea on how to solve it. Come on, illuminate us poor dumb souls. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Tricky bugs in concurrent index build
Greg Stark [EMAIL PROTECTED] writes: Is it not possible to brute force this adding an AM method to insert without the uniqueness check? Hm. Actually there already is a feature of aminsert to allow suppressing the unique check, but I'm not sure whether using it for RECENTLY_DEAD tuples helps. Seems like we have to wait to see whether DELETE_IN_PROGRESS deleters commit in any case. Have I missed anything? This is tricky stuff. Wow, that seems pretty unsatisfactory, all the waiting and locking sounds awful. Yeah, I'm very unhappy. The whole idea may be going down in flames :-( It's fairly clear that we could support concurrent builds of nonunique indexes, but is that enough of a use-case to justify it? regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Tricky bugs in concurrent index build
Wow, that seems pretty unsatisfactory, all the waiting and locking sounds awful. Yeah, I'm very unhappy. The whole idea may be going down in flames :-( It's fairly clear that we could support concurrent builds of nonunique indexes, but is that enough of a use-case to justify it? I believe there would be. Most PostgreSQL users I run into develop in production, which means being able to add an index they forgot when doing query analysis. Most of the time (I would say 95%) this is not a unique index. Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] [BUGS] BUG #2585: Please provide pkg-config support
[ redirecting to pgsql-hackers since this is not a bug ] Alvaro Herrera [EMAIL PROTECTED] writes: Manuel A. Fernandez Montecelo wrote: Well, I use around 10 libraries in my project, it's easier to use standardized methods to gather the compiler/linker options using pkg-config than calling different configuration scripts for each one, Just FYI, this was proposed and rejected before. pg_config is our method. It doesn't seem like too onerous a requirement to change pkg- to pg_, is it? The previous discussion starts here: http://archives.postgresql.org/pgsql-patches/2004-08/msg00433.php That requestor got off to an extremely bad start by presuming that a documentation-free patch with no prior discussion would be a sufficient way of proposing a new feature. However there are a couple of pretty compelling points mentioned later in the thread, particularly the issue of multiple installations on one machine. (And does pkg-config still not distinguish CPPFLAGS from CFLAGS?) Since that was two years ago, maybe we should take another look and see if pkg-config has gotten better. If it hasn't evolved some then the answer will probably be the same though. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] [GENERAL] Queries joining views
Alvaro Herrera [EMAIL PROTECTED] writes: Tom Lane wrote: We might be able to do something about actually solving the statistical problem in 8.3, but I fear it's too late to think about it for 8.2. I take it you mean you already have a very concrete idea on how to solve it. Come on, illuminate us poor dumb souls. No, I don't :-( ... that was intended to suggest that we might think of a solution given months to work on it rather than days. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Tricky bugs in concurrent index build
Joshua D. Drake [EMAIL PROTECTED] writes: It's fairly clear that we could support concurrent builds of nonunique indexes, but is that enough of a use-case to justify it? I believe there would be. Most PostgreSQL users I run into, develop in production, which means being able to add an index they forgot when doing query analysis. True, unique constraints are usually something you should get right to start with. But it'll be annoying if we can do everything BUT that :-( regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Where is hstore?
[EMAIL PROTECTED] writes: I've been set right. Seems hstore was never in contrib. Sorry for the noise. BTW: any reason it isn't? It is very cool... AFAIR the authors have never proposed it for inclusion. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Tricky bugs in concurrent index build
Tom Lane wrote: Joshua D. Drake [EMAIL PROTECTED] writes: It's fairly clear that we could support concurrent builds of nonunique indexes, but is that enough of a use-case to justify it? I believe there would be. Most PostgreSQL users I run into, develop in production, which means being able to add an index they forgot when doing query analysis. True, unique constraints are usually something you should get right to start with. But it'll be annoying if we can do everything BUT that :-( Agreed, but better than nothing :). Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] [PATCHES] selecting large result sets in psql using
True. They could even put it in .psqlrc if they want. Basically we need a way to modify \g. Seems a \set is the way we have always done such modifications in the past. The big question is whether this is somehow different. Personally, I don't think so.

If you want a \set variable, then at least make it do something useful: make it an integer var that sets the fetch count, rather than hard-wiring the count as is done in Chris' existing patch. Zero (or perhaps unset) disables. regards, tom lane

Hello, first I must admit that I misunderstood Bruce's post. I thought he meant to tweak \pset (the psql command to set formatting). This didn't make sense to me. Only now do I realize everyone is talking about \set (psql internal variables).

That being said, I'm a bit unsure now what we should do. As Peter said, it is true that mostly this feature would be used for scripting, where \set and \unset are not as cumbersome to use as in an interactive session. Tom's idea to factor in the fetch count as an option is also very tempting.

To cut the Gordian knot I'm going to suggest we use: \set CURSOR_FETCH fetch_count and \g and ; are modified such that when they see this variable set to a fetch_count greater than zero and the buffer is a select, they use the modified fetch/output code. Does this sound reasonable to everyone? Bye :) Chris. -- Chris Mair http://www.1006.org ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [PATCHES] COPY view
Tom Lane wrote: Stefan Kaltenbrunner [EMAIL PROTECTED] writes: Bruce Momjian wrote: Well, the patch was submitted in time, and it is a desired feature. If we want to hold it for 8.3 due to lack of time, we can, but I don't think we can decide now that it must wait. well I thought the agreed approach to that was allowing COPY from arbitrary expressions without the need to go through the extra CREATE VIEW step? Exactly. This is not the feature that was agreed to. Just because we have a patch for it doesn't mean that we have to put it in. If we do put it in, we'll be stuck carrying that feature forever, even after someone gets around to doing it right. regards, tom lane

It has been made as COPY FROM / TO view because people wanted it done that way. My original proposal was in favour of arbitrary SELECTs (just as proposed by the TODO list), but this was rejected. So we did it that way (and had to explain to the customer why views are better). Now everybody wants the original SELECT which was proposed.

I can understand if things are not committed because of bad code quality or whatever, but to be honest: it is more or less frustrating if things are done differently because of community wish and then rejected because things are not done the original way ... Things were submitted months ago and now we are short of time. I think everybody on the list is doing a superior job, but after 6 years I still have no idea how patches are treated ;).

best regards, hans -- Cybertec Geschwinde Schönig GmbH Schöngrabern 134; A-2020 Hollabrunn Tel: +43/1/205 10 35 / 340 www.postgresql.at, www.cybertec.at ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [PATCHES] COPY view
Hans-Juergen Schoenig wrote: It has been made as COPY FROM / TO view because people wanted it to be done that way. My original proposal was in favour of arbitrary SELECTs (just like proposed by the TODO list) but this was rejected. So, we did it that way (had to explain to customer why views are better). Now everybody wants the original select which was proposed. This is not the first time this has happened. It has happened to Simon Riggs at least once, and to me as well. Sometimes the community just doesn't realize what it wants until what it thinks it wants is done, and then it realizes it wants something else. It is frustrating, but I don't see how to do things differently. Things have been submitted months ago and now we are short of time. I think everybody on the list is going a superior job but after 6 years I still have no idea how patches are treated ;). It sucks that patches are posted and no action is taken on them for months. I agree with that. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [PATCHES] COPY view
Alvaro Herrera [EMAIL PROTECTED] writes: It sucks that patches are posted and no action is taken on them for months. I agree with that. This particular patch was originally posted during the 8.1 feature freeze window (2005-09-29), so it was doomed to a certain amount of languishing on the to-worry-about-later list in any case. We should have gotten around to reviewing it sooner than we did (the followup discussion was around 2006-06-14), but there was still plenty of time at that point to rework it per the discussion and get it into 8.2. As I see it, we've effectively got a patch that was rejected once, and Bruce wants to apply it anyway because no replacement has been forthcoming. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] BugTracker (Was: Re: 8.2 features status)
[EMAIL PROTECTED] wrote: Did you look at http://www.atlassian.com/software/jira/ We had discussed that in an earlier round, but it's not free software, so it's out of the question. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Where is hstore?
On Aug 22, 2006, at 17:01, Tom Lane wrote: [EMAIL PROTECTED] writes: I've been set right. Seems hstore was never in contrib. Sorry for the noise. BTW: any reason it isn't? It is very cool... AFAIR the authors have never proposed it for inclusion. According to http://www.sai.msu.su/~megera/postgres/gist/hstore/README.hstore : Stable version, included into PostgreSQL distribution, ... version 2 (June 1991). -M ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [BUGS] BUG #2585: Please provide pkg-config support
Tom Lane wrote: Since that was two years ago, maybe we should take another look and see if pkg-config has gotten better. If it hasn't evolved some then the answer will probably be the same though. For all intents and purposes, pkg-config is still as broken as it ever was. And I still haven't heard of a problem that it would solve that we don't currently cover. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Where is hstore?
AgentM [EMAIL PROTECTED] writes: On Aug 22, 2006, at 17:01, Tom Lane wrote: AFAIR the authors have never proposed it for inclusion. According to http://www.sai.msu.su/~megera/postgres/gist/hstore/README.hstore : Stable version, included into PostgreSQL distribution, ... version 2 (June 1991). 1991? There's a typo there somewhere. But anyway it has never actually been in the distribution, and the closest thing to a proposal I can find in the archives is http://archives.postgresql.org/pgsql-hackers/2003-05/msg00763.php which specifically says not released yet. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] [PATCHES] selecting large result sets in psql using
[EMAIL PROTECTED] writes: To cut the Gordon knot I'm going to suggest we use: \set CURSOR_FETCH fetch_count and \g and ; are modified such that when they see this variable set to fetch_count 0 and the buffer is a select they would use the modified fetch/output code. Does this sound reasonable to everyone? OK with me, but maybe call the variable FETCH_COUNT, to avoid the presupposition that the implementation uses a cursor. As I mentioned before, I expect we'll someday rework it to not use that. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Tricky bugs in concurrent index build
Tom Lane [EMAIL PROTECTED] writes: Greg Stark [EMAIL PROTECTED] writes: Is it not possible to brute force this adding an AM method to insert without the uniqueness check? Hm. Actually there already is a feature of aminsert to allow suppressing the unique check, but I'm not sure whether using it for RECENTLY_DEAD tuples helps. Seems like we have to wait to see whether DELETE_IN_PROGRESS deleters commit in any case.

Hm, actually don't we need both of these to make it work? We need to see whether the deleter commits to determine whether to enforce the uniqueness constraint on the missing tuple.

. If the deleter aborts, we need to insert the tuple normally, including enforcing the constraint.

. If the deleter commits, then we still need to insert the tuple, but without enforcing the constraint, in case some old transaction queries the index.

What would happen if we just inserted DELETE_IN_PROGRESS tuples normally? Would the only risk be that the index build would fail with a spurious unique-constraint violation? I suppose that would be pretty common though, given how updates work.

Incidentally, does this point out a problem with the planner depending on unique constraints? For old (serializable) transactions the unique index exists and the constraint is enforced, but they can still find tuples that were deleted before the index was built and might violate the unique constraint. Even if we had the plan-invalidation mechanism that's frequently mentioned, you would still have to check constraints against your snapshot and not just SnapshotNow for planning purposes.

-- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Tricky bugs in concurrent index build
Greg Stark [EMAIL PROTECTED] writes: What would happen if we just insert DELETE_IN_PROGRESS tuples normally? Would the only risk be that the index build would fail with a spurious unique constraint violation? I suppose it would be pretty common though given how updates work. Yeah, that's the problem: if we can't support UPDATEs that don't change the to-be-unique column, it ain't much of a feature. Incidentally does this point out a problem with the planner depending on unique constraints? Good point. It doesn't depend on them yet, but we've been hoping to make it do so once we have plan invalidation capability. We shall have to think very carefully about timing semantics of all that. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] [PATCHES] COPY view
Tom Lane wrote: Alvaro Herrera [EMAIL PROTECTED] writes: It sucks that patches are posted and no action is taken on them for months. I agree with that. This particular patch was originally posted during the 8.1 feature freeze window (2005-09-29), so it was doomed to a certain amount of languishing on the to-worry-about-later list in any case. We should have gotten around to reviewing it sooner than we did (the followup discussion was around 2006-06-14), but there was still plenty of time at that point to rework it per the discussion and get it into 8.2. As I see it, we've effectively got a patch that was rejected once, and Bruce wants to apply it anyway because no replacement has been forthcoming. Yea, that pretty much sums it up. Based on the number of people who wanted it applied, I think we need to have a discussion like this. I can easily go with rejecting it, but I think the discussion is needed to be fair to the patch author. So, what do we want to do with this? Where did we say we didn't want SELECT? I never remember that being discussed. I remember us saying we never wanted SELECT or VIEWs because it was going to be slow, but once the SELECT idea came up, I think we decided we wanted that, and views could be built on top of that. I certainly never remember us saying we didn't want SELECT but wanted views. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] [PATCHES] COPY view
On Tuesday 22 August 2006 16:10, Tom Lane wrote:
> Alvaro Herrera [EMAIL PROTECTED] writes:
>> It sucks that patches are posted and no action is taken on them for
>> months.
>
> I agree with that. This particular patch was originally posted during
> the 8.1 feature freeze window (2005-09-29), so it was doomed to a
> certain amount of languishing on the to-worry-about-later list in any
> case. We should have gotten around to reviewing it sooner than we did
> (the followup discussion was around 2006-06-14), but there was still
> plenty of time at that point to rework it per the discussion and get
> it into 8.2.
>
> As I see it, we've effectively got a patch that was rejected once, and
> Bruce wants to apply it anyway because no replacement has been
> forthcoming.

Well, unless someone is going to commit to doing it the other way, it
seems the guy who actually codes something offers a better solution
than handwaving... people have also had plenty of time to come up with
a replacement if that's what they really wanted.

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
Re: [HACKERS] news server does not respond
After a long battle with technology, Lukas Kahwe Smith [EMAIL PROTECTED], an earthling, wrote:
> Markus Schiltknecht wrote:
>> Marc G. Fournier wrote:
>>> Fixed, sorry for delay ...
>>
>> Good, thank you. But I've already switched back to IMAP, with
>> subfolders and automatic filtering. Has the advantage of being
>> available from any IMAP-capable client _and_ saving the flags. Looks
>> like the news server is not used that much, if I was the only one
>> complaining...
>
> Uhm, I am using it as well and I noticed a few others complaining as
> well on IRC. All the better that it is back in action :)

Yeah, and you can't complain when you're cut off...
-- 
let name = "cbbrowne" and tld = "gmail.com" in name ^ "@" ^ tld;;
http://cbbrowne.com/info/postgresql.html
"Know the list of large, chronic problems. If there is any problem with
the window system, blame it on the activity system. Any lack of user
functionality should be attributed to the lack of a command processor.
A surprisingly large number of people will believe that you have
thought in depth about the issue to which you are alluding when you
do." -- from the Symbolics Guidelines for Sending Mail
Re: [HACKERS] [PATCHES] COPY view
Robert Treat [EMAIL PROTECTED] writes:
> On Tuesday 22 August 2006 16:10, Tom Lane wrote:
>> As I see it, we've effectively got a patch that was rejected once,
>> and Bruce wants to apply it anyway because no replacement has been
>> forthcoming.
>
> Well, unless someone is going to commit to doing it the other way, it
> seems the guy who actually codes something offers a better solution
> than handwaving... people have also had plenty of time to come up with
> a replacement if that's what they really wanted.

The patch submitter has neither provided an updated patch nor defended
his original submission as being the right thing. If he doesn't take it
seriously enough to have done any followup, why should the rest of us?

At the moment, with the online-index and updatable-views patches both
pretty seriously broken, and no sign that the bitmap-index people are
awake at all, I might take it on myself to fix this one instead of
those others. But is that what I should be spending my time on in the
waning days of the 8.2 freeze cycle? Speak now or hold your peace.

			regards, tom lane
Re: pgsql-patches reply-to (was Re: [HACKERS] [PATCHES] selecting large result sets in psql using cursors)
On Thursday 17 August 2006 11:55, Peter Eisentraut wrote:
> Tom Lane wrote:
>> Yeah, that experiment hasn't seemed to work all that well for me
>> either. Do you have another idea to try, or do you just want to
>> revert to the old way?
>
> Since almost the first day I hacked on PostgreSQL I have been
> filtering both lists into the same folder, so they pretty much appear
> to be one and the same to me anyway.

I'm curious, do you combine any other lists like that? I've played
around with that idea (for example, I used to combine webmaster emails,
pgsql-www, and -slaves emails, but the slaves traffic was too high so I
had to split it back out). As someone subscribed to a good dozen pg
lists, I've always been quite amazed how much email some of the folks
here manage to process... I suppose I could just chalk it up to a pine
vs. GUI thing, but I suspect there are some other tricks people have to
make emails more manageable (anyone combine all pg mail to one folder?)

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
Re: [HACKERS] [PATCHES] COPY view
Tom Lane wrote:
> Robert Treat [EMAIL PROTECTED] writes:
>> On Tuesday 22 August 2006 16:10, Tom Lane wrote:
>>> As I see it, we've effectively got a patch that was rejected once,
>>> and Bruce wants to apply it anyway because no replacement has been
>>> forthcoming.
>>
>> Well, unless someone is going to commit to doing it the other way, it
>> seems the guy who actually codes something offers a better solution
>> than handwaving... people have also had plenty of time to come up
>> with a replacement if that's what they really wanted.
>
> The patch submitter has neither provided an updated patch nor defended
> his original submission as being the right thing. If he doesn't take
> it seriously enough to have done any followup, why should the rest of
> us?
>
> At the moment, with the online-index and updatable-views patches both
> pretty seriously broken, and no sign that the bitmap-index people are
> awake at all, I might take it on myself to fix this one instead of
> those others. But is that what I should be spending my time on in the
> waning days of the 8.2 freeze cycle? Speak now or hold your peace.

Your analysis is accurate. You can spend your time on whatever _you_
think is important. If someone wants to take on COPY VIEW and do all
the work to make it 100%, then they are welcome to do it, but if you
don't feel it is worth it, you can just leave it. If it isn't 100% by
the time we start beta, it is kept for a later release.

Alvaro has already indicated some problems with the patch (the
objection email is in the patches queue), so it is up to someone to
correct at least that, and if other objections are found, they have to
correct those too before 8.2 beta starts.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Re: pgsql-patches reply-to (was Re: [HACKERS] [PATCHES]
Robert Treat wrote:
> On Thursday 17 August 2006 11:55, Peter Eisentraut wrote:
>> Tom Lane wrote:
>>> Yeah, that experiment hasn't seemed to work all that well for me
>>> either. Do you have another idea to try, or do you just want to
>>> revert to the old way?
>>
>> Since almost the first day I hacked on PostgreSQL I have been
>> filtering both lists into the same folder, so they pretty much appear
>> to be one and the same to me anyway.
>
> I'm curious, do you combine any other lists like that? I've played
> around with that idea (for example, I used to combine webmaster
> emails, pgsql-www, and -slaves emails but the slaves traffic was too
> high so I had to split it back out). As someone subscribed to a good
> dozen pg lists, I've always been quite amazed how much email some of
> the folks here manage to process... I suppose I could just chalk it up
> to a pine vs. gui thing, but I suspect there are some other tricks
> people have to make emails more manageable (anyone combine all pg mail
> to one folder?)

Yes, all mine are in one folder, and I use elm ME. It is faster than a
GUI email client.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Re: pgsql-patches reply-to (was Re: [HACKERS] [PATCHES] selecting
> I'm curious, do you combine any other lists like that? I've played
> around with that idea (for example, I used to combine webmaster
> emails, pgsql-www, and -slaves emails but the slaves traffic was too
> high so I had to split it back out). As someone subscribed to a good
> dozen pg lists, I've always been quite amazed how much email some of
> the folks here manage to process... I suppose I could just chalk it up
> to a pine vs. gui thing, but I suspect there are some other tricks
> people have to make emails more manageable (anyone combine all pg mail
> to one folder?)

Well, as someone who is also on almost all of the PostgreSQL lists,
plus a number of sub-projects :) I filter everything postgresql except
for the funds list into a single box, and I process each in order :).
I used to break them up, but I found that with cross-posting, and
trying to reference back and forth, it was just easier to have a single
box.

I used to be a big pine user, but due to the large amount of email I do
process I had to move to Thunderbird, which makes certain things just
much easier.

Sincerely,

Joshua D. Drake

-- 
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
             http://www.commandprompt.com/
Re: pgsql-patches reply-to (was Re: [HACKERS] [PATCHES] selecting large result sets in psql using cursors)
Bruce Momjian [EMAIL PROTECTED] writes:
> Robert Treat wrote:
>> ... some other tricks people have to make emails more manageable
>> (anyone combine all pg mail to one folder?)
>
> Yes, all mine are in one folder, and I use elm ME. It is faster than a
> GUI email client.

All my PG list mail goes into one folder too. The list bot is pretty
good (not perfect :-() about sending only one copy of crossposted
messages.

Personally I use exmh, but I don't expect people who don't remember the
Mesozoic era to know what that is.

			regards, tom lane
Re: pgsql-patches reply-to (was Re: [HACKERS] [PATCHES] selecting large result sets in psql using cursors)
On Aug 23, 2006, at 12:15, Robert Treat wrote:
> On Thursday 17 August 2006 11:55, Peter Eisentraut wrote:
>> Tom Lane wrote:
>>> Yeah, that experiment hasn't seemed to work all that well for me
>>> either. Do you have another idea to try, or do you just want to
>>> revert to the old way?
>>
>> Since almost the first day I hacked on PostgreSQL I have been
>> filtering both lists into the same folder, so they pretty much appear
>> to be one and the same to me anyway.
>
> I'm curious, do you combine any other lists like that? I've played
> around with that idea (for example, I used to combine webmaster
> emails, pgsql-www, and -slaves emails but the slaves traffic was too
> high so I had to split it back out). As someone subscribed to a good
> dozen pg lists, I've always been quite amazed how much email some of
> the folks here manage to process... I suppose I could just chalk it up
> to a pine vs. gui thing, but I suspect there are some other tricks
> people have to make emails more manageable (anyone combine all pg mail
> to one folder?)

Reading pg ml mail is relatively high on my list of things I want to
do, so I have it all come into my inbox. However, with other mailing
lists (e.g., ruby-talk and the RoR lists, which have the highest volume
of any mailing lists I'm subscribed to) I generally have them routed
into their own folders. I usually let lower-volume mailing lists just
end up in my inbox as well.

I use Mail.app on Mac OS X 10.4, and make heavy use of the Mail
Act-On[1] plugin to make further processing of mail easier (such as
archiving to appropriate folders).

Michael Glaesemann
grzm seespotcode net

[1] http://www.indev.ca/MailActOn.html
Re: pgsql-patches reply-to (was Re: [HACKERS] [PATCHES] selecting
Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
>> Robert Treat wrote:
>>> ... some other tricks people have to make emails more manageable
>>> (anyone combine all pg mail to one folder?)
>>
>> Yes, all mine are in one folder, and I use elm ME. It is faster than
>> a GUI email client.
>
> All my PG list mail goes into one folder too. The list bot is pretty
> good (not perfect :-() about sending only one copy of crossposted
> messages. Personally I use exmh, but I don't expect people who don't
> remember the Mesozoic era to know what that is.

I know what it is from textbooks ;). Practical Unix 3rd Ed, by Sobel, I
think it was.

Sincerely,

Joshua D. Drake

-- 
   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
             http://www.commandprompt.com/
Re: [HACKERS] Autovacuum on by default?
Jim C. Nasby [EMAIL PROTECTED] wrote:
> And +1 on Rod's suggestion to make it more aggressive. I always drop
> the scale factor to at least 0.2 and 0.1 (though 0.1 and 0.05 don't
> seem unreasonable), and typically drop the thresholds to 200 and 100
> (though again, lower is probably warranted).

The default fillfactor for btree indexes is 90%. So if we want to avoid
splitting of the leaf pages, the vacuum scale factors should be less
than 0.1 in cases where tuples are only updated randomly. I think the
thresholds should be less than PCTFREE (= 1 - fillfactor), except for
ever-increasing tables. This is an oversimplified policy, but we
probably need documentation for the linkage between autovacuum and
fillfactors.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
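The values discussed above could be expressed roughly like this (a sketch only; the table and index names are illustrative, and the exact numbers are the ones floated in the thread, not recommendations):

```sql
-- postgresql.conf-style settings matching the aggressive values
-- suggested above: scale factors 0.1 / 0.05, thresholds 200 / 100.
--   autovacuum = on
--   autovacuum_vacuum_scale_factor  = 0.1
--   autovacuum_analyze_scale_factor = 0.05
--   autovacuum_vacuum_threshold     = 200
--   autovacuum_analyze_threshold    = 100

-- A btree index defaults to fillfactor = 90, i.e. 10% free space per
-- leaf page.  The reasoning above: keep the vacuum scale factor below
-- PCTFREE (1 - 0.90 = 0.1) so dead tuples are reclaimed before the
-- leaf pages exhaust that slack and start splitting.
CREATE TABLE accounts (id int PRIMARY KEY, balance int);
CREATE INDEX accounts_balance_idx ON accounts (balance)
    WITH (FILLFACTOR = 90);
```

For an ever-increasing (insert-only) table the linkage does not apply, since new keys land on fresh rightmost pages rather than in the free space vacuum preserves.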
Re: [HACKERS] [PATCHES] Some minor changes to pgbench
Joshua D. Drake [EMAIL PROTECTED] writes:
> * The schema now uses foreign keys to more accurately reflect a
>   financial DDL

Addition of foreign key checking will certainly impact performance
significantly.

> * The history table now has a primary key that uses a serial

Ditto.

> * The respective balance columns have been increased to int8 to deal
>   with larger values

Ditto.

> * Initialization will be done in a new schema/namespace; pgbench will
>   exit if this schema/namespace exists

OK, maybe that doesn't matter.

> * The new DDL should allow both Mammoth Replicator and Slony to be
>   tested using pgbench (at least basic replication)

Erm ... exactly why couldn't you do that before?

pgbench doesn't have all that many things to recommend it, but what it
does have is that it's been a stable testbed across quite a few PG
releases. Arbitrarily whacking around the tested functionality will
destroy that continuity. I fell into this trap before myself ... I have
a local copy of pgbench that produces TPS numbers quite a lot better
than the standard pgbench, against exactly the same server. What's
wrong with that picture?

			regards, tom lane
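Against the classic pgbench tables, the listed changes might look roughly like this (a hedged sketch: the actual patch's DDL is not shown in the thread, and the column layout and `hid` key name are assumptions based on pgbench's standard schema):

```sql
-- Proposed pgbench-style schema: initialization in its own
-- schema/namespace, bigint (int8) balance columns, a serial primary
-- key on history, and foreign keys reflecting a financial DDL.
CREATE SCHEMA pgbench;

CREATE TABLE pgbench.branches (
    bid      int PRIMARY KEY,
    bbalance bigint
);

CREATE TABLE pgbench.tellers (
    tid      int PRIMARY KEY,
    bid      int REFERENCES pgbench.branches,
    tbalance bigint
);

CREATE TABLE pgbench.accounts (
    aid      int PRIMARY KEY,
    bid      int REFERENCES pgbench.branches,
    abalance bigint
);

CREATE TABLE pgbench.history (
    hid      serial PRIMARY KEY,
    tid      int REFERENCES pgbench.tellers,
    bid      int REFERENCES pgbench.branches,
    aid      int REFERENCES pgbench.accounts,
    delta    bigint,
    mtime    timestamp
);
```

Under this sketch every INSERT into history pays for three foreign-key lookups plus a sequence fetch and an extra index maintenance step, which illustrates why the message above expects the changes to impact performance significantly, and why the resulting TPS numbers would not be comparable to the historical pgbench baseline.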