[HACKERS] [patch] jdbc build fix when ./configure is run in separate dir
When ./configure is run outside the source directory (to keep the build files separate from the source files), make fails for the jdbc target because ant is not informed of the configure/build directory location. This patch fixes this, but could you please review it for portability, etc.? For example, I am not sure that `pwd` is the best way to find the configure/build dir. HTH --Tim Larson

Index: Makefile
===
RCS file: /projects/cvsroot/pgsql-server/src/interfaces/jdbc/Makefile,v
retrieving revision 1.39
diff -r1.39 Makefile
27c27,28
< 	$(ANT) -buildfile $(srcdir)/build.xml all
---
> 	$(ANT) -buildfile $(srcdir)/build.xml all \
> 		-Dconfig.directory=`pwd`
30a32
> 		-Dconfig.directory=`pwd` \
41c43,44
< 	$(ANT) -buildfile $(srcdir)/build.xml clean_all
---
> 	$(ANT) -buildfile $(srcdir)/build.xml clean_all \
> 		-Dconfig.directory=`pwd`
44c47,48
< 	$(ANT) -buildfile $(srcdir)/build.xml test
---
> 	$(ANT) -buildfile $(srcdir)/build.xml test \
> 		-Dconfig.directory=`pwd`

Index: build.xml
===
RCS file: /projects/cvsroot/pgsql-server/src/interfaces/jdbc/build.xml,v
retrieving revision 1.38
diff -r1.38 build.xml
21,22c21,22
<   <property name="jardir" value="jars" />
<   <property name="builddir" value="build" />
---
>   <property name="jardir" value="${config.directory}/jars" />
>   <property name="builddir" value="${config.directory}/build" />
26c26
<   <property file="build.properties"/>
---
>   <property file="${config.directory}/build.properties"/>

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])
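On the question of whether `pwd` is the best way to find the build directory: GNU make's builtin $(CURDIR) variable holds the directory make is running in, without a shell-out. A hypothetical sketch of one target written that way (ANT and srcdir are assumed to be defined elsewhere, as in the real Makefile; this is not portable to every make):

```make
# Hypothetical sketch: pass the build directory to ant via a property,
# using GNU make's $(CURDIR) instead of shelling out to `pwd`.
# ANT and srcdir are assumed to be set elsewhere in the Makefile.
all:
	$(ANT) -buildfile $(srcdir)/build.xml all \
		-Dconfig.directory=$(CURDIR)
```

The tradeoff is that $(CURDIR) is GNU-make specific, while `pwd` works with any make that passes recipes to a POSIX shell.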
Re: [HACKERS] What's planned for 7.5?
Christopher Browne wrote: 2. Jan Wieck's work on SLONY-1 Many are keen to see the code, but it's not out yet. And it is not self-evident that it will necessarily be ready to release in conjunction with 7.5 (Not to say it _can't_ happen, but just that we'll see the code when we see it...) It is planned to be implemented in C, so it would presumably be more suitable for inclusion in the main code than eRserv. But it stands as hope, and not certainty.

I've just made one major step backward on that. I discovered that my first thread model was deadlock prone, so I had better throw it away instead of building a lot of code on top of it. What I am currently doing is documenting the Slony-I ERD and the new thread model I have in mind. This will be a document in progress, but I plan to have something readable by the end of this week, or the middle of next week at the latest. I will create a mailing list for Slony-I on gborg so we can start discussing the implementation details.

About the inclusion of a replication solution in the core distribution: as much as I personally would be proud to see this one added, as a CORE member I do not see any of the replication systems as a fit. All of them, and neither Slony-I nor Slony-II will be any different here, have pros and cons; none is the one-size-fits-all magic solution. To place one of the replication projects so high above all the others that it is added to the core distribution, it would have to be really outstanding and general purpose. I think that Slony in the end will be outstanding, but only for what it was designed for.

Jan

--
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                [EMAIL PROTECTED] #
[HACKERS] Getting the results columns before execution
Hi all,

From the OLE DB manual: The consumer can also get information about the columns of the rowset by calling IColumnsInfo::GetColumnInfo, which all providers must support, or IColumnsRowset::GetColumnsRowset, which providers may support, on the command. Before calling GetColumnInfo or GetColumnsRowset, the consumer must set the command text and, if the provider supports command preparation, prepare the command object.

The problem - it appears there is no requirement to execute the command before asking about the rowset returned, only to set the actual command text and prepare it. This is a problem, as merely preparing a command is not enough to get the returned rows' information. I'm wondering whether anyone on this list has an idea how to solve this?

I can start the command in a cursor, and not retrieve information until requested. I can also execute the command with where 0 appended to it. Neither solution seems very practical or good performance-wise.

Shachar
--
Shachar Shemesh Lingnu Open Systems Consulting http://www.lingnu.com/
Re: [HACKERS] VACUUM delay (was Re: What's planned for 7.5?)
Stephen wrote: The vacuum delay patch is not the ideal solution but it worked like a charm on my servers. I really need the vacuum delay patch or a better solution in 7.5. I'm getting millions of requests a month and running VACUUM without the patch makes PostgreSQL useless for many consecutive hours. Not quite the 24/7 system I was hoping for. :-( Unfortunately, it's rather difficult to patch so many machines as my entire system runs on Redhat RPMs. I'm really hoping to see a solution to this VACUUM problem in 7.5. I've been waiting for this fix for over 3 years and now it's almost there. Will this problem get addressed in the not so official TODO list? Thanks and keep up the good work!

Well, I had a few different versions of vacuum delay stuff out as patches, together with ARC and the beginnings of the background writer. Instead of getting some numbers on those, the whole discussion got stuck in differences about how we actually let the background writer tell the kernel to do something ... the whole sync(), fsync(), fdatasync(), fadvise() discussion. I don't have the time to make enough different attempts to find the one that pleases all.

My argument still is that all this IO throttling and IO optimizing is mainly needed for dedicated servers, because I think that if you still run multiple services on one box you're not really in trouble yet. So in the first round a configurable sync() approach would do. So far nobody has even agreed to that.

I currently have better things to do. We do not have a big IO problem, we have other problems, and I spend my time on solving them. If someone wants to pick up the IO throttle problem, I am always here to help, but I will not waste my time making patches nobody even gives a try.

Sorry for the venting, but I needed that out.

Jan
Re: [HACKERS] Getting the results columns before execution
On Monday 19 January 2004 15:13, Shachar Shemesh wrote: The problem - it appears there is no requirement to execute the command before asking about the rowset returned, only to set the actual command and prepare it. This is a problem, as merely preparing a command is not enough to get the returned rows' information. I'm wondering whether anyone on this list has an idea how to solve this? I can start the command in a cursor, and not retrieve information until requested. I can also execute the command with where 0 appended to it. Neither solution seems very practical or good performance-wise.

LIMIT 0 will do? I do it often when I need to know the structure of the result without returning any rows.

Regards!
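The LIMIT 0 trick suggested above looks, with a hypothetical table name, something like this:

```sql
-- Hypothetical example: "orders" is an assumed table name, not from
-- the thread. The query plans and returns the full column list and
-- types in the result-set metadata, but fetches zero rows, so the
-- consumer's metadata calls can be answered cheaply.
SELECT * FROM orders LIMIT 0;
```

The server still has to plan the query, but it scans nothing, which is far cheaper than executing the real statement just to learn its column layout.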
Re: [HACKERS] Getting the results columns before execution
Shachar Shemesh [EMAIL PROTECTED] writes: ... This is a problem, as merely preparing a command is not enough to get the returned rows information. Sure it is, if you are using the V3 protocol (new in 7.4). See the Describe message types. regards, tom lane
Re: [HACKERS] Getting the results columns before execution
Tom Lane wrote: Sure it is, if you are using the V3 protocol (new in 7.4). See the Describe message types.

Are those exposed through the libpq interface? If not, is there a way to use the libpq interface for most operations, and only for the rest to use whatever it is you are suggesting I use (I couldn't locate it in the docs yet)?

Funnily enough, mandating protocol version 3 is not my problem at the moment. I may have to do so for a host of other reasons as well. I guess, if you want to use an earlier version of the database, you will have to make do with the OLE DB to ODBC interface.

Shachar
--
Shachar Shemesh Lingnu Open Systems Consulting http://www.lingnu.com/
[HACKERS] logging enhancement recap
Fellow hackers,

Back in August I submitted a patch that essentially did 2 things:

- explicitly logged the end of a client connection, including the connection's elapsed time, enabled by a config variable called log_session_end, and
- provided for tagging all of a session's log lines using a printf-style format string, recognizing the escape sequences %U = username and %D = databasename, and enabled via a config variable called log_line_format.

This was done after some discussion on the hackers list - see mailing list archives around the beginning of August under the heading "logging stuff" if you are interested. Back then most of the discussion was around the names and formats of the GUC variables. The consensus seemed to be that we should not roll the pid and timestamp variables up into a single variable. In private and public discussion Bruce has now raised this possibility again. However, having reviewed the matter I have again come to the conclusion that this is not a good idea, for 2 reasons:

- syslog already does timestamp and pid logging, so if we rolled these up we'd have to add in extra processing just for the syslog case.
- some lines won't have any other useful info that we can tag (e.g. log lines from the postmaster or the stats collector).

Anything else related to the session that we want to include in the tag could be done by an extremely easy extension to the recognized formats (e.g. remote host addr, remote host name, remote port), without any necessity to add another GUC var.

This patch has unfortunately suffered some bitrot, as I found yesterday when I tried to apply it. This is hardly surprising given the amount of time that has elapsed since it was prepared (which raises the question of whether or not we should branch off the release branch earlier in the process - i.e. around the time feature freeze is declared). I would *really* like to put this all to bed.
The first feature above seems quite uncontroversial, and is to my mind the more important, in that you can't get the info from log analysis tools. The second feature has significant utility, has been requested by several users, and I still think the way I did it back in August is the best way to go, combining backwards compatibility with forwards flexibility and minimal code disturbance, and preventing an explosion in GUC vars.

What is the best way to proceed so this can be wrapped up? Just fix the bitrot and resubmit? Split it into 2? Other?

cheers

andrew
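Concretely, the two GUCs described above might be set like this (a hypothetical sketch based only on the description in the post; the exact variable names, value syntax, and escape set are whatever the patch actually defines):

```ini
# postgresql.conf fragment (hypothetical sketch of the proposed GUCs)
log_session_end = true          # log connection end + elapsed time
log_line_format = '%U %D '      # %U = username, %D = database name
```

With such a prefix, every line a session emits would carry the user and database, which is exactly the information log-analysis tools cannot reconstruct after the fact.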
Re: [HACKERS] Getting the results columns before execution
Shachar Shemesh [EMAIL PROTECTED] writes: Tom Lane wrote: Sure it is, if you are using the V3 protocol (new in 7.4). See the Describe message types. Are those exposed through the libpq interface? No, because libpq doesn't really have any concept of prepared statements. It would probably make sense to add some more API to libpq to allow creation and interrogation of prepared statements, but no one's got around to that yet. regards, tom lane
Re: Incremental Backup (Was: [HACKERS] And ppl complain about *our*
Martin Marques wrote: Mensaje citado por David Garamond [EMAIL PROTECTED]: Marc G. Fournier wrote: From the Firebird FAQ: The first beta was released on January 29, 2003. We are hoping to be close to a full release some time around Easter 2003. They are at RC8 right now ... running a *wee* bit behind schedule :) Yes, they're pretty late. Last time I read, the only major issue preventing their final release is around the installer. The 1.5 codebase itself has been stabilized for quite a while. Practically all work is now done on the 2.0 branch/HEAD. They have several goodies in store for the 2.0 release (e.g. incremental backup). Anyone working on incremental backups for PostgreSQL? I kind of miss them from my Informix times (hey, I think it's the only thing I really miss :-) ).

Sort of. A kind of delayed standby server, and backup with roll-forward stuff, will be a byproduct of the Slony-I replication system.

Jan
Re: [HACKERS] Strange COPY from problem (resent)
It's not ideal, but can you dump those tables using INSERT commands and load them into your database?

Robert Treat

On Wednesday 14 January 2004 13:53, [EMAIL PROTECTED] wrote: Hi all, I resend this mail because I didn't have any answer; I still didn't find any solution, the same line is ALWAYS failing although I see no error. Of course, I can't post the dump 'cause it's over 200 Megs, but if someone wants to make tests (you need to be postgres), I can arrange to put it on a private ftp; I actually have 2 databases with the problem; running 7.3.4 pg_dump or 7.4.1 doesn't change the problem. Can someone help me please...

-- Olivier PRENANT Tel: +33-5-61-50-97-00 (Work) 6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax) 31190 AUTERIVE +33-6-07-63-80-64 (GSM) FRANCE Email: [EMAIL PROTECTED]
Make your life a dream, make your dream a reality. (St Exupery)

-- Forwarded message --
Date: Fri, 9 Jan 2004 13:54:22 +0100 (MET)
From: [EMAIL PROTECTED]
To: pgsql-hackers list [EMAIL PROTECTED]
Subject: Strange COPY from problem

Hi everyone and Happy New Year (this is my first post of 2004). I wanted to upgrade my system from 7.3.4 to 7.4 and fell on this: one of the databases I host doesn't load correctly. Let me explain: I first pg_dumpall (with the 7.4.1 version) using the 7.3.4 port; that works fine. Then psql -f all.sql template1 on the 7.4.1 version to recreate everything. 2 COPYs don't work. Those tables both contain a large text column with text containing lots of \r\n sequences. What's strange is that copy from stdin obviously doesn't work, but if I copy table to '/tmp/xxx.dat' in 7.3.4 and copy table from 'xxx.dat' in 7.4.1, it works fine... Could there be something wrong with from stdin? TIA for your help...

-- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [HACKERS] Strange COPY from problem (resent)
The problem has been solved by Tom Lane and was NOT a postgresql problem... I'm ashamed to say so :-)

Regards

On Mon, 19 Jan 2004, Robert Treat wrote: It's not ideal, but can you dump those tables using INSERT commands and load them into your database?

-- Olivier PRENANT Tel: +33-5-61-50-97-00 (Work) 6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax) 31190 AUTERIVE +33-6-07-63-80-64 (GSM) FRANCE Email: [EMAIL PROTECTED]
Re: [HACKERS] What's planned for 7.5?
On Mon, Jan 19, 2004 at 08:12:28AM -0500, Jan Wieck wrote: and cons, none is the one size that fits all magic solution. To select

Does anyone really believe that there can be a one-size-fits-all solution? Heck, even Oracle and IBM offer a couple of different systems, depending on what you need. (That also suggests that any replication system need not always be shipped with the basic distribution, but could instead be integrated into a larger postgresql_plus_enterprise_features.tgz or something like that.)

A
--
Andrew Sullivan | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what you told them to. That actually seems sort of quaint now. --J.D. Baldwin
Re: [HACKERS] set search_path and pg_dumpall
Robert Treat [EMAIL PROTECTED] writes: I can't be the only one to foresee frustration from users who typo the set search_path statement and then can't figure out why their tables aren't showing up... can we emit a warning that not all of the schemas in the search path were found?

Since no one else commented, I've followed your suggestion. As of CVS tip, you get a NOTICE not ERROR in this case:

regression=# set search_path = public,z;
ERROR:  schema "z" does not exist
regression=# create database foo;
CREATE DATABASE
regression=# alter database foo set search_path = public,z;
NOTICE:  schema "z" does not exist
ALTER DATABASE
regression=# select datconfig from pg_database where datname = 'foo';
         datconfig
---------------------------
 {"search_path=public, z"}
(1 row)

If a bogus entry is present in the established search_path value, it's just ignored (this was true already to handle $user):

regression=# \c foo
You are now connected to database "foo".
foo=# show search_path;
 search_path
-------------
 public, z
(1 row)

foo=# select current_schemas(true);
   current_schemas
---------------------
 {pg_catalog,public}
(1 row)

			regards, tom lane
[HACKERS] Old binary packages.
I am looking at the possibility of cleaning up the binary tree on the ftp site, and was wondering what the group thought about purging old binaries. What I was thinking would be to remove all but the last minor release of each major version. Thus, I would remove 7.4, but leave 7.4.1. The space taken by binaries is significant (about 1GB at this point). Since we are keeping all source releases (although I would question that, since we use CVS), keeping all the binaries around is just a space waster, IMHO. Comments?
--
Lamar Owen Director of Information Technology Pisgah Astronomical Research Institute 1 PARI Drive Rosman, NC 28772 (828)862-5554 www.pari.edu
[HACKERS] [7.4] permissions problem with pl/pgsql function
Just had a client point this out to me, and am wondering if it's supposed to happen:

420_test= select dropgeometrycolumn('420_test','lroadline61','roads61_geom');
ERROR:  permission denied for relation pg_attribute
CONTEXT:  PL/pgSQL function dropgeometrycolumn line 19 at execute statement

the database was created as: CREATE DATABASE db WITH OWNER = owner, and I'm connected to the database as the owner ... shouldn't the system tables also be owned by the owner?

Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
Re: [HACKERS] Old binary packages.
Lamar Owen wrote: I am looking at the possibility of cleaning up the binary tree on the ftp site, and was wondering what the group thought about purging old binaries. What I was thinking would be to remove all but the last minor release of each major version. Thus, I would remove 7.4, but leave 7.4.1. The space taken by binaries is significant (about 1GB at this point). Since we are keeping all source releases (although I would question that, since we use CVS), keeping all the binaries around is just a space waster, IMHO.

I would keep 7.3.5, 7.4, 7.4.1 (as 7.4 is the current release) and then do as you suggest for the older binaries.

J
--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming, shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL
Re: [HACKERS] [7.4] permissions problem with pl/pgsql function
Marc G. Fournier [EMAIL PROTECTED] writes: Just had a client point this out to me, and am wondering if its supposed to happen: 420_test= select dropgeometrycolumn('420_test','lroadline61','roads61_geom'); ERROR: permission denied for relation pg_attribute CONTEXT: PL/pgSQL function dropgeometrycolumn line 19 at execute statement Can't tell much about this without seeing the contents of the function ... in particular, what SQL command is it trying to execute when it chokes? regards, tom lane
Re: [HACKERS] Old binary packages.
Lamar Owen [EMAIL PROTECTED] writes: I am looking at the possibility of cleaning up the binary tree on the ftp site, and was wondering what the group thought about purging old binaries. What I was thinking would be to remove all but the last minor release of each major version. Thus, I would remove 7.4, but leave 7.4.1. I concur with Josh Drake's thought --- leave releases that are less than, perhaps, six months old, even if they have been superseded in their series. Superseded releases that are older than that could be dispensed with. regards, tom lane
[HACKERS] SPI_prepare and error recovery
I'm using SPI_prepare to do some dynamic SQL stuff in the backend. Some errors result in a call to the elog routines with ERROR level, which in turn causes log printout and a longjmp. Since I want to trap those errors and try an alternative I tried the following:

    sigjmp_buf saveRestart;

    memcpy(saveRestart, Warn_restart, sizeof(saveRestart));
    if (sigsetjmp(Warn_restart, 1) != 0)
    {
        memcpy(Warn_restart, saveRestart, sizeof(Warn_restart));
        /* Handle error here */
        return NULL;
    }
    void *plan = SPI_prepare(...);
    memcpy(Warn_restart, saveRestart, sizeof(Warn_restart));
    return plan;

My question is, at the point of /* Handle error here */, how do I get hold of the error information? Judging from the code in elog.c, this info is sent to the server and/or client log and then lost before the longjmp is made. Is that the way it's supposed to be?

Regards,
Thomas Hallgren
Re: [HACKERS] SPI_prepare and error recovery
Thomas Hallgren [EMAIL PROTECTED] writes: My question is, at the point of /* Handle error here */, how do I get hold of the error information? You don't. The above coding technique is entirely unsafe anyway, because it relies on the assumption that the system is still in a good state to continue the transaction, which is in general not true. If we had nested-transaction support you could arrange for an inner transaction around the thing you want to retry; but we don't, and this problem of cleaning up after an error is one of the biggest reasons why not. regards, tom lane
Re: [HACKERS] [7.4] permissions problem with pl/pgsql function
420_test= select dropgeometrycolumn('420_test','lroadline61','roads61_geom');
ERROR:  permission denied for relation pg_attribute
CONTEXT:  PL/pgSQL function dropgeometrycolumn line 19 at execute statement

the database was created as: CREATE DATABASE db WITH OWNER = owner and I'm connected to the database as the owner ... shouldn't the system tables also be owned by the owner?

No, you have to have the usecatupd field set to true in your pg_shadow row to be able to modify the catalogs. This is automatically assigned to a superuser, not the database owner. (Otherwise it's trivial to munt someone else's database by deleting from pg_database or pg_shadow...)

Chris
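One way to check the distinction described above is to inspect pg_shadow as a superuser (a hedged sketch; 'owner' is the hypothetical role name from the example, and usecatupd is the column that gates direct catalog updates in the 7.x catalogs, independently of database ownership):

```sql
-- Sketch: see whether a given user may update system catalogs.
-- usecatupd = true is set for superusers, not for database owners.
SELECT usename, usesuper, usecatupd
  FROM pg_shadow
 WHERE usename = 'owner';
```

A plain database owner would show usesuper = f and usecatupd = f, which is why the function's direct UPDATE of pg_attribute is refused.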
Re: [HACKERS] Old binary packages.
Lamar Owen wrote: I am looking at the possibility of cleaning up the binary tree on the ftp site, and was wondering what the group thought about purging old binaries. What I was thinking would be to remove all but the last minor release of each major version. Thus, I would remove 7.4, but leave 7.4.1. The space taken by binaries is significant (about 1GB at this point). Since we are keeping all source releases (although I would question that, since we use CVS), keeping all the binaries around is just a space waster, IMHO. Unless you know that someone is actually running out of space, I think it would be better to keep past releases around. I've needed them more often than you would think.
Re: [HACKERS] SPI_prepare and error recovery
As you already know, I'm working on a backend Java mapping. I'm using the SPI routines to implement java.sql interfaces such as Connection, PreparedStatement, and ResultSet, that will utilize the current transaction. Using them, you work with a try/catch/finally metaphor and all exceptions, short of OutOfMemory errors and the like, are trappable. To just bypass the catch/finally using a longjmp is a somewhat serious violation of the rules.

If the statement indeed does invalidate the transaction, then I can understand that further database access during that transaction would be futile. But what if it doesn't (a prepare really shouldn't)? Or what if I want some error recovery that doesn't access the database at all? Perhaps I just want to send a message on a socket, write something to a file, or whatever, before the error is propagated.

But OK, the postgresql SPI and backend code was not written with try/catch etc. in mind (nor multithreading :-) ) so I guess I have to live with the limitations and try to make the best of it. Meanwhile, perhaps someone else should consider a solution where:

a) It is possible to trap an error and retrieve the cause of it (analogous to try/catch).
b) The catching of an error inhibits the propagation of that error to the client.
c) An error that was caught can be reactivated (re-thrown).
d) Errors that invalidate the current transaction set a flag that makes further calls impossible (generates a new error) until the transaction has ended.

IMO, that should be doable without nested transactions.

Regards,
Thomas Hallgren
Re: [HACKERS] VACUUM delay (was Re: What's planned for 7.5?)
People,

I don't have the time to make enough different attempts to find the one that pleases all. My argument still is that all this IO throttling and IO optimizing is mainly needed for dedicated servers, because I think that if you still run multiple services on one box you're not really in trouble yet. So in the first round a configurable sync() approach would do. So far nobody even agreed to that.

I won't claim expertise on the different sync algorithms. However, I do need to speak up in support of Jan's assertion; the machines most likely to suffer I/O choke are, or should be, dedicated machines. If someone's running 6 major server applications on a server with a 25GB database and a single RAID-5 array, then they've got to expect some serious performance issues.

We currently have a lot of users running large databases on dedicated servers, though, and they can't vacuum their databases during working hours because the vacuum ties up the I/O for 10 minutes or more. It's a bad situation and makes us look very bad in comparison to the proprietary databases, which have largely solved this problem. Maybe the sync() approach isn't perfect, but it's certainly better than not doing anything, particularly if it can be turned off at startup time.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
Re: [HACKERS] VACUUM delay (was Re: What's planned for 7.5?)
On Mon, 2004-01-19 at 08:37, Jan Wieck wrote: but I will not waste my time with making patches nobody even gives a try.

I downloaded and tested your patches. I just didn't get results that were put together well enough to present to the group. I hope this doesn't fall by the wayside; it is, IMHO, one of the critical problems that needs to be solved.

Sorry for the venting, but I needed that out.

I understand. I'm sorry there wasn't more feedback as a result of your work.
Re: [HACKERS] What's planned for 7.5?
Andrew Sullivan wrote: On Mon, Jan 19, 2004 at 08:12:28AM -0500, Jan Wieck wrote: and cons, none is the one size that fits all magic solution. To select Does anyone really believe that there can be a one-size-fits-all solution? Heck, even Oracle and IBM offer a couple of different systems, depending on what you need. (That also suggests that any replication system need not always be shipped with the basic distribution, but could instead be integrated into a larger postgresql_plus_enterprise_features.tgz or something like that.)

I don't, but consider Linux, which can be configured to run on devices as small as a wristwatch and as large as the big irons...

-- dave
[HACKERS] [Fwd: plpgsql and booleans?]
Hi guys,

When writing a PL/pgSQL trigger function, how do you handle the case:

    EXECUTE ''UPDATE test_table SET test_col '' || quote_literal(NEW.test_col2) || '';'';

where test_col and test_col2 are boolean fields? The case above gives:

    ERROR: function quote_literal(boolean) does not exist

And without the quote_literal() it gives:

    ERROR: operator does not exist: text || boolean

Is there supposed to be a quote_literal() for booleans?

Chris
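One workaround (a hedged sketch reusing the table and column names from the post; the assignment operator is assumed) is to cast the boolean to text before quoting, so the existing quote_literal(text) applies:

```sql
-- Sketch: NEW.test_col2::text turns the boolean into text, which
-- quote_literal(text) accepts; there is no quote_literal(boolean).
-- Doubled quotes follow the 7.x single-quoted function-body style.
EXECUTE ''UPDATE test_table SET test_col = ''
        || quote_literal(NEW.test_col2::text) || '';'';
```

The resulting statement assigns a quoted 'true'/'false' literal, which the boolean column's input conversion accepts.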
[HACKERS] Allow backend to output result sets in XML
hello, i would like to begin work on the TODO item Allow backend to output result sets in XML i would like to know if anyone has already begun work on this item. if someone has already started work, i would love to help! thanks much in advance, b __ Do you Yahoo!? Yahoo! Hotjobs: Enter the Signing Bonus Sweepstakes http://hotjobs.sweepstakes.yahoo.com/signingbonus