Re: [HACKERS] Regression test failure date.
Bruce Momjian [EMAIL PROTECTED] writes:
> I have only been running nightly parallel regression runs since June 27, so it is possible that the parallel regression was broken in February, fixed in May, then broken some time after that.

Any further progress on this? My best theory at the moment is that we have a problem with relcache entry creation failing if it's interrupted by an SI inval message at just the right time. I don't much want to grovel through six months' worth of changelog entries looking for candidate mistakes, though.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Make clean fails
>> Yep, that solved both the make clean and the coredump problems. Is that piece of information in the developers' FAQ anywhere?
> It's in the "how to use CVS" instructions ...

So it is. I probably read that before I got CVS working here, and it isn't mentioned (nor is that section of the docs referenced) in the CVS section of the Developer's FAQ. I found both sections insufficient to get CVS working here, and since I am thinking about using it for another project I picked up a copy of 'Essential CVS' to fill in some of the gaps in my knowledge. Just part of the baptism of fire for a newbie, I guess. :-)

-- Mike Nolan
Re: [HACKERS] Make clean fails
On Mon, 28 Jul 2003 04:27 pm, [EMAIL PROTECTED] wrote:
> Just part of the baptism of fire for a newbie, I guess. :-)

I've found the learning curve pretty steep too. Is it worth putting together some of these 'gotchas' into a neophyte-developer FAQ?

As a side note: has anyone else noticed that developer.postgresql.org is displaying an Apache test page? I assume this might be an indicator of work in progress.

Regards, Philip Yarra.
[HACKERS] developer.postgresql.org is broken
It's showing the Apache test page.

Chris
[HACKERS] Equivalent of PQresultErrorField for PGconn?
I seem to be missing an equivalent of PQresultErrorField() that operates on a PGconn object (like PQerrorMessage() does). This would be useful for detecting errors that occur outside of statement execution. (One particular case I'm looking at is the message "fe_sendauth: no password supplied", which clients look at to detect whether they need to send a password.)

-- Peter Eisentraut [EMAIL PROTECTED]
[HACKERS] Some macros for error field codes
Should we create some user-friendly macros for the error field codes? I have a feeling that PGXYZ_SQLSTATE may make for better code than just 'C'.

-- Peter Eisentraut [EMAIL PROTECTED]
[HACKERS] SQLSTATEs for warnings
The SQL standard states: "An implementation-defined completion condition shall be indicated by returning an implementation-defined subclass in conjunction with one of the classes successful completion, warning, or no data." In other words, an info, notice, or warning must have class 00, 01, or 02. However, I noticed several warnings that use codes that are intended for errors, for example in portalcmds.c and xact.c.

-- Peter Eisentraut [EMAIL PROTECTED]
[HACKERS] Passing server_encoding to the client is not future-proof
Has anyone thought about what will happen to the server_encoding parameter when the character set/encoding becomes settable for individual columns and the concept of a global server encoding goes away? What will happen to clients that make use of this parameter?

-- Peter Eisentraut [EMAIL PROTECTED]
[HACKERS] is_superuser parameter creates inconsistencies
Presumably, the is_superuser parameter was intended to make the updating of psql's prompt more accurate when SET SESSION AUTHORIZATION is used. However, if the prompt is customized to include the user name (%n), then the prompt changes to reflect the real superuser status but does not change the user name. I guess we need to pass session_user as well.

-- Peter Eisentraut [EMAIL PROTECTED]
Re: [HACKERS] Equivalent of PQresultErrorField for PGconn?
Peter Eisentraut [EMAIL PROTECTED] writes:
> I seem to be missing an equivalent of PQresultErrorField() that operates on a PGconn object (like PQerrorMessage() does). This would be useful to detect errors that occur outside of a statement execution.

Yeah. I thought about that, but the infrastructure is not there: libpq's internal error reports would have to be restructured to deliver multiple fields, in the same way as we have done in the backend. There was not time to make it happen for 7.4. Looking at the spec's list of SQLSTATE codes, it seems clear that they intend SQLSTATE to cover client-library-detected errors as well as server-detected ones. So it would make sense to tackle this project for 7.5.

regards, tom lane
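The restructuring Tom describes — connection-level errors stored as multiple tagged fields, the way PGresult already does — could be sketched roughly as follows. This is a hypothetical illustration, not the libpq implementation; ConnErrorData, conn_error_set, and conn_error_field are invented names.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch only: store connection-level errors as a set of
 * tagged fields (like PGresult does) so that a PQconnErrorField()-style
 * accessor becomes possible.  None of these names exist in libpq. */
#define CONN_MAX_ERR_FIELDS 8
#define CONN_ERR_VALUE_LEN  256

typedef struct
{
    char fieldcode;                     /* 'M' message, 'C' SQLSTATE, ... */
    char value[CONN_ERR_VALUE_LEN];
} ConnErrorField;

typedef struct
{
    ConnErrorField fields[CONN_MAX_ERR_FIELDS];
    int nfields;
} ConnErrorData;

/* record one tagged field of the current connection-level error */
static void
conn_error_set(ConnErrorData *err, char code, const char *value)
{
    if (err->nfields < CONN_MAX_ERR_FIELDS)
    {
        err->fields[err->nfields].fieldcode = code;
        strncpy(err->fields[err->nfields].value, value, CONN_ERR_VALUE_LEN - 1);
        err->fields[err->nfields].value[CONN_ERR_VALUE_LEN - 1] = '\0';
        err->nfields++;
    }
}

/* look up a field by its tag; NULL if the field was not reported */
static const char *
conn_error_field(const ConnErrorData *err, char code)
{
    for (int i = 0; i < err->nfields; i++)
        if (err->fields[i].fieldcode == code)
            return err->fields[i].value;
    return NULL;
}
```

A client could then ask for just the SQLSTATE of a connection failure instead of string-matching the whole message.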
Re: [HACKERS] Some macros for error field codes
Peter Eisentraut [EMAIL PROTECTED] writes:
> Should we create some user-friendly macros for the error field codes? I have a feeling that PGXYZ_SQLSTATE may make for better code than just 'C'.

I just pushed the ERRCODE_ macros out into their own file. How about placing that file where client apps can include it directly? Note though that I think client apps will want to test "is this code in this class" about as often as they'll want to check for specific codes. We might need to provide some additional infrastructure to handle that.

regards, tom lane
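The "is this code in this class" test Tom mentions is cheap, since a SQLSTATE class is simply the first two characters of the five-character code. A minimal sketch of such a helper (sqlstate_in_class is an invented name, not an existing libpq function):

```c
#include <string.h>

/* Minimal sketch (invented name): a SQLSTATE class is the first two
 * characters of the five-character code, so "is this code in this
 * class" is a two-character prefix comparison. */
static int
sqlstate_in_class(const char *sqlstate, const char *class_code)
{
    return sqlstate != NULL && strncmp(sqlstate, class_code, 2) == 0;
}
```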
Re: [HACKERS] OSF build fixed
On Wed, 16 Jul 2003 03:56 am, Tom Lane wrote:
>> osf-template.patch: adds pthread support for OSF; tested on OSF (uname -a: OSF1 hostname V4.0 1229 alpha)
> Applied; it would be a good idea to get some more testing of it, but that's what beta is for ...

Yes, I only have access to one version of Tru64, so all I can assert is that it works for me, and that only for a fairly simple test case so far. I'd be much more comfortable if other OSF users could test too. I can supply the simple ECPG app I've been using.

Regards, Philip. [back from holidays and catching up on a staggering volume of PostgreSQL mail]
Re: [HACKERS] Some macros for error field codes
Tom Lane writes:
>> Should we create some user-friendly macros for the error field codes? I have a feeling that PGXYZ_SQLSTATE may make for better code than just 'C'.
> I just pushed the ERRCODE_ macros out into their own file. How about placing that file where client apps can include it directly?

I was actually talking about the single-letter codes that are used on the protocol layer to tag the different fields of an error or notice result. That is, instead of

    PQresultErrorField(result, 'M')

I'd rather use something like

    PQresultErrorField(result, PGEF_MESSAGE_TEXT)

I'm not actually eager to push the wordy SQLSTATE macros out to the client, although some might argue for that.

-- Peter Eisentraut [EMAIL PROTECTED]
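A sketch of what such friendly names might look like. PGEF_* follows Peter's hypothetical prefix (no such macros exist at this point); the letter values are the v3 protocol's error-field tags.

```c
/* Hypothetical friendly names (PGEF_* is an invented prefix) for the
 * single-letter error/notice field tags of the v3 protocol. */
#define PGEF_SEVERITY            'S'
#define PGEF_SQLSTATE            'C'
#define PGEF_MESSAGE_TEXT        'M'
#define PGEF_MESSAGE_DETAIL      'D'
#define PGEF_MESSAGE_HINT        'H'
#define PGEF_STATEMENT_POSITION  'P'
```

With these, `PQresultErrorField(result, PGEF_MESSAGE_TEXT)` compiles down to exactly the `'M'` call shown above.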
[HACKERS] Doubt w.r.t vacuum
Hi,

I was just wondering about the difference between vacuum full and vacuum analyze. Can somebody enlighten me?

1. IIRC vacuum recovers/reuses dead tuples generated by UPDATE but cannot do so for DELETE. Why?

2. Vacuum full locks the entire table; would it be possible for it to lock one page at a time and deal with that? It would make vacuum full non-blocking, at the cost of letting it run longer. Or is it that the defragmentation algorithm needs more than a page? Just a thought.

Bye, Shridhar
[HACKERS] Warning for undefined cursor
In backend/commands/portalcmds.c we have:

    /* FIXME: shouldn't this be an ERROR? */
    ereport(WARNING,
            (errcode(ERRCODE_UNDEFINED_CURSOR),
             errmsg("portal \"%s\" does not exist", stmt->portalname)));

The effect of this is that you can fetch from an undefined cursor and (successfully) get zero rows plus this warning. I propose that we change this to an error. Does anyone see a reason against this, except possibly bugward compatibility?

-- Peter Eisentraut [EMAIL PROTECTED]
[HACKERS] Assignment scheme for implementation-defined error codes?
I'm currently fixing up ecpg for the new error codes. (ecpg was doing string comparisons to detect certain failure conditions, which no longer works, so this is a must-fix.) Many of the failure conditions that ecpg detects explicitly can be mapped to SQLSTATE codes that are defined in the standard or have been assigned for use by the backend. Some codes, however, will end up being specific to ecpg.

I'm wondering what kind of scheme we should use to allow clients to reserve some SQLSTATE codes for their own use. The errors I'm currently looking at can be thought of as internal errors, so should I be using the class XX, or maybe XY as "internal error on the client side", or maybe YE as "internal error on the client side in ecpg" (so YL could be libpq, YO the ODBC driver, etc.)? What about creating client-specific subclasses of existing classes?

Is the DB2 SQLSTATE reference, where some codes were apparently taken from, available somewhere? (A web search doesn't show anything useful.)

-- Peter Eisentraut [EMAIL PROTECTED]
Re: [HACKERS] developer.postgresql.org is broken
Already fixed.

On Mon, 28 Jul 2003, Christopher Kings-Lynne wrote:
> It's showing the apache page.

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org
Re: [HACKERS] Doubt w.r.t vacuum
Shridhar Daithankar [EMAIL PROTECTED] writes:
> 1. IIRC vacuum recovers/reuses dead tuples generated from update but can not do so for delete? Why?

YDNRC.

> 2. Vacuum full locks entire table, is it possible that it locks a page at a time and deal with it. It will make vacuum full non-blocking at the cost of letting it run for a longer time. Or is it that the defragmentation algorithm needs more than a page?

This I don't know, but I imagine that if what you suggest were easy to do it would have been done, and there would have been no need for two different kinds of VACUUM.

-DOUG
Re: [HACKERS] Doubt w.r.t vacuum
On 28 Jul 2003 at 9:11, Doug McNaught wrote:
>> 1. IIRC vacuum recovers/reuses dead tuples generated from update but can not do so for delete? Why?
> YDNRC.

You did not read... C for what? Code?

>> 2. Vacuum full locks entire table, is it possible that it locks a page at a time and deal with it. It will make vacuum full non-blocking at the cost of letting it run for a longer time. Or is it that the defragmentation algorithm needs more than a page?
> This I don't know, but I imagine that if what you suggest was easy to do it would have been done, and there would have been no need for two different kinds of VACUUM.

I went through the code, although very briefly, and I can imagine that code being dependent upon tons of other things. I didn't understand everything, so I left it as it is.

Bye, Shridhar
Re: [HACKERS] Assignment scheme for implementation-defined error codes?
Peter Eisentraut [EMAIL PROTECTED] writes:
> ... I'm wondering what kind of scheme we should use to allow clients to reserve some SQLSTATE codes for their own use. The errors I'm currently looking at can be thought of as internal errors, so should I be using the class XX, or maybe XY as "internal error on the client side", or maybe YE as "internal error on the client side in ecpg" (so YL could be libpq, YO the ODBC driver, etc.)?

It seems reasonable to reserve Y for client-side internal errors, with, as you suggest, the second character used to identify a particular client library. Note also that there are several standard SQLSTATEs that appear to be intended for client-side use, such as ERRCODE_SQLSERVER_REJECTED_ESTABLISHMENT_OF_SQLCONNECTION.

> What about creating client-specific subclasses of existing classes?

Again we could go with reserving the Y prefix, although the namespace is getting rather tight there --- if we use the next character to distinguish different clients, there's only one character left for the individual errors. Perhaps a dozen or so per client will be enough.

> Is the DB2 SQLSTATE reference, where some codes were apparently taken from, available somewhere? (A web search doesn't show anything useful.)

You need to google for the DB2 Message Reference; in recent editions the SQLSTATE stuff is in Volume 2, Chapter 3. There are complete sets of DB2 manuals available as PDFs at www.ibm.com. I don't have a record of the exact URL I found it at, but I've been using a PDF of the message reference manual for DB2 Version 8.

regards, tom lane
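The Y-prefix scheme discussed here could look like the following in code. The specific code values (YE000 and so on) are hypothetical — nothing has been assigned yet — and the helper is an invented illustration.

```c
#include <stddef.h>

/* Hypothetical code assignments under the proposed reserved 'Y' class:
 * the second character names the client library, the remaining three
 * the individual condition.  None of these values are assigned anywhere. */
#define ECPG_SQLSTATE_INTERNAL_ERROR   "YE000"   /* ecpg */
#define LIBPQ_SQLSTATE_INTERNAL_ERROR  "YL000"   /* libpq */
#define ODBC_SQLSTATE_INTERNAL_ERROR   "YO000"   /* ODBC driver */

/* If this is a client-side (class 'Y') code, return the library tag
 * ('E', 'L', 'O', ...); otherwise return '\0'. */
static char
client_sqlstate_library(const char *sqlstate)
{
    return (sqlstate != NULL && sqlstate[0] == 'Y') ? sqlstate[1] : '\0';
}
```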
[HACKERS] Error code mixup?
My copy of SQL99 assigns:

    most specific type mismatch           2200G
    null value, no indicator parameter    22002

but elog.h has it set up the other way around. Can someone clear this up for me?

-- Peter Eisentraut [EMAIL PROTECTED]
Re: [HACKERS] Doubt w.r.t vacuum
On Mon, Jul 28, 2003 at 02:29:36PM +0530, Shridhar Daithankar wrote:
> I was just wondering over it. This is for difference between vacuum full and vacuum analyze. Can somebody enlighten,

Actually, the different concepts are lazy vacuum (plain VACUUM command, with or without ANALYZE) and full vacuum (VACUUM FULL command, with or without ANALYZE).

Lazy vacuum works one page at a time, so it doesn't need to lock the entire table. It is able to recover empty space from both updated and deleted tuples -- in fact, they look the same to it. All free space on each page is defragmented. Pages with free space are recorded in the Free Space Map. The FSM has limited space available, so only the pages with the most free space will be recorded.

Vacuum full locks the entire table and moves tuples between pages. It leaves all pages full of tuples (except, obviously, the last one), so it doesn't need to record them in the FSM. Pages that are empty at the end of the table are truncated. This was the only version of VACUUM present in releases previous to 7.2.

If I got something wrong, I'm sure someone will correct me.

-- Alvaro Herrera (alvherre[a]dcc.uchile.cl)
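The "only the pages with the most free space will be recorded" behavior can be illustrated with a toy model. The real FSM data structures are entirely different; this sketch (fsm_record is an invented name) only shows the selection idea.

```c
#include <stdlib.h>

/* Toy model of the FSM's "keep only the biggest" behavior: given the
 * free-space counts of npages pages and room for fsm_slots entries,
 * record only the largest amounts.  Illustrative only -- the real FSM
 * data structures are entirely different. */
static int
cmp_desc(const void *a, const void *b)
{
    return *(const int *) b - *(const int *) a;
}

static int
fsm_record(const int *free_bytes, int npages, int fsm_slots, int *out)
{
    int *sorted = malloc(npages * sizeof(int));
    int i, kept;

    for (i = 0; i < npages; i++)
        sorted[i] = free_bytes[i];
    qsort(sorted, npages, sizeof(int), cmp_desc);

    kept = (npages < fsm_slots) ? npages : fsm_slots;
    for (i = 0; i < kept; i++)
        out[i] = sorted[i];

    free(sorted);
    return kept;
}
```

Pages with only a few free bytes simply fall off the end of the list when the map is full.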
Re: [HACKERS] Regression test failure date.
I am testing this today. I found that 2003-03-03 does not generate a failure in 20 tests, so I am moving forward to April/May.

---------------------------------------------------------------------------

Robert Creager wrote:
> I will stand by the fact that I cannot generate failures from 2003-02-15 (200+ runs), and I can from 2003-02-16. Just to make sure I didn't screw up the cvs usage, I'll try again tonight if I get the chance and re-download and re-test these two days. I can set up a script that will step through weekly dates starting from 'now' and see if the 02-16 problem might have been fixed and then re-introduced, if you like.
>
> 2003-02-16 fails 6/50:
>     vacuum       failed 1 time
>     misc         failed 3 times
>     sanity_check failed 3 times
>     inherit      failed 1 time
>     triggers     failed 4 times
>
> On Mon, 28 Jul 2003 02:14:32 -0400, Tom Lane [EMAIL PROTECTED] said something like:
>> Bruce Momjian [EMAIL PROTECTED] writes: I have only been running nightly parallel regression runs since June 27, so it is possible that the parallel regression was broken in February, fixed in May, then broken some time after that.
>>
>> Any further progress on this? My best theory at the moment is that we have a problem with relcache entry creation failing if it's interrupted by an SI inval message at just the right time. I don't much want to grovel through six months' worth of changelog entries looking for candidate mistakes, though.
>>
>> regards, tom lane
>
> Cheers, Rob
-- Bruce Momjian | http://candle.pha.pa.us
Re: [HACKERS] Warning for undefined cursor
Peter Eisentraut [EMAIL PROTECTED] writes:
> In backend/commands/portalcmds.c we have /* FIXME: shouldn't this be an ERROR? */ ... The effect of this is that you can fetch from an undefined cursor and (successfully) get zero rows plus this warning. I propose that we change this to an error. Does anyone see a reason against this, except possibly bugward compatibility?

Backwards compatibility is the reason why it isn't an error now. -hackers is not really the right forum to find out whether anyone is depending on that behavior. I'd suggest you shop the proposal around on -general or -sql and see if you get any complaints. (I think there are a couple of other cursor-related warnings that really oughta be errors, too; might as well finger them all while you are at it.)

regards, tom lane
Re: [HACKERS] Doubt w.r.t vacuum
Shridhar Daithankar [EMAIL PROTECTED] writes:
> 1. IIRC vacuum recovers/reuses dead tuples generated from update but can not do so for delete? Why?

This is not correct.

> 2. Vacuum full locks entire table, is it possible that it locks a page at a time and deal with it.

No. You can't compact the table by moving tuples without locking the entire table. (For example, if we move a tuple from the end down to an earlier page, it's quite possible that a concurrently executing sequential scan would miss that tuple entirely. Another problem is that we cannot truncate the table to fewer pages without locking out writers; else we may decide that there are N empty pages, then execute ftruncate() just after someone has put a new tuple into one of those pages.) Non-full vacuum is designed specifically to do what can be done without an exclusive lock.

regards, tom lane
Re: [HACKERS] Some macros for error field codes
Peter Eisentraut [EMAIL PROTECTED] writes:
>> Should we create some user-friendly macros for the error field codes?
> I was actually talking of the single-letter codes that are used on the protocol layer to tag the different fields of an error or notice result.

Oh, those things. Sure, if you like, though it might be just namespace clutter.

regards, tom lane
Re: [HACKERS] Doubt w.r.t vacuum
Alvaro Herrera [EMAIL PROTECTED] writes:
> Vacuum full locks the entire table and moves tuples between pages. It leaves all pages full of tuples (except, obviously, the last one), so it doesn't need to record them in the FSM.

This is overoptimistic :-(. VACUUM FULL cannot necessarily compact the table completely, and so it will record free space in the FSM (if there is any worth recording). An example situation is that page 1000 may contain a very large tuple which will not fit on any earlier page. Once VACUUM FULL discovers this fact, it will not bother shuffling tuples on earlier pages, since it's not going to be able to truncate the table to less than 1000 pages. There may nonetheless be enough space available in earlier pages to store thousands of smaller-sized tuples.

regards, tom lane
Re: [HACKERS] Error code mixup?
Peter Eisentraut [EMAIL PROTECTED] writes:
> My copy of SQL99 assigns "most specific type mismatch" 2200G and "null value, no indicator parameter" 22002, but elog.h has it set up the other way around. Can someone clear this up for me?

Hoo, that's interesting. I believe that I actually built the original version of errcodes.h by editing the list of codes in the Ada-binding part of the spec (part 2, 13.4, rule 2e), which includes, in my draft copy:

    DATA_EXCEPTION_NULL_VALUE_NO_INDICATOR_PARAMETER :
        constant SQLSTATE_TYPE := "2200G";
    DATA_EXCEPTION_MOST_SPECIFIC_TYPE_MISMATCH :
        constant SQLSTATE_TYPE := "22002";

But I see you're right that the table in section 22.1 has it the other way around. (Digs ... looks like the contradiction is still there in the published spec.) I wonder if there are any other inconsistencies? Probably we should assume that the table in section 22.1 is authoritative.

regards, tom lane
Re: [HACKERS] is_superuser parameter creates inconsistencies
Peter Eisentraut [EMAIL PROTECTED] writes:
> Presumably, the is_superuser parameter was intended to make the updating of psql's prompt more accurate when SET SESSION AUTHORIZATION is used. However, if the prompt is customized to include the user name (%n), then the prompt changes to reflect the real superuser status, but does not change the user name. I guess we need to pass session_user as well.

Seems reasonable. IIRC the only addition needed to the server code is to set a flag in the variable's GUC entry.

regards, tom lane
Re: [HACKERS] Passing server_encoding to the client is not future-proof
Peter Eisentraut [EMAIL PROTECTED] writes:
> Has anyone thought of what will happen to the server_encoding parameter when the character set/encoding will be settable for individual columns and the concept of a global server encoding will go away? What will happen to clients that make use of this parameter?

I would imagine that we'd keep the concept of a per-database encoding, but it would become a default value for per-column encoding choices rather than the One True Value. Clients could probably still make use of server_encoding, though I'm unclear on what they'd use it for now, let alone then. ISTM client_encoding is the only setting the client need deal with directly.

regards, tom lane
Re: [HACKERS] SQLSTATEs for warnings
Peter Eisentraut [EMAIL PROTECTED] writes:
> In other words, an info, notice, or warning must have class 00, 01, or 02.

I suspect though that the spec is assuming that the SQLSTATE code is the *only* way for the application to determine whether the message is success, warning, or error. Since we have other signaling mechanisms (the severity field, or even more basically the Error/Notice message type distinction), I'm not convinced we need to be entirely anal about this division. AFAICS the alternative to misusing error-class SQLSTATEs as warnings is to invent implementation-specific warning codes. Is it really worth having two codes for what amounts to the same condition?

> However, I noticed several warnings that use codes that are intended for errors, for example in portalcmds.c and xact.c.

Several of them are things that probably *ought* to be errors, as you noted in a later message. I don't recall exactly what other sins I committed in this area (the whole effort is already becoming a blur ;-)). We have time to rejigger around the edges though --- I have no objection to reassigning codes during beta. Once we release it will get harder.

A possibly related sin in the same category is that I think the class-40 SQLSTATEs are intended to imply that the server has forcibly aborted your whole transaction --- not just the current statement, which is what I think the spec envisions for all the other error classes. We do not currently make that distinction, but sooner or later we will. Perhaps we should avoid the class-40 codes and make our own implementation-dependent codes for deadlock detection and so forth.

regards, tom lane
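The rule under discussion — completion conditions live in classes 00, 01, and 02, everything else is an exception condition — is easy to express as a check. This helper is illustrative, not existing code:

```c
#include <string.h>

/* Per SQL99, completion conditions (successful completion, warning,
 * no data) must carry class 00, 01 or 02; any other class denotes an
 * exception condition.  Illustrative helper, not existing code. */
static int
sqlstate_is_completion(const char *sqlstate)
{
    return strncmp(sqlstate, "00", 2) == 0 ||
           strncmp(sqlstate, "01", 2) == 0 ||
           strncmp(sqlstate, "02", 2) == 0;
}
```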
[HACKERS] granularity of locks in postgresql
The following lines are from the README file in the src/backend/storage/lmgr directory of PostgreSQL:

    If we are setting a table level lock both the blockId and tupleId (in
    an item pointer this is called the position) are set to invalid, if it
    is a page level lock the blockId is valid, while the tupleId is still
    invalid. Finally if this is a tuple level lock (we currently never do
    this) then both the blockId and tupleId are set to valid
    specifications.

I don't see any field called tupleId in LockTag... does it have another name? Also, given the "(we currently never do this)", does this mean that we currently can't acquire tuple-level (row-level) locks in PostgreSQL?

Thank you,
Jenny
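The README's encoding of lock granularity can be illustrated with a simplified sketch. The field names and types below are invented for clarity and do NOT match the real LockTag struct:

```c
#include <stdint.h>

/* Simplified illustration of the README's scheme.  The names and
 * types here are invented and do NOT match the real LockTag struct. */
#define TOY_INVALID_BLOCK   ((uint32_t) 0xFFFFFFFF)
#define TOY_INVALID_OFFSET  ((uint16_t) 0)

typedef struct
{
    uint32_t relId;
    uint32_t blockId;   /* TOY_INVALID_BLOCK for a table-level lock */
    uint16_t offset;    /* tuple position; invalid for page/table locks */
} ToyLockTag;

typedef enum { LOCK_TABLE, LOCK_PAGE, LOCK_TUPLE } ToyLockGranularity;

static ToyLockGranularity
toy_lock_granularity(const ToyLockTag *tag)
{
    if (tag->blockId == TOY_INVALID_BLOCK)
        return LOCK_TABLE;          /* both parts invalid */
    if (tag->offset == TOY_INVALID_OFFSET)
        return LOCK_PAGE;           /* block valid, tuple part invalid */
    return LOCK_TUPLE;              /* both valid */
}
```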
Re: [HACKERS] Regression test failure date.
I am now seeing this error in 2003-03-03:

    CREATE TABLE INSERT_CHILD (cx INT default 42, cy INT CHECK (cy > x)) INHERITS (INSERT_TBL);
    + ERROR:  RelationClearRelation: relation 130996 deleted while still in use

---------------------------------------------------------------------------

Bruce Momjian wrote:
> I am testing this today. I found 2003-03-03 to not generate a failure in 20 tests, so I am moving forward to April/May.
-- Bruce Momjian | http://candle.pha.pa.us
Re: [HACKERS] Feature request -- Log Database Name
On Mon, 28 Jul 2003, Robert Treat wrote:
>> Also I was thinking that we could hide a log table in a special schema, like this:
>>     CREATE TABLE log (when timestamp, user text, table name, query text, error text);
>> So that if this table exists in a database, all error reporting would be logged in this table. This sounds complicated but IMHO would be invaluable for debugging help.
> I think better would be a GUC log_to_table which wrote all standard out/err to a pg_log table. Of course, I doubt you could make this foolproof (how to log startup errors in this table?) but it could be a start.

That would be great (although of course not foolproof); maybe we could do both, just to be on the safe side. This pg_log table should be local to each database, of course...

-- Olivier PRENANT, 31190 AUTERIVE, FRANCE
Re: [HACKERS] Feature request -- Log Database Name
On Thu, 2003-07-24 at 11:23, [EMAIL PROTECTED] wrote: Also I was thinking that we could hide a log table in a special schema, like this: CREATE TABLE log (when timestamp, user text, table name, query text, error text); so that if this table exists in a database, all error reporting would be logged in this table. This sounds complicated, but IMHO it would be invaluable as a debugging aid. I think better would be a GUC log_to_table which wrote all standard out/err to a pg_log table. Of course, I doubt you could make this foolproof (how to log startup errors in this table?), but it could be a start. Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
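The schema sketched in this thread won't parse as written: "when", "user", and "table" are all reserved words, and "table name" lacks a type. A minimal sketch of what the proposal seems to intend (column names renamed to dodge the reserved words; all names here are illustrative, not from any actual patch):

```sql
-- Hypothetical per-database error-log table: if it existed, errors would
-- be recorded here per the proposal above (not an actual feature).
CREATE TABLE log (
    logged_at  timestamp,  -- "when" in the original sketch
    username   text,       -- "user" in the original sketch
    table_name text,       -- "table" in the original sketch
    query      text,
    error      text
);
```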
Re: [HACKERS] Is Patch Ok for deferred trigger disk queue?
Bruce Momjian wrote: I assume this will not be completed for 7.4. I will keep the emails for 7.5. One idea I had was to use the existing sort_mem parameter to control when to force the deferred trigger queue to disk --- it doesn't have anything to do with sorting, but it has the same purpose: to force things to disk when we consume enough RAM. --- Bruce Momjian wrote: Stuart, where are we on this patch? Seems we need GUC additions, though I can do that for you, and changes to write the head to disk. Was that completed? --- Stuart wrote: Tom Lane wrote: Stephan Szabo [EMAIL PROTECTED] writes: As a side question, it looks to me that the code stores the first trigger records in memory and then after some point starts storing all new records on disk. Is this correct? I'd wonder if that's really what you want in general, since I'd think that the earliest ones are the ones you're least likely to need until end of transaction (or SET CONSTRAINTS in the FK case), whereas the most recent ones are possibly going to be immediate triggers which you're going to need as soon as the statement is done. Good point. It would be better to push out stuff from the head of the queue, hoping that stuff near the end might never need to be written at all. regards, tom lane Hmmm, I see your point. I will change the patch to write the head to disk and resubmit it when the development branch splits off. Also, I've noticed that there is an fd.h with file routines which I should be using rather than the stdio routines. I will also clean up those errors. Thank you, - Stuart Sorry for the tardiness in replying; I've been away for the past week or so. I didn't intend this for 7.4, partly because I knew I'd be away and partly because I had seen there was a problem I hadn't realised with the previous patch, and didn't want to submit something that might not be stable just before beta.
Currently it compiles, but there are some, er, issues - it shouldn't take too long to fix, but it might not be until Wednesday as I've got a bit of a backlog to get through. I could use sort_mem, but if that is to be the case maybe it should be renamed to something like max_local_mem, with a way to register that you are using it. Maybe the memory allocations could automatically add to it and subtract from it as memory is assigned. Alternatively, just make a global to record the memory currently used by interested parties (currently the triggers and sorting, I'd guess). The only trouble with this that I can see is that the first one to claim the memory may claim it all, leaving nothing for the other. I'll carry on using the dedicated GUC variable for the moment, as I can't really see the correct way to solve this cleanly. regards, - Stuart
Re: [HACKERS] best way to determine start of new statement within
elein wrote: Do you want me to try to write this up into more formal definitions? I think it would be a useful starting point for future discussions, but it goes way beyond what I was looking for at the moment. Joe
[HACKERS] now about psql and one function
In psql, file command.c line 1473, there is a query to check superuser state, and this query is wrapped in a BEGIN/END transaction. When the user doesn't have permission to read pg_user, the query fails but the transaction never ends, which causes some confusion because the user has to issue END; or ROLLBACK; themselves. I just removed the BEGIN and END from this query. OK, and about this function: CREATE OR REPLACE FUNCTION f() RETURNS INT AS ' DECLARE cnt INT; BEGIN CREATE TEMP TABLE ble (i INT); SELECT INTO cnt count(*) FROM ble; DROP TABLE ble; RETURN cnt; END; ' LANGUAGE 'plpgsql'; The first time, everything is OK, but on subsequent calls there is an error like this: pg_class_aclcheck: relation nr not found. (This function is only an example, and I know it doesn't make sense.) I think it's caused by the SysCache; how can I reset it, turn it off, or reload it?
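For what it's worth, the usual workaround for the temp-table problem described above is to route the statements through EXECUTE, so PL/pgSQL re-plans them on each call instead of caching a plan that still references the dropped table's OID. A sketch (the function name is made up; it uses the doubled-quote string style of this era, since dollar quoting did not exist yet):

```sql
CREATE OR REPLACE FUNCTION temp_count() RETURNS INT AS '
DECLARE
    r   record;
    cnt INT;
BEGIN
    -- EXECUTE forces these statements to be planned afresh each call,
    -- so the cached-OID failure on the second call goes away.
    EXECUTE ''CREATE TEMP TABLE ble (i INT)'';
    -- SELECT INTO cannot be combined with EXECUTE here, so fetch the
    -- count through a FOR-IN-EXECUTE loop instead.
    FOR r IN EXECUTE ''SELECT count(*) AS c FROM ble'' LOOP
        cnt := r.c;
    END LOOP;
    EXECUTE ''DROP TABLE ble'';
    RETURN cnt;
END;
' LANGUAGE 'plpgsql';
```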
Re: [HACKERS] Regression test failure date.
Bruce Momjian [EMAIL PROTECTED] writes: I am now seeing this error in 2003-03-03. CREATE TABLE INSERT_CHILD (cx INT default 42, cy INT CHECK (cy > x)) INHERITS (INSERT_TBL); + ERROR: RelationClearRelation: relation 130996 deleted while still in use I have a theory about the failures that occur while creating tables. If a relcache flush were to occur due to SI buffer overrun between creation of the new rel's relcache entry by RelationBuildLocalRelation and completion of the command, then you'd see an error exactly like the above, because the relcache would try to rebuild the cache entry by reading the pg_class and pg_attribute rows for the relation. Which would possibly not exist yet, and even if they did exist they'd be invisible under SnapshotNow rules. However, this bug is of long standing, and it doesn't seem all that probable as an explanation for your difficulties. It would be worth running the tests with log_min_messages set to DEBUG4 (along with the verbosity setting, please) and seeing if you observe "cache state reset" log entries just before the failures. In any case this would not explain failures during DROP TABLE, so there's another issue to look for. regards, tom lane
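The diagnostic run Tom asks for corresponds roughly to these postgresql.conf settings (a sketch; DEBUG4 is extremely verbose, so expect very large logs):

```
log_min_messages    = debug4
log_error_verbosity = verbose
```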
Re: [HACKERS] Regression test failure date.
Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: I am now seeing this error in 2003-03-03. CREATE TABLE INSERT_CHILD (cx INT default 42, cy INT CHECK (cy > x)) INHERITS (INSERT_TBL); + ERROR: RelationClearRelation: relation 130996 deleted while still in use Define "now seeing". Did you change something? Did you just run more test cycles and it happened one time? Did it suddenly start to happen a lot? Ran more cycles, that's all. I had reported 2003-03-03 was fine, but I only ran a few tests that previous time. I am looking at the mid-February date range now.
Re: [HACKERS] Regression test failure date.
Bruce Momjian [EMAIL PROTECTED] writes: I am now seeing this error in 2003-03-03. CREATE TABLE INSERT_CHILD (cx INT default 42, cy INT CHECK (cy > x)) INHERITS (INSERT_TBL); + ERROR: RelationClearRelation: relation 130996 deleted while still in use Define "now seeing". Did you change something? Did you just run more test cycles and it happened one time? Did it suddenly start to happen a lot? regards, tom lane
Re: [HACKERS] Doubt w.r.t vacuum
Robert Treat [EMAIL PROTECTED] writes: This is overoptimistic :-(. VACUUM FULL cannot necessarily compact the table completely, and so it will record free space in FSM (if there is any worth recording). An example situation is that page 1000 may contain a very large tuple, which will not fit on any earlier page. Isn't it possible that the reshuffling of tuples before page 1000 could open up enough space to move the overly large tuple? Not in the same vacuum pass. Reshuffling opens *zero* space until you commit the shuffling transaction, because you can't destroy the old copies until you commit the moved ones. You could imagine making multiple passes, but at that point it's almost certainly faster to forget the VACUUM FULL approach entirely, and do something more like CLUSTER: copy all the live tuples into a new file. regards, tom lane
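The two strategies under discussion can be compared directly. VACUUM FULL compacts in place, constrained as Tom describes; CLUSTER (which in this era takes the index name first) rewrites the live tuples into a new file, which is the approach he suggests would beat a multi-pass shuffle. The table and index names below are hypothetical:

```sql
VACUUM FULL bigtable;                -- in-place compaction, one pass per run
CLUSTER bigtable_pkey ON bigtable;   -- full rewrite: copies live tuples to a fresh file
```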
Re: [HACKERS] Doubt w.r.t vacuum
On Mon, 2003-07-28 at 11:04, Tom Lane wrote: Alvaro Herrera [EMAIL PROTECTED] writes: Vacuum full locks the entire table and moves tuples between pages. It leaves all pages full of tuples (except, obviously, the last one), so it doesn't need to record them in the FSM. This is overoptimistic :-(. VACUUM FULL cannot necessarily compact the table completely, and so it will record free space in FSM (if there is any worth recording). An example situation is that page 1000 may contain a very large tuple, which will not fit on any earlier page. Once VACUUM FULL discovers this fact, it will not bother shuffling tuples on earlier pages, since it's not going to be able to truncate the table to less than 1000 pages. There may nonetheless be enough space available in earlier pages to store thousands of smaller-sized tuples. Isn't it possible that the reshuffling of tuples before page 1000 could open up enough space to move the overly large tuple? Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
Re: [HACKERS] arrays
On Sun, 27 Jul 2003, Tom Lane wrote: ivan [EMAIL PROTECTED] writes: the declaration is setof record array_to_setof(anyarray); I tried creating it like array_to_setof(anyarray), but I had a problem with SELECT. I also tried writing setof_to_array as an aggregate, but I have a problem with the return type, because an aggregate can't have 'anyarray' as its result ;( I think you want to declare it like this: create function array_to_setof(anyarray) returns setof anyelement as ... This is how I declared it: create function array_to_setof(anyarray) returns setof record as '...', and it works, because the function creates a new record type (the element type of the anyarray) and then returns each element. On the way back from tuples to an array I have a problem, because I need to create an aggregate function. I think functions like this, and the array functions from contrib (array, by Massimo Dal Zotto), should be standard in pg_catalog, along with functions for existence checking. I wrote a function like: create function iv_exists(text, char) returns boolean; the first argument is a relation name, and the second is the relation's type ('r' = table, 'c' = composite, 'f' = function, etc., like pg_class.relkind; I reserved 'M' for module and 'K' for class). My approach is very simple, but I did not find any function like this. :/ Joe Conway may already have written something like this --- look in his contrib modules. regards, tom lane
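The function being discussed here is essentially what later became the built-in unnest(). For reference, a hand-rolled sketch in modern syntax (polymorphic PL/pgSQL and dollar quoting both postdate this thread, so this would not run as-is on a 7.x server; only one-dimensional arrays are handled):

```sql
CREATE OR REPLACE FUNCTION array_to_setof(anyarray)
RETURNS SETOF anyelement AS $$
DECLARE
    i int;
BEGIN
    -- Walk the first (and only supported) dimension, emitting one row
    -- per element.
    FOR i IN array_lower($1, 1) .. array_upper($1, 1) LOOP
        RETURN NEXT $1[i];
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM array_to_setof(ARRAY[1,2,3]);  -- three rows: 1, 2, 3
```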
[HACKERS] an aggregate array function
Dear hackers, Do you think there would be any use for an aggregate which returns an array of the aggregated (usually simple) type? Has this already been done by anyone? I looked at the source and noticed that for each inserted item, the array utility functions perform a deep copy of the entire array (plus a reallocation). Normally this is no big deal, but if executed in the query stage it could be kind of slow. I also noticed that null values inside arrays are an item on the todo list. Is anybody currently working on this? Merlin
Re: [HACKERS] Regression test failure date.
Tom, is the attached regression diff considered normal? This was generated by current CVS. I am trying to determine what is a normal error and what is something to be concerned about. Also, I am up to Feb 25 with no errors, but am still testing. --- Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: I am now seeing this error in 2003-03-03. CREATE TABLE INSERT_CHILD (cx INT default 42, cy INT CHECK (cy > x)) INHERITS (INSERT_TBL); + ERROR: RelationClearRelation: relation 130996 deleted while still in use I have a theory about the failures that occur while creating tables. If a relcache flush were to occur due to SI buffer overrun between creation of the new rel's relcache entry by RelationBuildLocalRelation and completion of the command, then you'd see an error exactly like the above, because the relcache would try to rebuild the cache entry by reading the pg_class and pg_attribute rows for the relation. Which would possibly not exist yet, and even if they did exist they'd be invisible under SnapshotNow rules. However, this bug is of long standing, and it doesn't seem all that probable as an explanation for your difficulties. It would be worth running the tests with log_min_messages set to DEBUG4 (along with the verbosity setting, please) and seeing if you observe "cache state reset" log entries just before the failures. In any case this would not explain failures during DROP TABLE, so there's another issue to look for.
regards, tom lane *** ./expected/constraints.out Mon Jul 28 13:50:13 2003 --- ./results/constraints.out Mon Jul 28 18:32:55 2003 *** *** 80,102 CREATE TABLE CHECK2_TBL (x int, y text, z int, CONSTRAINT SEQUENCE_CON CHECK (x > 3 and y <> 'check failed' and z < 8)); INSERT INTO CHECK2_TBL VALUES (4, 'check ok', -2); INSERT INTO CHECK2_TBL VALUES (1, 'x check failed', -2); ! ERROR: new row for relation "check2_tbl" violates CHECK constraint "sequence_con" INSERT INTO CHECK2_TBL VALUES (5, 'z check failed', 10); ! ERROR: new row for relation "check2_tbl" violates CHECK constraint "sequence_con" INSERT INTO CHECK2_TBL VALUES (0, 'check failed', -2); ! ERROR: new row for relation "check2_tbl" violates CHECK constraint "sequence_con" INSERT INTO CHECK2_TBL VALUES (6, 'check failed', 11); ! ERROR: new row for relation "check2_tbl" violates CHECK constraint "sequence_con" INSERT INTO CHECK2_TBL VALUES (7, 'check ok', 7); SELECT '' AS two, * from CHECK2_TBL; ! two | x | y | z ! -+---+--+ ! | 4 | check ok | -2 ! | 7 | check ok | 7 ! (2 rows) ! -- -- Check constraints on INSERT -- --- 80,100 CREATE TABLE CHECK2_TBL (x int, y text, z int, CONSTRAINT SEQUENCE_CON CHECK (x > 3 and y <> 'check failed' and z < 8)); + ERROR: relation 126581 deleted while still in use INSERT INTO CHECK2_TBL VALUES (4, 'check ok', -2); + ERROR: relation "check2_tbl" does not exist INSERT INTO CHECK2_TBL VALUES (1, 'x check failed', -2); ! ERROR: relation "check2_tbl" does not exist INSERT INTO CHECK2_TBL VALUES (5, 'z check failed', 10); ! ERROR: relation "check2_tbl" does not exist INSERT INTO CHECK2_TBL VALUES (0, 'check failed', -2); ! ERROR: relation "check2_tbl" does not exist INSERT INTO CHECK2_TBL VALUES (6, 'check failed', 11); ! 
ERROR: relation "check2_tbl" does not exist INSERT INTO CHECK2_TBL VALUES (7, 'check ok', 7); + ERROR: relation "check2_tbl" does not exist SELECT '' AS two, * from CHECK2_TBL; ! ERROR: relation "check2_tbl" does not exist -- -- Check constraints on INSERT -- == *** ./expected/misc.out Mon Jul 28 13:50:13 2003 --- ./results/misc.out Mon Jul 28 18:33:04 2003 *** *** 580,586 c c_star char_tbl - check2_tbl check_seq check_tbl circle_tbl --- 580,585 *** *** 660,666 toyemp varchar_tbl xacttest ! (96 rows) --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name; SELECT hobbies_by_name('basketball'); --- 659,665 toyemp varchar_tbl xacttest ! (95 rows) --SELECT name(equipment(hobby_construct(text 'skywalking', text 'mer'))) AS equip_name; SELECT hobbies_by_name('basketball'); ==
Re: [HACKERS] Regression test failure date.
Bruce Momjian [EMAIL PROTECTED] writes: Tom, is the attached regression diff considered normal? This was generated by current CVS. Well, this *looks* like it could be an example of the SI-overrun-during-create behavior I was talking about. But if you weren't running a verbose log to show whether a cache flush occurred just before the error, there's no way to know for sure. Right at the moment I am more interested in the other cases, though (cache lookup failure during DROP), since I have no plausible explanation for them. regards, tom lane
Re: [HACKERS] Regression test failure date.
Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: Tom, is the attached regression diff considered normal? This was generated by current CVS. Well, this *looks* like it could be an example of the SI-overrun-during-create behavior I was talking about. But if you weren't running a verbose log to show whether a cache flush occurred just before the error, there's no way to know for sure. OK. Right at the moment I am more interested in the other cases though (cache lookup failure during DROP) since I have no plausible explanation for them. Thanks. That's what I need to know.
Re: [HACKERS] an aggregate array function
Merlin Moncure wrote: Dear hackers, Do you think there would be any use for an aggregate which returns an array of the aggregated (usually simple) type? Has this already been done by anyone? I looked at the source and noticed that for each inserted item, the array utility functions perform a deep copy of the entire array (plus a reallocation). Normally, this is no big deal, but if executed in the query stage, it could be kind of slow. What exactly have you looked at? In current cvs there is array_append and array_cat. There *was* array_accum, but that was yanked due to an objection that it was redundant with the other two. There is a contrib (intagg) that avoids the deep copy by passing pointers as integers, but I don't think that would be accepted as a builtin solution. I've considered maybe using a hash table to register valid pointers, but have not thought too hard about it yet. In any case it won't happen for 7.4. BTW, I tried array_accum() (which is not really much different than array_append()) with groups of about 10,000 elements and feeding the array into a plr final function for a mean calculation. It was for sure slow compared to a native AVG() aggregate, but it wasn't that bad either. I don't remember the specifics, but it would be easy enough to try it out for yourself. I also noticed that null values inside is an item on the todo list. Is anybody currently working on this? No one is currently working on it that I'm aware of, but I was considering working on it for 7.5 Joe
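The array_accum aggregate Joe mentions can be reconstructed from array_append in a few lines; a sketch of the polymorphic-aggregate definition as it appeared in 7.4-era documentation (later releases ship the same idea built in as array_agg):

```sql
-- Aggregate that collects grouped values into an array.
CREATE AGGREGATE array_accum (
    basetype = anyelement,    -- input type: any scalar
    sfunc    = array_append,  -- transition: append each value
    stype    = anyarray,      -- state: the growing array
    initcond = '{}'           -- start from an empty array
);

-- e.g.: SELECT attrelid, array_accum(attname)
--       FROM pg_attribute GROUP BY attrelid;
```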
Re: [HACKERS] Regression test failure date.
I said: I have a theory about the failures that occur while creating tables. If a relcache flush were to occur due to SI buffer overrun between creation of the new rel's relcache entry by RelationBuildLocalRelation and completion of the command, then you'd see an error exactly like the above, because the relcache would try to rebuild the cache entry by reading the pg_class and pg_attribute rows for the relation. After further study, though, the above theory falls flat on its face: the relcache does *not* attempt to rebuild new relcache entries after an SI overrun (see the comments to RelationCacheInvalidate). So I'm back to wondering what the heck is causing any of these messages. I think we really need to see a stack trace from one of the failures. Could you try running CVS tip with an abort() call replacing the "relation %u deleted while still in use" elog? (It's line 1797 in src/backend/utils/cache/relcache.c in CVS tip.) Then when you get the failure, get a stack trace with gdb from the core dump. regards, tom lane
Re: [HACKERS] Regression test failure date.
OK, on it now! --- Tom Lane wrote: I said: I have a theory about the failures that occur while creating tables. If a relcache flush were to occur due to SI buffer overrun between creation of the new rel's relcache entry by RelationBuildLocalRelation and completion of the command, then you'd see an error exactly like the above, because the relcache would try to rebuild the cache entry by reading the pg_class and pg_attribute rows for the relation. After further study, though, the above theory falls flat on its face: the relcache does *not* attempt to rebuild new relcache entries after an SI overrun (see the comments to RelationCacheInvalidate). So I'm back to wondering what the heck is causing any of these messages. I think we really need to see a stack trace from one of the failures. Could you try running CVS tip with an abort() call replacing the "relation %u deleted while still in use" elog? (It's line 1797 in src/backend/utils/cache/relcache.c in CVS tip.) Then when you get the failure, get a stack trace with gdb from the core dump. regards, tom lane
Re: [HACKERS] Feature request -- Log Database Name
Robert Treat [EMAIL PROTECTED] writes: I think better would be a GUC log_to_table which wrote all standard out/err to a pg_log table. of course, I doubt you could make this foolproof (how to log startup errors in this table?) but it could be a start. How would a failed transaction make any entries in such a table? How would you handle maintenance operations on the table that require exclusive lock? (vacuum full, reindex, etc) It seems possible that you could make this work if you piped stderr to a buffering process that was itself a database client, and issued INSERTs to put the rows into the table, and could buffer pending data whenever someone else had the table locked (eg for vacuum). I'd not care to try to get backends to do it locally. regards, tom lane
[HACKERS] RPMs for 7.3.4, and a change.
Good evening. RPMs for PostgreSQL 7.3.4, built on three architectures, are in the midst of uploading to ftp.postgresql.org, in /pub/binary/v7.3.4/RPMS. As usual, inside that directory is the directory SRPMS, which contains the source RPM, as well as the three binary RPM directories I am uploading. One minor thing: a source RPM suitable for rebuilding on Red Hat 7.3 is available in the aurora-1.0 subdirectory. Aurora 1.0 is basically Red Hat 7.3 for SPARC; there are also SPARC binaries there. Other than the version change, this RPMset includes the correct JDBC jars. There are a couple of fixes that have been e-mailed to me that are not in this update; I will address those as soon as I can. In other news, I have changed jobs. Previously, I worked full-time as a broadcast engineer/IT person for WGCR Radio. I still work part-time for them, amongst other radio stations, but my full-time position is now as Director of Information Technology for Pisgah Astronomical Research Institute (PARI), a radio/optical astronomical observatory located in Western North Carolina. You can find out more about PARI at our website, www.pari.edu. PARI is already using PostgreSQL for several applications, and soon will be looking at PostgreSQL for a large data warehousing application. And, in this case, I do mean large. I will be indexing and storing over three million astronomical photographic plates (if plans come together!), where each plate will scan in at roughly 650-750MB in size (uncompressed, 8-level grayscale scanning). Mass storage will be critical for this priceless data store, and PostgreSQL may very well fit the bill. I'm still in the planning phases, and we are still trying to secure funding for this project. But I am relatively confident that PostgreSQL will rise to the occasion. Some of the plates in question are over 100 years old. New challenges, new opportunities. But still the same PostgreSQL.
-- Lamar Owen Director of Information Technology Pisgah Astronomical Research Institute