Re: [HACKERS] [GENERAL] Corrupt database? 8.1/FreeBSD6.0
I wrote:
> ... but I suddenly fear that we've missed a fundamental point about
> pg_clog truncation.  And WAL wraparound for that matter.  To wit, a
> sufficiently long-lived temp table could contain old XIDs, and there's
> no way for anyone except the owning backend to clean them out, or even
> guarantee that they're marked committed.

After further thought I believe this is OK as of 8.2, because a temp
table's relfrozenxid is tracked independently of any other's.  (This
problem puts a stake through the heart of the recently-discussed idea
that a temp table might be able to get along without a globally visible
pg_class entry, however.)

But it seems that we need a band-aid for 8.1 and earlier.  The simplest
fix I can think of is for vacuum not to attempt to advance the
datvacuumxid/datfrozenxid fields if it skipped over any temp tables of
other backends.  That's a bit nasty, since in a database making heavy
use of temp tables, you might do a whole lot of vacuums without ever
meeting that condition.  Anyone have a better idea?

			regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match
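[Editorial note: the condition Tom describes is visible from SQL, because even temp tables have globally visible pg_class entries under the per-backend pg_temp_N schemas.  A sketch of a query to spot them on an 8.1-era database (the schema-name pattern is the stock temp-namespace convention, and is an assumption here):

```sql
-- List temp tables belonging to any backend (8.1-era catalogs).
-- A row here owned by another backend is the case that would block
-- the proposed datvacuumxid/datfrozenxid advancement.
SELECT n.nspname, c.relname
FROM pg_class c
     JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname LIKE 'pg_temp_%'   -- _ is a LIKE wildcard, harmless here
  AND c.relkind = 'r';
```

This only shows that such tables exist; whether they actually hold old XIDs is not determinable from outside the owning backend, which is the point of the thread.]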
Re: [HACKERS] [PERFORM] Performance of Parser?
On 13-Jan-07, at 7:24 PM, Tom Lane wrote:
> Jignesh Shah <[EMAIL PROTECTED]> writes:
>> The appserver is basically using bunch of prepared statements that the
>> server should be executing directly without doing the parsing again.
>
> Better have another look at that theory, because you're clearly
> spending a lot of time in parsing (operator resolution to be
> specific).  I think your client code is failing to re-use prepared
> statements the way you think it is.
>
> 			regards, tom lane

This is exactly what is happening.  The driver needs to cache
statements for this to work.

Dave
Re: [HACKERS] Performance of Parser?
Jignesh Shah wrote:
> The appserver is basically using bunch of prepared statements that the
> server should be executing directly without doing the parsing again.

The first thing you need to do is turn on statement logging, if you
haven't already, to verify this statement.

cheers

andrew
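[Editorial note: a minimal sketch of the postgresql.conf setting Andrew means, assuming 8.2-era parameter names:

```
# Log every statement the server executes.  If the driver really
# re-uses server-side prepared statements, the log should show one
# parse of each named statement and many executions of that name,
# rather than a fresh full statement per query.
log_statement = 'all'
```

Setting this requires a reload (pg_ctl reload or SIGHUP); it can also be set per-session with SET for a quick check.]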
Re: [HACKERS] [PERFORM] Performance of Parser?
Jignesh Shah <[EMAIL PROTECTED]> writes:
> The appserver is basically using bunch of prepared statements that the
> server should be executing directly without doing the parsing again.

Better have another look at that theory, because you're clearly spending
a lot of time in parsing (operator resolution to be specific).  I think
your client code is failing to re-use prepared statements the way you
think it is.

			regards, tom lane
Re: [HACKERS] Memory context in exception handler
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> But this fails because CopyErrorData() complains by way of assertion
> that we're still in ErrorContext.  A nearby comment suggests to switch
> away to another context to preserve the data across FlushErrorState(),
> but that doesn't seem necessary in this situation.  Are there other
> reasons why this rule is so rigorously enforced?

I think it's a good error check because if you are trying to make a copy
of the current error data, doing so within the ErrorContext seems highly
unlikely to be safe.

As near as I can tell, you're using CopyErrorData not because you need
an actual copy but just because elog.c doesn't export any other API to
let you see the current sqlerrcode.  Perhaps adding a function to return
the top stack entry's sqlerrcode would be a better API change?  (I'm a
bit uncomfortable with handing out direct access to the struct, but
getting a peek at sqlerrcode or other scalar values seems safe enough.)

			regards, tom lane
Re: [HACKERS] [GENERAL] Autovacuum Improvements
On Fri, Jan 12, 2007 at 07:33:05PM -0300, Alvaro Herrera wrote:
> Simon Riggs wrote:
>
> > Some feedback from initial testing is that 2 queues probably isn't
> > enough.  If you have tables with 100s of blocks and tables with
> > millions of blocks, the tables in the mid-range still lose out.  So
> > I'm thinking that a design with 3 queues based upon size ranges,
> > plus the idea that when a queue is empty it will scan for tables
> > slightly above/below its normal range.
>
> Yeah, eventually it occurred to me the fact that as soon as you have 2
> queues, you may as well want to have 3 or in fact any number.  Which
> in my proposal is very easily achieved.
>
> > Alvaro, have you completed your design?
>
> No, I haven't, and the part that's missing is precisely the queues
> stuff.  I think I've been delaying posting it for too long, and that
> is harmful because it makes other people waste time thinking on issues
> that I may already have resolved, and delays the bashing that yet
> others will surely inflict on my proposal, which is never a good thing
> ;-)  So maybe I'll put in a stub about the "queues" stuff and see how
> people like the whole thing.

Have you made any consideration of providing feedback on autovacuum to
users?  Right now we don't even know what tables were vacuumed when and
what was reaped.  This might actually be another topic.

---elein
[EMAIL PROTECTED]
[HACKERS] Performance of Parser?
Hello All,

I am using the latest 8.2 source that I compiled with Sun Studio 11 and
tested it on Solaris 10 11/06 against an app server.  I find that the
CPU utilization was higher than I expected and started digging through
it.  Apparently the top CPU usage comes from the following stack trace,
which is roughly about 10-15% of the total that postgresql uses.
Anyway, a real developer might make more sense of this than I can:

  libc_psr.so.1`memcpy+0x524
  postgres`SearchCatCache+0x24
  postgres`getBaseType+0x2c
  postgres`find_coercion_pathway+0x14
  postgres`can_coerce_type+0x58
  postgres`func_match_argtypes+0x24
  postgres`oper_select_candidate+0x14
  postgres`make_op+0x1a8
  postgres`transformAExprAnd+0x1c
  postgres`transformWhereClause+0x18
  postgres`transformUpdateStmt+0x94
  postgres`transformStmt+0x1dc
  postgres`do_parse_analyze+0x18
  postgres`parse_analyze_varparams+0x30
  postgres`exec_parse_message+0x2fc
  postgres`PostgresMain+0x117c
  postgres`BackendRun+0x248
  postgres`BackendStartup+0xf4
  postgres`ServerLoop+0x4c8
  postgres`PostmasterMain+0xca0

  FUNCTION                          COUNT   PCNT
  postgres`can_coerce_type              1   0.1%
  postgres`find_coercion_pathway       11   0.9%
  postgres`SearchCatCache              43   3.4%
  libc_psr.so.1`memcpy                136  10.6%

The appserver is basically using a bunch of prepared statements that
the server should be executing directly without doing the parsing
again.  Since it is the parser module that invokes the catalog search,
does anybody know how to improve the can_coerce_type function in order
to reduce the similar comparisons again and again for the same type of
statements?

I also wanted to check whether postgresql stores prepared statements at
the server level, or does it parse each incoming prepared statement
again?  Any insight will help here in understanding what it is
attempting to do and what the possible workarounds can be.

Regards,
Jignesh
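[Editorial note: to the last question -- the server does keep named prepared statements, but per session, and only if the client actually parses once and then executes by name.  A sketch at the SQL level (the table and statement names are made up for illustration):

```sql
-- Parsed, analyzed, and planned once for this session...
PREPARE get_account (int) AS
    SELECT * FROM accounts WHERE account_id = $1;

-- ...then each execution skips parse analysis (and the catalog
-- lookups in the stack trace above) entirely.
EXECUTE get_account(42);
EXECUTE get_account(43);

-- New in 8.2: inspect what this session really has prepared.
SELECT name, statement FROM pg_prepared_statements;
```

If pg_prepared_statements stays empty while the app runs, the driver is sending plain queries each time rather than re-using server-side prepared statements.]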
[HACKERS] Memory context in exception handler
I'm trying to use the PG_TRY/PG_CATCH exception handling:

bool
xml_is_document(xmltype *arg)
{
	bool		result;
	xmlDocPtr	doc = NULL;

	PG_TRY();
	{
		doc = xml_parse((text *) arg, true, true);
		result = true;
	}
	PG_CATCH();
	{
		ErrorData  *errdata = CopyErrorData();

		if (errdata->sqlerrcode == ERRCODE_INVALID_XML_DOCUMENT)
		{
			FlushErrorState();
			result = false;
		}
		else
			PG_RE_THROW();
	}
	PG_END_TRY();

	if (doc)
		xmlFreeDoc(doc);

	return result;
}

But this fails because CopyErrorData() complains by way of assertion
that we're still in ErrorContext.  A nearby comment suggests switching
away to another context to preserve the data across FlushErrorState(),
but that doesn't seem necessary in this situation.  Are there other
reasons why this rule is so rigorously enforced?

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/
Re: [HACKERS] [GENERAL] Corrupt database? 8.1/FreeBSD6.0
Tom Lane <[EMAIL PROTECTED]> wrote:
> Really?  Wow, *that's* an interesting thought.  Is it likely that that
> temp table could contain many-hour-old data?

Certainly...our connection pool used by jboss can have connections to
postgres persisting for multiple days.  (We're still looking for a way
to tell it to recycle these occasionally.)

As each 'user' of our web based app performs some action, they acquire
one of the connection pool connections and set their user_id in the
temporary table used by that connection (we use that for our audit
triggers).  Once they are 'done' with the connection, the connection is
just released back to the pool but not actually closed...so the temp
table still contains the data from a previous iteration.
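[Editorial note: a sketch of the pattern being described -- the table name follows the audit_metadata table discussed in this thread, but the exact DDL is an assumption:

```sql
-- Created once per pooled connection; temp tables are session-
-- lifetime, so this survives pool checkouts because the connection
-- itself is never closed.
CREATE TEMPORARY TABLE audit_metadata (
    user_id integer NOT NULL DEFAULT -1
) ON COMMIT PRESERVE ROWS;

-- On checkout from the pool: record who is using the connection.
INSERT INTO audit_metadata (user_id) VALUES (1234);

-- On release back to the pool: clear it out.  The temp table itself,
-- and any old XIDs stamped on its rows, live on as long as the
-- connection does.
DELETE FROM audit_metadata WHERE user_id <> -1;
```

That DELETE is exactly the statement that trips over the truncated pg_clog segment in the error log below, since checking visibility of the stale rows requires the commit status of their multi-day-old XIDs.]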
Re: [HACKERS] [GENERAL] Corrupt database? 8.1/FreeBSD6.0
Jeff Amiel <[EMAIL PROTECTED]> writes:
> Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-1] jboss 92257
>   ERROR: could not access status of transaction 2107200825
> Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-2] jboss 92257
>   DETAIL: could not open file "pg_clog/07D9": No such file or directory
> Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-3] jboss 92257
>   CONTEXT: SQL statement "DELETE FROM audit_metadata WHERE user_id <> -1"

> pg_clog dir looks like this:
> -rw-------  1 pgsql  wheel  262144 Jan 13 05:41 07DA
> -rw-------  1 pgsql  wheel  262144 Jan 13 08:06 07DB
> -rw-------  1 pgsql  wheel   90112 Jan 13 08:51 07DC

> Now that table, audit_metadata, is a temporary table (when accessed by
> jboss as it is here).  There is a 'real' table with the same name, but
> only used by batch processes that connect to postgres.

Really?  Wow, *that's* an interesting thought.  Is it likely that that
temp table could contain many-hour-old data?

This seems unrelated to your issue with autovacuum (which should never
touch a temp table, and certainly isn't going to find one in template0)
... but I suddenly fear that we've missed a fundamental point about
pg_clog truncation.  And WAL wraparound for that matter.  To wit, a
sufficiently long-lived temp table could contain old XIDs, and there's
no way for anyone except the owning backend to clean them out, or even
guarantee that they're marked committed.

Thoughts?

			regards, tom lane
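[Editorial note: a sanity check on the numbers in the error message.  With the default 8 KB block size, each pg_clog segment covers 32 pages of 8192 bytes at 4 transaction status bits-pairs per byte, i.e. 1,048,576 XIDs per segment, so the segment holding the failing XID is:

```sql
-- 32 pages * 8192 bytes * 4 xacts/byte = 1048576 xids per segment
SELECT to_hex(2107200825 / 1048576) AS clog_segment;
-- 7d9, i.e. exactly the missing file pg_clog/07D9
```

The surviving segments 07DA-07DC cover newer XIDs; the stale temp-table rows reference an XID from the already-truncated segment before them.]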
Re: [HACKERS] TODO items for removal
On Fri, 2007-01-12 at 22:24, Simon Riggs wrote:
> This item was rejected by Tom, since a workaround exists
>
> Add estimated_count(*) to return an estimate of COUNT(*)
> This would use the planner ANALYZE statistics to return an estimated
> count.
> http://archives.postgresql.org/pgsql-hackers/2005-11/msg00943.php

ISTM Tom didn't reject the TODO item (or the basic feature idea it
describes), he just objected to the syntax -- which I can understand,
count(*) is not syntax we want to be copying.  AFAIK no one has
actually implemented the UDF he describes, though, so there should
still be a TODO item.

-Neil
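[Editorial note: for reference, the simplest form such a UDF could take.  This sketch just reads the planner's stored row estimate, so it is only as fresh as the last VACUUM or ANALYZE of the table; the EXPLAIN-parsing variant discussed in the linked thread is more general because it can estimate arbitrary queries:

```sql
-- Estimated row count from planner statistics; cheap but possibly
-- stale, unlike a real COUNT(*).
CREATE FUNCTION estimated_count(tbl regclass) RETURNS bigint AS $$
    SELECT reltuples::bigint FROM pg_class WHERE oid = $1
$$ LANGUAGE sql STABLE;

-- Usage: SELECT estimated_count('some_table');
```
]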
Re: [HACKERS] TODO items for removal
Thanks, removed.

---------------------------------------------------------------------------

Simon Riggs wrote:
> These two items are complete in 8.2, IIRC
>
> Allow constraint_exclusion to work for UNIONs like it does for
> inheritance, allow it to work for UPDATE and DELETE statements, and
> allow it to be used for all statements with little performance impact
>
> Fix memory leak from exceptions
> http://archives.postgresql.org/pgsql-performance/2006-06/msg00305.php
>
>
> This item was rejected by Tom, since a workaround exists
>
> Add estimated_count(*) to return an estimate of COUNT(*)
> This would use the planner ANALYZE statistics to return an estimated
> count.
> http://archives.postgresql.org/pgsql-hackers/2005-11/msg00943.php
>
> --
> Simon Riggs
> EnterpriseDB   http://www.enterprisedb.com

--
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] SPAR Simple PostgreSQL AddOn Replication System
Ok -- again subscribed to hackers till your questions are answered.

I think you missed some of the documentation.  On the first page, click
on documentation; on that (intro) page, click on 'how it works'...it
will tell you about primary keys.

Use this script to make a test table:

  CREATE TABLE test
  (
    id serial NOT NULL,
    "int" int4 DEFAULT 0,
    string varchar(100),
    text text,
    logic bool,
    blob bytea,
    "time" timestamp DEFAULT now(),
    CONSTRAINT test_pkey PRIMARY KEY (id)
  ) WITHOUT OIDS;

Remember that in replication, when the tables come together, primary
keys have to be unique...what SPAR does is intercept the serial key
generation to make sure this happens correctly; the docs will explain
how I did it.

On your other question, licensing...I haven't thought about it too
much.  I'm just a hobbyist and I need a web site for my estate agent
business, but a) the computer businesses that came to see me are not
very good, most are just punting someone else's expensive product,
b) I live in a third world country where a leased line and an IP
address cost more than a month's salary, and c) I love geeking.  So all
that's happening is that I'm building up my own technology so I can
host offshore, at reasonable costs.

In the next version of both Spar and Ese (the search engine) I will put
a license in...it will always be free for use and distribution.  I've
been thinking about open source, and I like the business model behind
it.  When I have my site going I'll do that; it will give my site admin
a derivative business, so I get the benefits of core ownership and code
improvements...otherwise I can't see the benefits of a business model
that just gives code away.  I make my software free because the
feedback is tremendous, makes for a better product, and someone will
always say, have you tried this or that -- a good gauge to see whether
the software deserves to live.

Maybe I should put this on my products: "Made by an estate agent just
passing through" ha ha.
In the case of Spar, the core code or "tricks" to make Postgres do it
are actually in the script that's on your dB anyway.

regards,
Johnny

----- Original Message -----

For the tables created by the script I sent you earlier (one int
primary-key column, one normal int column), the entries are red; I did
try the databases individually, still didn't turn green.  One thing
though, when I selected the databases individually, the background
turned white, which, according to comments on that page, means the
rep-server was not able to determine the status of this table.

psql.exe -p 6543 test -c "create table test1( a int )"
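[Editorial note: one common way to keep serial keys unique across replicated nodes -- shown purely as an illustration of the general trick, not as how SPAR actually implements its serial-key interception -- is to give each node an interleaved sequence.  test_id_seq is the sequence the serial column in the test table above creates implicitly:

```sql
-- Node 1 of (up to) 10 nodes generates ids 1, 11, 21, ...
ALTER SEQUENCE test_id_seq INCREMENT BY 10 RESTART WITH 1;

-- Node 2 generates ids 2, 12, 22, ...
ALTER SEQUENCE test_id_seq INCREMENT BY 10 RESTART WITH 2;
```

With disjoint id streams per node, rows can merge at replication time without primary-key collisions.]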