[HACKERS] Enums again
I've been going through the thread that Andrew Dunstan started with his enumkit. Maybe I missed it, but I didn't see any conclusion. If I want to design an Open Source system now that may be in beta in three to six months and I'd like to use enums, is this a good place to look? I guess I'm wondering about the kit going into PgFoundry, being accepted in the main branch or being dropped. And the timeframe for all that. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] SIGSEGV taken on 8.1 during dump/reload
When grilled further on (Mon, 7 Nov 2005 22:25:17 -0700), Robert Creager <[EMAIL PROTECTED]> confessed: Sorry, I'll just trickle out the information. tassiv=# \d catalog_ra_decl_index Index "public.catalog_ra_decl_index" Column | Type +--- loc| spherekey gist, for table "public.catalog" v->spl_right is address 0xbp - uninitialized? (gdb) print *v $2 = {spl_left = 0x83e1308, spl_nleft = 8, spl_ldatum = 138286880, spl_lattr = {3930298096, 3929693296, 1075344513, 3928483696, 3927878896, 50331648, 1076099872, 1076099872, 1076100640, 1076099944, 1076099872, 0, 0, 0, 1, 1076099872, 46088, 24, 138269392, 108, 8205, 1076099872, 1076097560, 1077018624, 1223005861, 2281761506, 1072462523, 8192, 1076979200, 1348122942, 3218058668, 3588489616}, spl_lattrsize = {1072628007, 1223130252, 0, -1073754968, 1223107331, -1073755008, 1196715552, 4033364, 1076979200, 8132, 32, 138269400, 58657919, 717016950, 1071875034, 1883413536, -1077677968, -817345387, 1072225709, 138175768, 138175768, 1223130252, 1223130252, -1073754936, 1223083881, 138269472, 1196715552, 138269472, 138269428, -1073754256, -1073754256, -1073754376}, spl_lisnull = "ÍD#\bàÌÿ¿\000\000\000\000(Íÿ¿\2004;\b ×ÿ¿\000\000\000\000\000\000\000", spl_leftvalid = 20 '\024', spl_right = 0xdb, spl_nright = 138286924, spl_rdatum = 11, spl_rattr = {3463747944, 3883728496, 0, 3882518896, 3881914096, 1, 3221212568, 138097456, 138251092, 3878890096, 0, 0, 1222988060, 1222974760, 1222960776, 138097456, 3, 1075321604, 0, 1073825468, 1076097560, 3221212576, 3221212540, 1075326465, 3221212576, 909216680, 825503793, 0, 138251202, 1076097560, 136751593, 3221212860}, spl_rattrsize = {-1073754484, 1075303286, -1073754720, 136751593, -1073754428, 138251176, 0, -1073754560, 136027536, 1196670896, 138269580, 32, 1196670856, 138251176, 138251194, 138251202, 226, 138251008, 0, 0, 0, 7904, 1024, 138269400, 138269700, 138269688, 908, -1073754600, 13655, 138175768, 138269700, 908}, spl_risnull = 
"\030e<\b\000¼SG\001\000\000\000XÎÿ¿¤Îÿ¿\001\000\000\000 Ñÿ¿\004Ô=\b", spl_rightvalid = 108 'l', spl_idgrp = 0x83dd78c, spl_ngrp = 0x83dd378, spl_grpflag = 0x4 } > When grilled further on (Mon, 7 Nov 2005 08:07:14 -0700), > Robert Creager <[EMAIL PROTECTED]> confessed: > > I'm currently attached to the dead (dying) process. spl_nright seems pretty > large... > > (gdb) print v->spl_nright > $3 = 138311580 > > Program received signal SIGSEGV, Segmentation fault. > 0x08082057 in gistUserPicksplit (r=0x48f3f1e4, entryvec=0x83e534c, > v=0xbfffcbc0, itup=0x83e3454, len=227, giststate=0xbfffd120) at gistutil.c:833 > 833 if (v->spl_right[v->spl_nright - 1] == InvalidOffsetNumber) > (gdb) bt > #0 0x08082057 in gistUserPicksplit (r=0x48f3f1e4, entryvec=0x83e534c, > v=0xbfffcbc0, itup=0x83e3454, len=227, giststate=0xbfffd120) at gistutil.c:833 > #1 0x0807f249 in gistSplit (r=0x48f3f1e4, buffer=8917, itup=0x83e3454, > len=0xbfffcea4, dist=0xbfffcea0, giststate=0xbfffd120) at gist.c:1083 > #2 0x0807c8ab in gistplacetopage (state=0xbfffcf10, giststate=0xbfffd120) at > gist.c:331 > #3 0x0807e2cd in gistmakedeal (state=0xbfffcf10, giststate=0xbfffd120) at > gist.c:878 > #4 0x0807c7e1 in gistdoinsert (r=0x48f3f1e4, itup=0x83e339c, > giststate=0xbfffd120) at gist.c:299 > #5 0x0807c5a6 in gistbuildCallback (index=0x48f3f1e4, htup=0x83c3de8, > values=0xbfffd020, isnull=0xbfffd000 "", tupleIsAlive=1 '\001', > state=0xbfffd120) > at gist.c:207 > #6 0x080cbb14 in IndexBuildHeapScan (heapRelation=0x48f3e1cc, > indexRelation=0x48f3f1e4, indexInfo=0x83c3b6c, callback=0x807c4f0 > , > callback_state=0xbfffd120) at index.c:1573 > #7 0x0807c3b5 in gistbuild (fcinfo=0xbfffe670) at gist.c:145 > #8 0x08234dfd in OidFunctionCall3 (functionId=782, arg1=1223942604, > arg2=1223946724, arg3=138165100) at fmgr.c:1460 > #9 0x080cb8d3 in index_build (heapRelation=0x48f3e1cc, > indexRelation=0x48f3f1e4, indexInfo=0x83c3b6c) at index.c:1353 > #10 0x080cacdc in index_create (heapRelationId=128249, > 
indexRelationName=0x83a0b94 "catalog_ra_decl_index", indexRelationId=128443, > indexInfo=0x83c3b6c, > accessMethodObjectId=783, tableSpaceId=0, classObjectId=0x83c9cfc, > primary=0 '\0', isconstraint=0 '\0', allow_system_table_mods=0 '\0', > skip_build=0 '\0') at index.c:757 > #11 0x08110671 in DefineIndex (heapRelation=0x30f, > indexRelationName=0x83a0b94 "catalog_ra_decl_index", indexRelationId=0, > accessMethodName=0x83a0c00 "gist", tableSpaceName=0x0, > attributeList=0x83a0c58, predicate=0x0, rangetable=0x0, unique=0 '\0', > primary=0 '\0', > isconstraint=0 '\0', is_alter_table=0 '\0', check_rights=1 '\001', > skip_build=0 '\0', quiet=0 '\0') at indexcmds.c:383 > #12 0x081c409b in ProcessUtility (parsetree=0x83a0c74, params=0x0, > dest=0x83a0cf0, completionTag=0xbfffec00 "") at utility.c:748 > #13 0x081c2b84 in PortalRunUtility (portal=0x83aad14, query=0x83a0a7c, > dest=0x83a0cf0, completionTag=0xbfffec00 ""
[HACKERS] plperl error when making 8.2dev CVS
Hi, I was trying to compile CVS using --with-plperl (the installed Perl is 5.6.1) and I get this error when make descends into plperl:

make[3]: *** No rule to make target `SPI.xs', needed by `SPI.c'. Stop.
make[2]: *** [all] Error 1
make[1]: *** [all] Error 2
make: *** [all] Error 2

-- regards, Jaime Casanova (DBA: DataBase Aniquilator ;)
Re: [HACKERS] SIGSEGV taken on 8.1 during dump/reload
When grilled further on (Mon, 7 Nov 2005 08:07:14 -0700), Robert Creager <[EMAIL PROTECTED]> confessed: I'm currently attached to the dead (dying) process. spl_nright seems pretty large... (gdb) print v->spl_nright $3 = 138311580 Program received signal SIGSEGV, Segmentation fault. 0x08082057 in gistUserPicksplit (r=0x48f3f1e4, entryvec=0x83e534c, v=0xbfffcbc0, itup=0x83e3454, len=227, giststate=0xbfffd120) at gistutil.c:833 833 if (v->spl_right[v->spl_nright - 1] == InvalidOffsetNumber) (gdb) bt #0 0x08082057 in gistUserPicksplit (r=0x48f3f1e4, entryvec=0x83e534c, v=0xbfffcbc0, itup=0x83e3454, len=227, giststate=0xbfffd120) at gistutil.c:833 #1 0x0807f249 in gistSplit (r=0x48f3f1e4, buffer=8917, itup=0x83e3454, len=0xbfffcea4, dist=0xbfffcea0, giststate=0xbfffd120) at gist.c:1083 #2 0x0807c8ab in gistplacetopage (state=0xbfffcf10, giststate=0xbfffd120) at gist.c:331 #3 0x0807e2cd in gistmakedeal (state=0xbfffcf10, giststate=0xbfffd120) at gist.c:878 #4 0x0807c7e1 in gistdoinsert (r=0x48f3f1e4, itup=0x83e339c, giststate=0xbfffd120) at gist.c:299 #5 0x0807c5a6 in gistbuildCallback (index=0x48f3f1e4, htup=0x83c3de8, values=0xbfffd020, isnull=0xbfffd000 "", tupleIsAlive=1 '\001', state=0xbfffd120) at gist.c:207 #6 0x080cbb14 in IndexBuildHeapScan (heapRelation=0x48f3e1cc, indexRelation=0x48f3f1e4, indexInfo=0x83c3b6c, callback=0x807c4f0 , callback_state=0xbfffd120) at index.c:1573 #7 0x0807c3b5 in gistbuild (fcinfo=0xbfffe670) at gist.c:145 #8 0x08234dfd in OidFunctionCall3 (functionId=782, arg1=1223942604, arg2=1223946724, arg3=138165100) at fmgr.c:1460 #9 0x080cb8d3 in index_build (heapRelation=0x48f3e1cc, indexRelation=0x48f3f1e4, indexInfo=0x83c3b6c) at index.c:1353 #10 0x080cacdc in index_create (heapRelationId=128249, indexRelationName=0x83a0b94 "catalog_ra_decl_index", indexRelationId=128443, indexInfo=0x83c3b6c, accessMethodObjectId=783, tableSpaceId=0, classObjectId=0x83c9cfc, primary=0 '\0', isconstraint=0 '\0', allow_system_table_mods=0 '\0', skip_build=0 
'\0') at index.c:757 #11 0x08110671 in DefineIndex (heapRelation=0x30f, indexRelationName=0x83a0b94 "catalog_ra_decl_index", indexRelationId=0, accessMethodName=0x83a0c00 "gist", tableSpaceName=0x0, attributeList=0x83a0c58, predicate=0x0, rangetable=0x0, unique=0 '\0', primary=0 '\0', isconstraint=0 '\0', is_alter_table=0 '\0', check_rights=1 '\001', skip_build=0 '\0', quiet=0 '\0') at indexcmds.c:383 #12 0x081c409b in ProcessUtility (parsetree=0x83a0c74, params=0x0, dest=0x83a0cf0, completionTag=0xbfffec00 "") at utility.c:748 #13 0x081c2b84 in PortalRunUtility (portal=0x83aad14, query=0x83a0a7c, dest=0x83a0cf0, completionTag=0xbfffec00 "") at pquery.c:987 #14 0x081c2e0b in PortalRunMulti (portal=0x83aad14, dest=0x83a0cf0, altdest=0x83a0cf0, completionTag=0xbfffec00 "") at pquery.c:1054 #15 0x081c26a6 in PortalRun (portal=0x83aad14, count=2147483647, dest=0x83a0cf0, altdest=0x83a0cf0, completionTag=0xbfffec00 "") at pquery.c:665 #16 0x081be579 in exec_simple_query (query_string=0x83a0864 "CREATE INDEX catalog_ra_decl_index ON catalog USING gist (loc);") at postgres.c:1014 #17 0x081c1377 in PostgresMain (argc=4, argv=0x8345f3c, username=0x8345f14 "robert") at postgres.c:3168 #18 0x08198692 in BackendRun (port=0x835ea08) at postmaster.c:2854 #19 0x081980a5 in BackendStartup (port=0x835ea08) at postmaster.c:2498 #20 0x081963fe in ServerLoop () at postmaster.c:1231 #21 0x081957aa in PostmasterMain (argc=3, argv=0x8344788) at postmaster.c:943 #22 0x08158b49 in main (argc=3, argv=0x8344788) at main.c:256 -- 22:06:46 up 36 days, 14:41, 7 users, load average: 2.22, 2.55, 3.26 Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004 pgpbpARdi4llg.pgp Description: PGP signature
Re: [HACKERS] Old interfaces directory in CVS tree?
On Tue, November 8, 2005 00:02, Marc G. Fournier wrote: > On Mon, 7 Nov 2005, Andrew Dunstan wrote: >> Oh, the top level interfaces directory. I misunderstood. Why is anybody >> checking that out at all? Are we keeping it for historical purposes? I think the person in question was doing a regular checkout of the mainline source tree, and found this in there. > Yes, since past releases did include it, so if we check out a previous > release, it needs to be able to pull those files as well ... But 8.1 is hardly a previous release. Shouldn't this directory be in the attic or something? Jeroen
[HACKERS] Copy From CSV feature request?
I import some of my data into my postgres database, win32 platform, via COPY table FROM with CSV. My CSV file is created from a Crystal Report (v.9). I run the report and have Crystal export the results into a CSV file (using the default settings). I have some data which looks like this when stored in the source application (MS SQL Server 2000) and Crystal:

Line 1000  1/1/2004  Company2  Person2             Misc
Line 1001  1/1/2004  Company3  Person3 " Nickname  Misc
Line 1002  1/1/2004  Company3  Person3 " Nickname  Misc
Line 1003  1/1/2004  Company4  Person4             Misc
Line 1004  1/1/2004  Company5  Person5             Misc

When I export the report as a CSV file, Crystal exports it like this:

Line 1000  "1/1/2004","Company2","Person2","Misc"
Line 1001  "1/1/2004","Company3","Person3 " Nickname","Misc"
Line 1002  "1/1/2004","Company3","Person3 " Nickname","Misc"
Line 1003  "1/1/2004","Company4","Person4","Misc"
Line 1004  "1/1/2004","Company4","Person4","Misc"

Now when I execute the COPY command it aborts on Line 1002 with the error "extra data after last expected column". I am not sure why it does not abort on Line 1001. If Crystal is exporting the data in the correct format, should COPY FROM CSV be modified to handle an odd number of text qualifiers in a row? Does anyone know if this is a valid format (no escape character automatically inserted)? Could COPY be modified to accept the data without having to insert a " someplace in the row so that the original data appears the same? Would it be too much of a performance hit to do this?

How other applications handle it: only Excel 2003 seems to display the data correctly. If I open the CSV file using OpenOffice Calc 2.0, it combines Line 1001 and Line 1002 into one row. If I import the data back into Crystal, the data after Person3 does not appear. If I import it into Access 2003, it puts Line 1001 and Line 1002 into an import error table.

Worth adding to the TODO, or not a good feature?
[HACKERS] FSM pages stored v. pages needed
On a few occasions I've come across vacuums that result in more pages stored than needed, ie: INFO: free space map: 81 relations, 235349 pages stored; 220672 total pages needed DETAIL: Allocated FSM size: 1000 relations + 200 pages = 11817 kB shared memory. I see that PrintFreeSpaceMapStatistics holds an exclusive lock both while it counts actual space usage and while it grabs the info used to calculate pages needed, so it's not a problem of part of that data changing while the count happens. I don't think that the issue is PrintFreeSpaceMapStatistics not accounting for the fact that each relation is guaranteed CHUNKPAGES, either (I believe sumRequests + numRels handles that). Any ideas as to what else could cause this problem? If it can't be figured out, should PrintFreeSpaceMapStatistics at least ensure that needed >= storedPages? As it stands right now, users need to do that math in their head when sizing max_fsm_pages, which is a bit confusing. Another question... while looking through the code, I see that any GetPageWithFreeSpace ensures that an entry for the relation it was called for exists by calling create_fsm_rel. create_fsm_rel will create the entry if needed and put it at the head of the FSM. Doesn't this mean that a lot of repeated requests to create new tables could 'blow out' a substantial chunk of the FSM? Or more likely, some periodic process that updates a large number of tables that don't normally get other requests for free space would have the same effect. I realize that improving this would probably involve a non-trivial amount of work, so maybe a good compromise would be logging a warning anytime a relation is dropped due to hitting max_fsm_relations. -- Jim C. Nasby, Sr. 
Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
Re: [HACKERS] DTrace?
From what I understand, DTrace is rather tough to use. Second, it provides Solaris-only information, so if you are suggesting it would be helpful just for Solaris, then yes, it would be. I don't think DTrace is available for Solaris 8 and 9; the company I work for is still on 8, with possibly some 7s hanging around somewhere, which is where I expect a lot of people still to be. Solaris 10 hasn't been adopted as widely as Sun expected; it may gain some momentum with OpenSolaris, but we shall have to see. karen hill wrote: I skimmed the thread "Spinlocks, yet again: analysis and proposed patches". Wouldn't Solaris 10's DTrace be helpful in seeing what's going on? It seems DTrace was meant for these types of problems. -- Aly S.P Dharshi [EMAIL PROTECTED] "A good speech is like a good dress that's short enough to be interesting and long enough to cover the subject"
[HACKERS] Supporting NULL elements in arrays
I'm starting to think about what it'll take to allow arrays to contain elements that are NULL. The behavioral semantics seem pretty straightforward, but there are a couple of areas that need discussion. One trouble spot is what the I/O representation should look like. Since 8.0, the array input parser has rejected empty elements:

regression=# select '{a,,c}'::text[];
ERROR: malformed array literal: "{a,,c}"
regression=# select '{a, ,c}'::text[];
ERROR: malformed array literal: "{a, ,c}"
-- the right way to specify a zero-length string is:
regression=# select '{a,"",c}'::text[];
   text
----------
 {a,"",c}
(1 row)

and so the most straightforward thing to do is define an empty element as meaning a NULL. But this might be objected to on a couple of grounds: 1. "Since 8.0" isn't really old enough --- there may well be applications still out there that think '{a,,c}'::text[] should produce a zero-length string element and not a NULL element. (Note: this isn't a hazard for reloading old dump files, because the array output routine has dumped empty strings as "" since 7.0 if not before.) 2. Even today, the array documentation at http://developer.postgresql.org/docs/postgres/arrays.html doesn't actually *say* that empty elements are disallowed. I don't see any alternatives that seem better, though, and the empty-element convention at least has the virtue of being reasonably compatible with what we did for I/O of composite data types. Anyone have a better idea? Another issue is what to do with the internal representation. I think that basically we want to insert a null bitmap just following the dimension info, with the option to omit the bitmap if there are no null values. Now, when the bitmap is present, it would be fairly tedious to calculate the length of the bitmap to determine the offset to the actual data; and that's an operation that we'd like to be cheap.
What I'm thinking of doing is commandeering the whole "flags" field of ArrayType (which is currently unused and should be always zero), and redefining it thus:

zero: bitmap not present (fully backwards-compatible case)
not zero: bitmap is present; value is offset to start of data

I wouldn't propose doing this if I thought we had any pressing reason to save some array flag bits for some other purpose; but I can't think of anything else we might want 'em for. Did anyone have any pet ideas this would foreclose? Also, with respect to the binary I/O representation, I'd suggest replacing the "flag" word with "zero if no bitmap, 1 if bitmap present". This would require both sender and receiver to calculate the bitmap length from the given dimensions, but it seems best to require that for error-checking reasons. Thoughts anyone? regards, tom lane
Re: [HACKERS] broken comment justification logic in new pgindent
Bruce Momjian writes: > My guess is that there is a one-off bug in there. At least a two-off ... but I think it's more likely some sort of wrong-state error, given the narrow places where it happens. I have not observed any non-comment code being mis-justified, for instance. regards, tom lane
Re: [HACKERS] broken comment justification logic in new pgindent
Tom Lane wrote: > Bruce Momjian writes: > > Good point. I see the maximum length changed in this commit: > > revision 1.70 > > date: 2004/10/02 01:10:58; author: momjian; state: Exp; lines: +1 -1 > > Update length from 75 to 79. > > > We were discussing some pgindent issues at that time on hackers, but I > > don't see any complaints about the length, so I am unsure why I modified > > it, but perhaps I received a private communication asking why it wasn't > > 79. > > I remember that discussion, and I remember we agreed to update it to 79. > You're missing the point completely: the problem at hand is that > pgindent is failing to honor its width parameter in some situations. My guess is that there is a one-off bug in there. > > Anyway, I have updated the script to be 78, > > This is just going to create a lot of gratuitous re-indents without > actually fixing the problem, because whatever bug is causing pgindent > to sometimes not honor the -l switch will still be there. I have no idea how we are going to find the bug. I am thinking we need to just reduce the maximum length until the length is less than 80. -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [HACKERS] Help with Array Function in C language...
Thanks a lot, but I'm still getting an error message like this: ERROR: cache lookup failed for type 0 What is wrong? -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane Sent: Monday, 07 November 2005 05:17 p.m. To: Cristian Prieto Cc: pgsql-general@postgresql.org; pgsql-hackers@postgresql.org Subject: Re: [HACKERS] Help with Array Function in C language... "Cristian Prieto" <[EMAIL PROTECTED]> writes: > Datum > test_array(PG_FUNCTION_ARGS) > { > ArrayType *v = PG_GETARG_ARRAYTYPE_P(1); > Datum element; > Oid array_type = get_array_type(v); I think you want get_element_type, instead. And you definitely ought to be checking for a failure result (zero). regards, tom lane
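A note beyond what the thread says: get_array_type() and get_element_type() both take a type Oid, while the code passes the ArrayType pointer itself, so the catalog lookup is done on a garbage value — the element type is available directly as ARR_ELEMTYPE(v). The subscript also has to be passed by address, and arraytyplen should be -1 for ordinary varlena arrays. A hedged corrected sketch against the 8.x server internals (argument names follow the original function; not compilable outside the server tree, and untested here):

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/array.h"
#include "utils/lsyscache.h"

PG_FUNCTION_INFO_V1(test_array);

Datum
test_array(PG_FUNCTION_ARGS)
{
	int			idx = PG_GETARG_INT32(0);
	ArrayType  *v = PG_GETARG_ARRAYTYPE_P(1);
	Oid			element_type = ARR_ELEMTYPE(v);	/* element type, not array Oid */
	int16		typlen;
	bool		typbyval;
	char		typalign;
	Datum		element;
	bool		isnull;

	if (element_type == InvalidOid)
		elog(ERROR, "could not determine array element type");

	get_typlenbyvalalign(element_type, &typlen, &typbyval, &typalign);

	/* arraytyplen is -1 for varlena arrays; the subscript goes by pointer */
	element = array_ref(v, 1, &idx, -1, typlen, typbyval, typalign, &isnull);
	if (isnull)
		PG_RETURN_NULL();
	PG_RETURN_DATUM(element);
}
```

Passing ARR_SIZE(v) as arraytyplen and an int where array_ref expects int * would also have misbehaved even with the right type Oid, so all three fixes are needed together.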
Re: [HACKERS] Help with Array Function in C language...
"Cristian Prieto" <[EMAIL PROTECTED]> writes: > Datum > test_array(PG_FUNCTION_ARGS) > { > ArrayType *v = PG_GETARG_ARRAYTYPE_P(1); > Datum element; > Oid array_type = get_array_type(v); I think you want get_element_type, instead. And you definitely ought to be checking for a failure result (zero). regards, tom lane
Re: [HACKERS] broken comment justification logic in new pgindent
Bruce Momjian writes: > Good point. I see the maximum length changed in this commit: > revision 1.70 > date: 2004/10/02 01:10:58; author: momjian; state: Exp; lines: +1 -1 > Update length from 75 to 79. > We were discussing some pgindent issues at that time on hackers, but I > don't see any complaints about the length, so I am unsure why I modified > it, but perhaps I received a private communication asking why it wasn't > 79. I remember that discussion, and I remember we agreed to update it to 79. You're missing the point completely: the problem at hand is that pgindent is failing to honor its width parameter in some situations. > Anyway, I have updated the script to be 78, This is just going to create a lot of gratuitous re-indents without actually fixing the problem, because whatever bug is causing pgindent to sometimes not honor the -l switch will still be there. regards, tom lane
Re: [HACKERS] Any advice about function caching?
"Mark Cave-Ayland" <[EMAIL PROTECTED]> writes: > My current code works by using MemoryContextCreate() to create a child > context to MessageContext and using the Init()/Delete() functions to > initialise and destroy a cache in the local backend. However, this doesn't > really work particularly well when using cursors and prepared queries since > it appears what I should be doing is using a cache per portal rather than a > cache per backend. If you want per-query state, keep it in a data structure linked from the fcinfo->flinfo->fn_extra field (physically, store it in fcinfo->flinfo->fn_mcxt, or create a subcontext of that if you wish). If you need to get control at query shutdown to free non-palloc'd resources, RegisterExprContextCallback may help. (I think such callbacks are only called during *successful* query shutdown, though, so if you have external library state you need to clean up anyway, you'll need some other approach to keeping track of it ... maybe a permanent data structure instead of a per-query one.) src/backend/utils/fmgr/funcapi.c and src/backend/executor/functions.c might be useful examples. regards, tom lane
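A minimal sketch of the fn_extra pattern described above, against the 8.x fmgr API (the struct and function names here are made up for illustration; this fragment only compiles inside the server tree):

```c
#include "postgres.h"
#include "fmgr.h"
#include "utils/memutils.h"

/* Hypothetical per-call-site state; put whatever is expensive
 * to rebuild on every call in here. */
typedef struct MyCache
{
	int			calls;			/* example state: calls seen this query */
} MyCache;

PG_FUNCTION_INFO_V1(my_cached_func);

Datum
my_cached_func(PG_FUNCTION_ARGS)
{
	MyCache    *cache = (MyCache *) fcinfo->flinfo->fn_extra;

	if (cache == NULL)
	{
		/*
		 * First call through this FmgrInfo: allocate the cache in
		 * fn_mcxt so it lives exactly as long as the call site does,
		 * and link it from fn_extra for later calls to find.
		 */
		cache = (MyCache *) MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,
												   sizeof(MyCache));
		fcinfo->flinfo->fn_extra = (void *) cache;
	}
	cache->calls++;
	PG_RETURN_INT32(cache->calls);
}
```

Because the state hangs off the FmgrInfo rather than off backend-global variables, each portal or prepared statement naturally gets its own cache, which is the behavior the original poster was missing.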
Re: [HACKERS] broken comment justification logic in new pgindent
Tom Lane wrote: > I'm noticing that the latest pgindent run has frequently rejustified > block comments to end in column 80 or 81, causing them to wrap in an > ugly way (at least in emacs). I thought the agreement was to limit > lines to 79 chars max? > > For one example see lines 475 ff in /src/backend/access/nbtree/nbtpage.c > --- the first lines of two successive paragraphs in the comment have > been made too long, which they were not before. > > I'm not sure about this offhand, but I think that all the cases I've > seen have involved first lines of paragraphs inside block comments. Good point. I see the maximum length changed in this commit: revision 1.70 date: 2004/10/02 01:10:58; author: momjian; state: Exp; lines: +1 -1 Update length from 75 to 79. We were discussing some pgindent issues at that time on hackers, but I don't see any complaints about the length, so I am unsure why I modified it, but perhaps I received a private communication asking why it wasn't 79. Anyway, I have updated the script to be 78, and attached is a diff against nbtpage.c, but I have not applied a change to that file. Would you like another pgindent run with the new value of 78? Should it be run on CVS HEAD only, or 8.0.X too? -- Bruce Momjian | http://candle.pha.pa.us pgman@candle.pha.pa.us | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup. | Newtown Square, Pennsylvania 19073

Index: nbtpage.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/access/nbtree/nbtpage.c,v
retrieving revision 1.89
diff -c -r1.89 nbtpage.c
*** nbtpage.c	6 Nov 2005 19:29:00 -0000	1.89
--- nbtpage.c	7 Nov 2005 22:53:41 -0000
***************
*** 205,223 ****
  	if (metad->btm_root != P_NONE)
  	{
  		/*
! 		 * Metadata initialized by someone else. In order to guarantee no
! 		 * deadlocks, we have to release the metadata page and start all
! 		 * over again. (Is that really true? But it's hardly worth trying
! 		 * to optimize this case.)
  		 */
  		_bt_relbuf(rel, metabuf);
  		return _bt_getroot(rel, access);
  	}

  	/*
! 	 * Get, initialize, write, and leave a lock of the appropriate type on
! 	 * the new root page. Since this is the first page in the tree, it's
! 	 * a leaf as well as the root.
  	 */
  	rootbuf = _bt_getbuf(rel, P_NEW, BT_WRITE);
  	rootblkno = BufferGetBlockNumber(rootbuf);
--- 205,223 ----
  	if (metad->btm_root != P_NONE)
  	{
  		/*
! 		 * Metadata initialized by someone else. In order to guarantee
! 		 * no deadlocks, we have to release the metadata page and start
! 		 * all over again. (Is that really true? But it's hardly worth
! 		 * trying to optimize this case.)
  		 */
  		_bt_relbuf(rel, metabuf);
  		return _bt_getroot(rel, access);
  	}

  	/*
! 	 * Get, initialize, write, and leave a lock of the appropriate type
! 	 * on the new root page. Since this is the first page in the tree,
! 	 * it's a leaf as well as the root.
  	 */
  	rootbuf = _bt_getbuf(rel, P_NEW, BT_WRITE);
  	rootblkno = BufferGetBlockNumber(rootbuf);
***************
*** 412,427 ****
  	Page		page = BufferGetPage(buf);

  	/*
! 	 * ReadBuffer verifies that every newly-read page passes PageHeaderIsValid,
! 	 * which means it either contains a reasonably sane page header or is
! 	 * all-zero. We have to defend against the all-zero case, however.
  	 */
  	if (PageIsNew(page))
  		ereport(ERROR,
  				(errcode(ERRCODE_INDEX_CORRUPTED),
! 				 errmsg("index \"%s\" contains unexpected zero page at block %u",
! 						RelationGetRelationName(rel),
! 						BufferGetBlockNumber(buf)),
  				 errhint("Please REINDEX it.")));

  	/*
--- 412,428 ----
  	Page		page = BufferGetPage(buf);

  	/*
! 	 * ReadBuffer verifies that every newly-read page passes
! 	 * PageHeaderIsValid, which means it either contains a reasonably sane
! 	 * page header or is all-zero. We have to defend against the all-zero
! 	 * case, however.
  	 */
  	if (PageIsNew(page))
  		ereport(ERROR,
[HACKERS] Help with Array Function in C language...
Hello, I'm doing a very simple C language function in PostgreSQL but I can't figure out why this is not working; the documentation about the PostgreSQL internals is not so good about arrays, and I couldn't find a suitable example of the use of some kind of array functions inside the pgsql source tree. I'm trying to get the Nth item from any array passed as argument. This could be the SQL declaration of my function:

CREATE OR REPLACE FUNCTION test_array(integer, anyarray) RETURNS anyelement AS 'test.so' LANGUAGE 'C';

And the function could look like this:

PG_FUNCTION_INFO_V1(test_array);

Datum
test_array(PG_FUNCTION_ARGS)
{
	ArrayType  *v = PG_GETARG_ARRAYTYPE_P(1);
	Datum		element;
	Oid			array_type = get_array_type(v);
	int			typlen;
	bool		typbyval;
	char		typalign;

	get_typlenbyvalalign(array_type, &typlen, &typbyval, &typalign);
	element = array_ref(v, 1, PG_GETARG_INT32(0), ARR_SIZE(v),
						typlen, typbyval, typalign, false);
	PG_RETURN_DATUM(element);
}

The function compiles without error, but when I try something like this:

SELECT test_array(3, array[1,2,3,4,5,6]);

it returns an error like this: ERROR: cache lookup failed for type 0 What is wrong here?
Re: [HACKERS] Odd db lockup - investigation advice wanted
Marc Munro <[EMAIL PROTECTED]> writes: > PGDATA is installed on a Netapp network storage device. This is generally not recommended--it should be on a local disk (SAN, etc) rather than NFS. > We are using slony 1.1.0 for replication. > > The (provider) database locked-up after I killed a slony client process > (kill -9) on the subscriber. Psql connections would not respond to \d > and simply locked up. I was able to run a query to check for blocking > locks - this returned no rows. There was a significant test load on the > database at the time. > > I stopped the database but was unable to restart it. I was unable to > kill a number of postgres processes and could not release postgres > shared memory. Having decided that the database was toast, I discovered > that I could not even delete the database files, and eventually the only > solution was a full reboot. This sounds like you had kernel problems, not PG problems. Linux NFS support is continually improving but still seems to have issues. I would strongly recommend using local disk for your database storage. -Doug
[HACKERS] DTrace?
I skimmed the thread "Spinlocks, yet again: analysis and proposed patches". Wouldn't Solaris 10's DTrace be helpful in seeing what's going on? It seems DTrace was meant for these types of problems.
Re: [HACKERS] Another pgindent gripe
Pgindent adds spaces after the stars if it doesn't recognize the thing before the star as a typedef... Could it be that somehow the list of typedefs included in pgindent got corrupted? > -Original Message- > From: [EMAIL PROTECTED] [mailto:pgsql-hackers-[EMAIL PROTECTED] On Behalf Of Tom Lane > Sent: Monday, November 07, 2005 8:19 AM > To: Neil Conway > Cc: Bruce Momjian; Hackers > Subject: Re: [HACKERS] Another pgindent gripe > > Neil Conway <[EMAIL PROTECTED]> writes: > > On a related note, most of these changes are completely bogus: > > http://developer.postgresql.org/cvsweb.cgi/pgsql/src/pl/plpgsql/src/pl_exec.c.diff?r1=1.152;r2=1.153 > > Oy vey! Why did it insert spaces after the stars in all those function > declarations? That's certainly not in conformance with project style > ... and I don't see it having happened elsewhere. > > Seems like pgindent has suffered some significant regressions since the > 8.0 run. I thought it had not been changed much at all, but evidently > that's wrong. > > regards, tom lane
[HACKERS] Odd db lockup - investigation advice wanted
Last week I managed to lock up and then crash a development database. I'm going to try to reproduce it today and would like to know what I can do to further investigate the problem. I am running Linux 2.6.9-11.ELsmp X86_64 on a Quad Dual-Core Opteron. I have the following postgres RPMs installed: postgresql-libs-7.4.7-2.RHEL4.1 postgresql-contrib-8.0.3-1PGDG postgresql-libs-8.0.3-1PGDG postgresql-server-8.0.3-1PGDG postgresql-8.0.3-1PGDG PGDATA is installed on a Netapp network storage device. We are using slony 1.1.0 for replication. The (provider) database locked up after I killed a slony client process (kill -9) on the subscriber. Psql connections would not respond to \d and simply locked up. I was able to run a query to check for blocking locks - this returned no rows. There was a significant test load on the database at the time. I stopped the database but was unable to restart it. I was unable to kill a number of postgres processes and could not release postgres shared memory. Having decided that the database was toast, I discovered that I could not even delete the database files, and eventually the only solution was a full reboot. This hardware was destined to be put into production in the next two weeks but this crash has shaken our confidence somewhat. 
Any advice on how to further investigate this would be much appreciated. Here is an excerpt from the logs at the time of the failure:

LOG: duration: 4143.996 ms statement: execute wibble_transaction ( '2012416', '3410660', '2005-11-04 17:39:49 -0600', '20005', '3', '0', '{22000,22011,22001,22002,22003,22004,22005,22006,22007,22008,22009,22010}', '{8,0,0,0,2,1,0,7,2,23,26,0}' )
LOG: duration: 4814.012 ms statement: execute write_wibble ( '2048847', '{{15000,17660,0},{15001,3522,0},{15002,0,0},{15003,3851,0},{15004,0,0},{15005,0,0},{15006,0,0},{15007,0,0},{15011,0,0},{15012,0,0},{15013,0,0},{15014,0,0},{15015,0,0},{15016,0,0},{15017,0,0},{15018,0,0},{15019,0,0},{15020,0,0},{15021,0,0},{15022,0,0},{15023,0,0},{15024,0,0},{15025,0,0},{15026,0,0},{15048,0,0},{15030,0,0},{15031,0,0},{15032,0,0},{15033,0,0},{15035,0,0},{15036,0,0},{15037,0,0},{15038,0,0},{15040,0,0},{15041,1,0},{15042,0,0},{15043,0,0},{15046,0,0},{15047,0,0},{15049,1000,0},{15050,0,0},{15051,0,0},{15052,0,0}}' )
LOG: unexpected EOF on client connection
LOG: received fast shutdown request
LOG: aborting any active transactions
LOG: received immediate shutdown request
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.

My lock checking query: select d.datname || '.' 
|| c.relname as object, l.transaction as trans, l.pid, l.mode,
   case when b.blocked then b.blocker else null end as blocker
from ( select w.pid as pid, h.pid as blocker, 't'::bool as blocked
       from pg_locks h, pg_locks w
       where h.granted and not w.granted
         and ( (h.relation = w.relation and h.database = w.database)
               or h.transaction = w.transaction)
       union
       select h.pid, null, 'f'::bool as blocked
       from pg_locks h, pg_locks w
       where h.granted and not w.granted
         and ( (h.relation = w.relation and h.database = w.database)
               or h.transaction = w.transaction)
     ) b,
     pg_locks l
     left outer join pg_database d on d.oid = l.database
     left outer join pg_class c on c.oid = l.relation
where l.pid = b.pid
order by l.pid;

Thanks for any suggestions. Marc
Re: [HACKERS] Crash during elog.c...
"Jim C. Nasby" <[EMAIL PROTECTED]> writes: > On Fri, Nov 04, 2005 at 08:06:39PM -0500, Tom Lane wrote: >> Um, what's your log_line_prefix setting, and is the next format code >> %i by any chance? I've just noticed an utterly brain-dead assumption >> somebody stuck into ps_status.c awhile back. > log_line_prefix = '%t|%s|%r|%d|%i|%p' > So yeah, looks like %i is next. The quickest way to get rid of the crash will be to remove %i, then. If you don't want to do that, see the patch I committed to CVS. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Interval aggregate regression failure (expected seems
On 07 Nov 2005 14:22:37 -0500, Greg Stark <[EMAIL PROTECTED]> wrote: > IIRC, floating point registers are actually longer than a double so if the > entire calculation is done in registers and then the result rounded off to > store in memory it may get the right answer. Whereas if it loses the extra > bits on the intermediate values (the infinite repeating fractions) that might > be where you get the imprecise results. Hm. I thought -march=pentium4 -mcpu=pentium4 implies -mfpmath=sse. SSE is a much better choice on P4 for performance reasons, and never has excess precision. I'm guessing from the above that I'm incorrect, in which case we should always compile with -mfpmath=sse -msse2 when we compile with -march=pentium4; this should remove problems caused by excess precision. The same behavior can be had on non-SSE platforms with -ffloat-store.
Re: [HACKERS] Crash during elog.c...
On Fri, Nov 04, 2005 at 08:06:39PM -0500, Tom Lane wrote: > "Jim C. Nasby" <[EMAIL PROTECTED]> writes: > > My client (same one with the slru.c issue) has had 3 of these in the > > past day... > > > (gdb) print *str > > $39 = {data = 0x848030 "2005-11-04 00:01:02 EST|2005-11-04 00:00:08 > > EST|216.187.113.78(39476)|didit|", len = 76, > > maxlen = 256, cursor = 0} > > Um, what's your log_line_prefix setting, and is the next format code > %i by any chance? I've just noticed an utterly brain-dead assumption > somebody stuck into ps_status.c awhile back. log_line_prefix = '%t|%s|%r|%d|%i|%p' So yeah, looks like %i is next. I recall seeing something about %i in the backtrace or something else related to this, but I can't find it now. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com work: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461
[HACKERS] Any advice about function caching?
Hi everyone, I'm currently in the middle of trying to implement a cache for a PostGIS server-side function that uses an external library with an expensive startup cost, and was hoping to find some advice on the best way to implement it. The function currently takes 2 parameters - a customised C geometry datatype and a destination SRID which is the primary key to the library parameters in the spatial_ref_sys table. Currently for each invocation of the function it is necessary to look up the parameters for both the source and destination SRIDs in spatial_ref_sys, create a handle for each SRID in the external library, perform the calculation, and then return the result. Obviously this process is quite expensive when calling the function on large tables of data. I've currently implemented a basic cache for both the spatial_ref_sys lookups and the external library functions which gives a dramatic performance improvement, dropping my test query from over 2 minutes to 6s(!), but the part I'm experiencing difficulty with is working out when the start and end of a query occurs. My current code works by using MemoryContextCreate() to create a child context of MessageContext and using the Init()/Delete() functions to initialise and destroy a cache in the local backend. However, this doesn't really work particularly well when using cursors and prepared queries since it appears what I should be doing is using a cache per portal rather than a cache per backend. The other complication is that the external library handles cannot be free()d directly but instead a customised free function must be called. So my questions are: 1) What is the best way of keeping track of whether the cache has been initialised for a given portal? 
Looking in contrib/dblink.c it looks as if the best way to go would be to create a hash table per backend based upon the portal name in ActivePortal, attach the child context to PortalContext, and use the Delete() function to call the customised free function and remove the hash table entry. However, which memory context would be the best one to use in order to store the hash table entries? 2) Is there a better way of doing this? 3) Would anyone consider a patch to the source tree to make implementing something like this easier in future? Many thanks, Mark. WebBased Ltd 17 Research Way Plymouth PL6 8BT T: +44 (0)1752 797131 F: +44 (0)1752 791023 http://www.webbased.co.uk http://www.infomapper.com http://www.swtc.co.uk
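The cache-per-portal design being discussed can be illustrated outside the backend. This is a hedged Python sketch, not PostgreSQL C: PortalCache, make_handle and free_handle are invented names standing in for the expensive library-handle constructor and its customised free function; in the real backend the destructor would hang off the portal memory context's Delete() callback.

```python
class PortalCache:
    """Sketch: one cache of expensive library handles per portal,
    with a custom destructor run when the portal goes away."""

    def __init__(self, make_handle, free_handle):
        self._make = make_handle        # expensive constructor (library init)
        self._free = free_handle        # customised free, not plain free()
        self._portals = {}              # portal name -> {srid: handle}

    def lookup(self, portal, srid):
        cache = self._portals.setdefault(portal, {})
        if srid not in cache:           # miss: pay the startup cost once
            cache[srid] = self._make(srid)
        return cache[srid]

    def drop_portal(self, portal):
        # analogous to the memory context Delete() callback
        for handle in self._portals.pop(portal, {}).values():
            self._free(handle)


made, freed = [], []
pc = PortalCache(lambda srid: made.append(srid) or ("handle", srid),
                 lambda h: freed.append(h))
pc.lookup("c1", 4326)
pc.lookup("c1", 4326)    # second call hits the cache; no new handle made
pc.drop_portal("c1")     # custom free runs exactly once per handle
```

The design choice mirrored here is that the cache key is the portal name, so cursors and prepared queries each get their own lifetime, and teardown always goes through the library's own free routine.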
Re: [HACKERS] Interval aggregate regression failure (expected seems
c.f.: `-ffloat-store' Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory. This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a `double' is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use `-ffloat-store' for such programs, after modifying them to store all pertinent intermediate computations into variables. -- greg
Re: [HACKERS] Interval aggregate regression failure (expected seems
Tom Lane <[EMAIL PROTECTED]> writes: > I think we can still file this as a compiler bug, because I'm pretty sure > the C spec does not allow rearrangement of floating-point calculations ... It may have more to do with whether the floating point value can stay in a floating point register long enough to complete the calculation. IIRC, floating point registers are actually longer than a double so if the entire calculation is done in registers and then the result rounded off to store in memory it may get the right answer. Whereas if it loses the extra bits on the intermediate values (the infinite repeating fractions) that might be where you get the imprecise results. It makes some sense that -mcpu and -march give the compiler enough information to keep the intermediate results in registers more effectively. -- greg
Re: [HACKERS] REL8_1_STABLE and HEAD
Teodor Sigaev <[EMAIL PROTECTED]> writes: > 8.1 has been tagged; may I develop/commit new tsearch2 code in the HEAD branch? Yes, HEAD is 8.2devel now. regards, tom lane
Re: [HACKERS] parameterized limit statements
"Merlin Moncure" <[EMAIL PROTECTED]> writes: > hm...I'm using named statements over ExecPrepared. I can also confirm > the results inside psql with prepare/execute. I can send you a test > case, but was just wondering if your change to makelimit was supposed to > address this case. Nope, sorry. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[HACKERS] REL8_1_STABLE and HEAD
Hi! 8.1 has been tagged; may I develop/commit new tsearch2 code in the HEAD branch? -- Teodor Sigaev E-mail: [EMAIL PROTECTED] WWW: http://www.sigaev.ru/
Re: [HACKERS] parameterized limit statements
"Merlin Moncure" <[EMAIL PROTECTED]> writes: > The statements are invariably in form of > select a,b,c,d from t > where a >= $1 and > (a > $1 or b >= $2) and > (a > $1 or b > $2 or c >= $3) and > (a > $1 or b > $2 or c > $3 or d > $4) > order by a, b, c, d limit $5; > ^^ > If I hardcode $5 to any sub-ridiculous value, I get a proper index plan. > Does your patch assume a limit of 1 or 10% of table rows? If it doesn't have a value for the parameter, it'll assume 10% of table rows, which is what it's done for a long time if the LIMIT isn't reducible to a constant. I suspect the real issue here is that whatever you are doing doesn't give the planner a value to use for the parameter. IIRC, at the moment the only way that that happens is if you use the unnamed-statement variation of the Parse/Bind/Execute protocol. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] parameterized limit statements
> > ^^ > > If I hardcode $5 to any sub-ridiculous value, I get a proper index plan. > > Does your patch assume a limit of 1 or 10% of table rows? > > If it doesn't have a value for the parameter, it'll assume 10% of table > rows, which is what it's done for a long time if the LIMIT isn't > reducible to a constant. > > I suspect the real issue here is that whatever you are doing doesn't > give the planner a value to use for the parameter. IIRC, at the moment > the only way that that happens is if you use the unnamed-statement > variation of the Parse/Bind/Execute protocol. hm...I'm using named statements over ExecPrepared. I can also confirm the results inside psql with prepare/execute. I can send you a test case, but was just wondering if your change to makelimit was supposed to address this case. Merlin
Re: [HACKERS] parameterized limit statements
> "Merlin Moncure" <[EMAIL PROTECTED]> writes: > > Is this correct? > > Sure, what do you think is wrong with it? plan_rows is initially a copy > of the child node's output-rows estimate, and then it gets modified. OK, just a stab in the dark...not familiar at all with this code (seemed odd to use value in comparison right before it was assigned). I am still getting prepared statements that are flipping to seqscan or bitmap scan. The statements are invariably in form of select a,b,c,d from t where a >= $1 and (a > $1 or b >= $2) and (a > $1 or b > $2 or c >= $3) and (a > $1 or b > $2 or c > $3 or d > $4) order by a, b, c, d limit $5; ^^ If I hardcode $5 to any sub-ridiculous value, I get a proper index plan. Does your patch assume a limit of 1 or 10% of table rows? FYI: the planner gets it right about 95% of the time and produces the best possible plan...an index filtering on a and scanning for b,c,d. Merlin ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Interval aggregate regression failure (expected seems
I wrote: > Michael Paesold <[EMAIL PROTECTED]> writes: >> I am definatly not going to use -march=pentium4 in any production >> system. Should I open a bug report with RedHat (gcc vendor)? > Yeah, but they'll probably want a smaller test case than "Postgres fails > its regression tests" :-( I have just confirmed that the problem still exists in FC4's current compiler (gcc 4.0.1, gcc-4.0.1-4.fc4), which probably will boost up the priority of the complaint quite a long way in Red Hat's eyes. I've also confirmed that the problem is in interval_div; you can reproduce the failure with select '41 years 1 mon 11 days'::interval / 10; which should give '4 years 1 mon 9 days 26:24:00', but when timestamp.o is compiled with "-mcpu=pentium4 -march=pentium4", you get '4 years 1 mon 10 days 02:24:00'. --enable-integer-datetimes is not relevant because the interesting part is all double/integer arithmetic. Looking at this, though, I wonder if the pentium4 answer isn't "more correct". If I'm doing the math by hand correctly, what we end up with is having to cascade 3/10ths of a month down into the days field, and since the conversion factor is supposed to be 30 days/month, that should be exactly 9 days. Plus the one full day from the 11/10 days gives 10 days. I think what is happening on all the non-Pentium platforms is that (3.0/10.0)*30.0 is producing something just a shade less than 9.0, whereas the Pentium gives 9.0 or a shade over, possibly due to rearrangement of the calculation. I think we can still file this as a compiler bug, because I'm pretty sure the C spec does not allow rearrangement of floating-point calculations ... but we might want to think about tweaking the code's roundoff behavior just a bit. An example that's a little easier to look at is select '41 years 1 mon'::interval / 10; I get '4 years 1 mon 9 days' with the pentium4 optimization, and '4 years 1 mon 8 days 24:00:00' without, and the former seems more correct... 
regards, tom lane
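Tom's hand arithmetic can be sanity-checked with a small sketch. This is illustrative Python, not the timestamp.c code itself; interval_div's truncation cascade is paraphrased from the description in this thread (30 days per month, fractional months cascading into days, fractional days into seconds), evaluated in plain IEEE-double arithmetic with no x87 excess precision.

```python
def interval_div(months, days, secs, f):
    """Paraphrase of the cascade described for interval_div():
    truncate each field, push the fraction down one unit."""
    q_months = int(months / f)
    month_rem = months / f - q_months
    q_days = int(days / f)
    day_rem = days / f - q_days
    extra_days = month_rem * 30.0        # 3/10 month is 9 days "on paper"
    q_days += int(extra_days)
    day_rem += extra_days - int(extra_days)
    return q_months, q_days, secs / f + day_rem * 86400.0

# '41 years 1 mon 11 days' / 10  ->  months=493, days=11, secs=0
m, d, s = interval_div(493, 11, 0.0, 10)
# In pure double arithmetic, 493/10 - 49 comes out a shade under 0.3, so
# (493/10 - 49) * 30 is a shade under 9.0 and truncates to 8 extra days,
# giving 49 months, 9 days, 95040 secs = '4 years 1 mon 9 days 26:24:00'.
# Keeping the intermediates in wider x87 registers can push the product up
# to 9.0, which lands on the pentium4 answer '4 years 1 mon 10 days 02:24:00'.
```

This also suggests why the bug only shows with -march=pentium4: the answer depends on exactly where the compiler spills intermediates from extended-precision registers back to 64-bit doubles.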
Re: [HACKERS] Old interfaces directory in CVS tree?
On Mon, 7 Nov 2005, Andrew Dunstan wrote: Jeroen T. Vermeulen wrote: On Sat, Nov 05, 2005 at 09:46:15AM -0500, Andrew Dunstan wrote: A libpqxx user just informed me that the anonymous CVS repository at anoncvs.postgresql.org still contained a 2002 version of libpqxx in the interfaces directory. I checked it out and otherwise it seems to be the current source tree--at least I found an 8.1 version number somewhere. Your informant probably needs to use the -P option for cvs checkout. Doesn't make a difference. Besides, the outdated source is there even on a fresh checkout. Oh, the top level interfaces directory. I misunderstood. Why is anybody checking that out at all? Are we keeping it for historical purposes? Yes, since past releases did include it, so if we check out a previous release, it needs to be able to pull those files as well ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664
Re: [HACKERS] Getting table name/tuple from OID
Actually I want to do this from inside the postgres code, i.e. I want to get the table name and tuple values from the corresponding table OID and tuple OID. Is there any built-in function in the postgres code to do this? Paresh Christopher Kings-Lynne wrote: > Try > > SELECT 12341234::regclass; > > Where 12341234 is the OID of a table. > > Otherwise try: > > SELECT tableoid, * FROM table; > > To get the tableoid on each row. > > Chris > > Paresh Bafna wrote: > >> Is there any way to retrieve table name and/or tuple values from OID of >> table/tuple?
Re: [HACKERS] Another pgindent gripe
Neil Conway <[EMAIL PROTECTED]> writes: > On a related note, most of these changes are completely bogus: > http://developer.postgresql.org/cvsweb.cgi/pgsql/src/pl/plpgsql/src/pl_exec.c.diff?r1=1.152;r2=1.153 Oy vey! Why did it insert spaces after the stars in all those function declarations? That's certainly not in conformance with project style ... and I don't see it having happened elsewhere. Seems like pgindent has suffered some significant regressions since the 8.0 run. I thought it had not been changed much at all, but evidently that's wrong. regards, tom lane
Re: [HACKERS] Another pgindent gripe
When someone is doing real work and pgindent creates so many cosmetic changes in the current CVS repository, would it be feasible to first commit a "whitespace only" no-op revision, so that real changes can be easily identified? I have seen this approach work well for others. -Kevin >>> Neil Conway <[EMAIL PROTECTED]> >>> On Mon, 2005-07-11 at 09:19 -0300, Alvaro Herrera wrote: > I have another gripe regarding pgindent. Why does it change indenting > of function declarations? On a related note, most of these changes are completely bogus: http://developer.postgresql.org/cvsweb.cgi/pgsql/src/pl/plpgsql/src/pl_exec.c.diff?r1=1.152;r2=1.153 -Neil
Re: [HACKERS] Another pgindent gripe
On Mon, 2005-07-11 at 09:19 -0300, Alvaro Herrera wrote: > I have another gripe regarding pgindent. Why does it change indenting > of function declarations? On a related note, most of these changes are completely bogus: http://developer.postgresql.org/cvsweb.cgi/pgsql/src/pl/plpgsql/src/pl_exec.c.diff?r1=1.152;r2=1.153 -Neil
Re: [HACKERS] Access CVS
On Mon, 2005-07-11 at 20:39 +0530, Sreejesh O S wrote: > How can I access CVS ? http://www.postgresql.org/developer/sourcecode/ Please try to do at least a little research before posting. -Neil
[HACKERS] Access CVS
How can I access CVS ?
Re: [HACKERS] SIGSEGV taken on 8.1 during dump/reload
When grilled further on (Sun, 6 Nov 2005 20:00:38 -0700), Robert Creager <[EMAIL PROTECTED]> confessed: Didn't set the core big enough (1Mb). It's now at 50Mb. I am using PGSphere, which should be the only gist indexes in use. gdb /usr/local/pgsql810/bin/postgres core.28053 ... warning: core file may not match specified executable file. Core was generated by `postgres: robert tassiv [local] CREATE INDEX '. Program terminated with signal 11, Segmentation fault. warning: current_sos: Can't read pathname for load map: Input/output error Cannot access memory at address 0x400d8000 #0 0x08082057 in gistUserPicksplit (r=Cannot access memory at address 0xbfffcb28 ) at gistutil.c:833 833 if (v->spl_right[v->spl_nright - 1] == InvalidOffsetNumber) (gdb) bt #0 0x08082057 in gistUserPicksplit (r=Cannot access memory at address 0xbfffcb28 ) at gistutil.c:833 Cannot access memory at address 0xbfffcb3c Unfortunately, I have to run shortly. If someone wants a 1Mb core, I have one. I'll have (presumably) more info this evening with the bigger core, Cheers, Rob -- 07:56:01 up 36 days, 30 min, 7 users, load average: 2.25, 2.31, 2.23 Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004
Re: [HACKERS] parameterized limit statements
"Merlin Moncure" <[EMAIL PROTECTED]> writes: > Is this correct? Sure, what do you think is wrong with it? plan_rows is initially a copy of the child node's output-rows estimate, and then it gets modified. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[HACKERS] parameterized limit statements
I noticed your 8/18 commit to address an issue I raised regarding parameterized limit statements. Specifically, prepared statements with a variable limit would tend to revert to bitmap or seqscan. I checked out CVS tip and am still getting that behavior :(. So, I had a look at createplan.c to see what was going on. Inside makelimit, there is:

    if (count_est != 0)
    {
        double      count_rows;

        if (count_est > 0)
            count_rows = (double) count_est;
        else
            count_rows = clamp_row_est(lefttree->plan_rows * 0.10);
        if (count_rows > plan->plan_rows)
            count_rows = plan->plan_rows;
        if (plan->plan_rows > 0)
            plan->total_cost = plan->startup_cost +
                (plan->total_cost - plan->startup_cost) * count_rows / plan->plan_rows;
        plan->plan_rows = count_rows;
        if (plan->plan_rows < 1)
            plan->plan_rows = 1;
    }

Is this correct? plan_rows is assigned (from count_rows) after it is checked to determine cost. If this is correct, would you like a test case demonstrating the behavior? Merlin
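For readers following the thread, the cost adjustment quoted above can be transliterated into a few lines. This is an illustrative Python paraphrase, not the planner itself: limit_cost is an invented name, while clamp_row_est mirrors the planner helper of the same name (round, then clamp to at least one row).

```python
def clamp_row_est(nrows):
    # mirror of the planner's clamp_row_est(): round, keep >= 1
    return max(1.0, float(round(nrows)))

def limit_cost(startup_cost, total_cost, input_rows, count_est):
    """Sketch of the LIMIT costing above.

    count_est > 0  : the planner knows the LIMIT value
    count_est <= 0 : parameterized LIMIT, so assume 10% of input rows
    Returns (plan_rows, total_cost) after the LIMIT node's adjustment.
    """
    if count_est > 0:
        count_rows = float(count_est)
    else:
        count_rows = clamp_row_est(input_rows * 0.10)
    count_rows = min(count_rows, input_rows)
    if input_rows > 0:
        # linear interpolation: pay only the fraction of the child's
        # run cost needed to produce count_rows of its input_rows
        total_cost = startup_cost + \
            (total_cost - startup_cost) * count_rows / input_rows
    plan_rows = max(count_rows, 1.0)
    return plan_rows, total_cost
```

Run on a child plan of 10,000 rows costing 1000, a known LIMIT 100 cuts the estimated cost to 10, while an unknown parameterized limit falls back to the 10% guess (1000 rows, cost 100), which is exactly the behavior Merlin is observing with named prepared statements.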
Re: [HACKERS] Another pgindent gripe
Alvaro Herrera <[EMAIL PROTECTED]> writes: > I have another gripe regarding pgindent. Why does it change indenting > of function declarations? What it's doing is indenting the additional lines in the same way as they'd be indented in the function definition, that is

    static void foo(int p1,
                    int p2);
    static void foo(int p1,
                    int p2)
    { ...

I've always thought this was pretty stupid, too --- I think it'd look nicer as

    static void foo(int p1,
            int p2);

But I dunno whether it is easily fixable. It's always done that AFAIR. regards, tom lane
Re: [HACKERS] Getting table name/tuple from OID
Try SELECT 12341234::regclass; Where 12341234 is the OID of a table. Otherwise try: SELECT tableoid, * FROM table; To get the tableoid on each row. Chris Paresh Bafna wrote: Is there any way to retrieve table name and/or tuple values from OID of table/tuple?
Re: [HACKERS] Interval aggregate regression failure (expected seems
Michael Paesold <[EMAIL PROTECTED]> writes: > I am definitely not going to use -march=pentium4 in any production > system. Should I open a bug report with RedHat (gcc vendor)? Yeah, but they'll probably want a smaller test case than "Postgres fails its regression tests" :-( My guess is that the problem is misoptimization of the interval_div() function (look in utils/adt/timestamp.c), and that you could probably construct a nice small test case for them out of that function. regards, tom lane
Re: [HACKERS] Getting table name/tuple from OID
Paresh Bafna schrieb: Is there any way to retrieve table name and/or tuple values from OID of table/tuple? Yes.
[HACKERS] Getting table name/tuple from OID
Is there any way to retrieve table name and/or tuple values from OID of table/tuple?
Re: [HACKERS] Old interfaces directory in CVS tree?
Jeroen T. Vermeulen wrote: On Sat, Nov 05, 2005 at 09:46:15AM -0500, Andrew Dunstan wrote: A libpqxx user just informed me that the anonymous CVS repository at anoncvs.postgresql.org still contained a 2002 version of libpqxx in the interfaces directory. I checked it out and otherwise it seems to be the current source tree--at least I found an 8.1 version number somewhere. Your informant probably needs to use the -P option for cvs checkout. Doesn't make a difference. Besides, the outdated source is there even on a fresh checkout. Oh, the top level interfaces directory. I misunderstood. Why is anybody checking that out at all? Are we keeping it for historical purposes? cheers andrew
Re: [HACKERS] Problems with index-scan on regexp in 8.1
On Mon, Nov 07, 2005 at 07:50:20AM +0100, Lars Kanis wrote: > We're using Postgres 8.0.2 on SuSE10.0 (64-Bit). Tests on 8.1 beta 4 have > shown no problems but this one: > > SELECT * FROM mitglieder WHERE lower(vorname::text)='lars' > > does a bitmap-index-scan like this: Check your locales. For non-ASCII locales the normal shortcuts for regex optimisation can't apply. Evidently your old installation uses a different locale from your new one. You should be able to make this work by declaring your index with "text_pattern_ops", like so: CREATE INDEX myindex ON mytable(mycolumn text_pattern_ops); Hope this helps, -- Martijn van Oosterhout http://svana.org/kleptog/ > Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a > tool for doing 5% of the work and then sitting around waiting for someone > else to do the other 95% so you can sue them.
[HACKERS] Problems with index-scan on regexp in 8.1
We're using Postgres 8.0.2 on SuSE10.0 (64-Bit). Tests on 8.1 beta 4 have shown no problems but this one:

SELECT * FROM mitglieder WHERE lower(vorname::text)='lars'

does a bitmap-index-scan like this:

Bitmap Heap Scan on mitglieder  (cost=10.68..3770.52 rows=1051 width=226)
  Recheck Cond: (lower((vorname)::text) = 'lars'::text)
  ->  Bitmap Index Scan on mitgl_lower_namen_idx  (cost=0.00..10.68 rows=1051 width=0)
        Index Cond: (lower((vorname)::text) = 'lars'::text)

but a regular expression always results in a seqscan:

SELECT * FROM mitglieder WHERE lower(vorname::text)~'^lars'

Seq Scan on mitglieder  (cost=0.00..79703.73 rows=1 width=226)
  Filter: (lower((vorname)::text) ~ '^lars'::text)

whereas V8.0.2 does a proper index-scan:

Index Scan using mitgl_lower_namen_idx on mitglieder  (cost=0.01..18.05 rows=4 width=225)
  Index Cond: ((lower((vorname)::text) >= 'lars'::text) AND (lower((vorname)::text) < 'lart'::text))
  Filter: (lower((vorname)::text) ~ '^lars'::text)

The use of indexes for regexp is quite important for the search in our interactive frontend. kind regards Lars Kanis
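The 8.0 plan above turns the anchored pattern '^lars' into the btree range 'lars' <= x < 'lart', and that derivation only works when string comparison is byte-wise, which is why the locale matters. A sketch of the prefix-to-range step, assuming C-locale (byte-wise) ordering; prefix_range is an invented name, and the backend's make_greater_string does this with considerably more care for multibyte encodings.

```python
def prefix_range(prefix):
    """Derive the half-open range [prefix, upper) containing exactly the
    strings that start with `prefix`, assuming byte-wise (C locale)
    ordering. This is the trick behind turning `~ '^lars'` into
    `>= 'lars' AND < 'lart'` for an index range scan."""
    b = bytearray(prefix.encode())
    while b:
        if b[-1] < 0xFF:
            b[-1] += 1                  # bump the last byte: 'lars' -> 'lart'
            return prefix, b.decode(errors="ignore")
        b.pop()                         # last byte maxed out: shorten, retry
    return prefix, None                 # no finite upper bound exists

lo, hi = prefix_range("lars")           # matches the 8.0 Index Cond above
```

Under a non-C locale the sort order is collation-defined rather than byte-wise, so no such range is guaranteed to be correct; that is what the text_pattern_ops opclass suggested in the reply restores.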
[HACKERS] Another pgindent gripe
Hi, I have another gripe regarding pgindent. Why does it change indenting of function declarations? An example is at the end. I think it may be thinking that declarations should be aligned using 8-space tabs. Can this be corrected? It annoyed me just now, because I'm adjusting my vacuum patch and lots of conflicts appeared because it chose to change the spacing. A pointless change for an automatic tool to make, if I must add, and one that adds gratuitous work for the human wanting to do some real work.

Index: vacuum.c
===================================================================
RCS file: /home/alvherre/cvs/pgsql/src/backend/commands/vacuum.c,v
retrieving revision 1.316
retrieving revision 1.317
diff -c -r1.316 -r1.317
*** vacuum.c    3 Oct 2005 22:52:21 -0000       1.316
--- vacuum.c    15 Oct 2005 02:49:16 -0000      1.317
***************
*** 198,204 ****
  /* non-export function prototypes */
  static List *get_rel_oids(List *relids, const RangeVar *vacrel,
!             const char *stmttype);
  static void vac_update_dbstats(Oid dbid, TransactionId vacuumXID,
              TransactionId frozenXID);
--- 198,204 ----
  /* non-export function prototypes */
  static List *get_rel_oids(List *relids, const RangeVar *vacrel,
!            const char *stmttype);
  static void vac_update_dbstats(Oid dbid, TransactionId vacuumXID,
              TransactionId frozenXID);

-- Alvaro Herrera    http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [HACKERS] Interval aggregate regression failure (expected seems wrong)
Michael Glaesemann wrote:
> So what do you have in results/interval.out? @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs seems wrong to me, no?

select avg(f1) from interval_tbl;
                       avg
-------------------------------------------------
 @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs
(1 row)

> The point of the change to the interval datatype in 8.1 is to keep track of months, days, and seconds (which in turn are represented as hours, minutes, and seconds). Previous releases tracked only months and seconds. This has advantages for using intervals with dates and timestamps that involve daylight saving time changes. Admittedly, it looks odd at first, but it falls out of the change in behavior of the interval datatype. There are two new functions, justify_days and justify_hours, that you can use to put intervals into more traditional forms. http://developer.postgresql.org/docs/postgres/functions-datetime.html

Thank you very much for the insight.

> Doesn't explain why you're getting a regression failure though.

Well, I have something now. It seems to be a compiler/optimization issue. I wrote:

> CFLAGS = -O2 -mcpu=pentium4 -march=pentium4 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -fno-strict-aliasing -g

I had set CFLAGS to -O2 -mcpu=pentium4 -march=pentium4. I have been using these settings for testing PostgreSQL tip for some time now and never had any problems. Removing the cpu and architecture optimization part changes the behavior of the interval aggregate, so results/interval.out now also matches the expected output:

select avg(f1) from interval_tbl;
                       avg
-------------------------------------------------
 @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs
(1 row)

Switching -mcpu=pentium4 -march=pentium4 back on results in the wrong output. This is 100% reproducible. Can somebody with more knowledge explain why the compiler should stumble over just this? Pure luck?
I have tested these combinations of CFLAGS:

-O2                                  OK
-O2 -mcpu=i686 -march=i686           OK (good, RPMs are built with these)
-O2 -mcpu=pentium4 -march=i686       OK
-O2 -mcpu=pentium4 -march=pentium4   fails

I am definitely not going to use -march=pentium4 on any production system. Should I open a bug report with Red Hat (gcc vendor)?

Best Regards,
Michael Paesold
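The justify functions mentioned upthread can be used to fold such non-normalized results back into a traditional form. A sketch, not from the thread, using the documented 8.1 functions justify_hours and justify_days:

```sql
-- justify_hours moves whole multiples of 24 hours into the days field,
-- so the odd-looking "9 days 28 hours" becomes "10 days 4 hours":
SELECT justify_hours(interval '4 years 1 mon 9 days 28 hours 18 mins 23 secs');

-- justify_days additionally moves whole multiples of 30 days into months:
SELECT justify_days(interval '1 mon 35 days');
```

The raw avg() result is kept non-normalized on purpose, since 8.1 tracks months, days, and seconds as separate fields.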
Re: [HACKERS] Interval aggregate regression failure (expected seems wrong)
On Nov 7, 2005, at 18:28, Michael Paesold wrote:

> Ok, forgot. This is *without* integer-datetimes, RHEL 3 (Linux 2.4.21, glibc 2.3.2, gcc 3.2.3 20030502) on i686 (Xeon without x86-64).
>> I just ran make check for PostgreSQL 8.1.0 on Mac OS X 10.4.3

Heh. I forgot, too ;) My test was also without integer-datetimes.

[snip]

>> I didn't have any regression failures. I'd also expect we'd see a lot more failures on the build farm if it were the case that it was broken just on the platform that the expected results were generated on. From a quick look at the current build farm failures on HEAD and REL8_1_STABLE, it doesn't look like any of the failures are failing here.
> I just started to wonder about buildfarm, too, but found that most build farm members have --enable-integer-datetimes. Could that be an explanation? Is it possible that the code is wrong with --enable-integer-datetimes?

So what do you have in results/interval.out? @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs seems wrong to me, no?

select avg(f1) from interval_tbl;
                       avg
-------------------------------------------------
 @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs
(1 row)

The point of the change to the interval datatype in 8.1 is to keep track of months, days, and seconds (which in turn are represented as hours, minutes, and seconds). Previous releases tracked only months and seconds. This has advantages for using intervals with dates and timestamps that involve daylight saving time changes. Admittedly, it looks odd at first, but it falls out of the change in behavior of the interval datatype. There are two new functions, justify_days and justify_hours, that you can use to put intervals into more traditional forms. http://developer.postgresql.org/docs/postgres/functions-datetime.html

Doesn't explain why you're getting a regression failure though.

Michael Glaesemann
grzm myrealbox com
Re: [HACKERS] Interval aggregate regression failure (expected seems wrong)
Michael Paesold wrote:
> On Nov 7, 2005, at 17:24, Michael Paesold wrote:
>> Using both PostgreSQL 8.1.0 and CVS current of Nov 7, 9:00 am CET I get a regression failure in the interval tests. I am no expert on the interval type, but the expected "9 days 28 hours" seems wrong, doesn't it? The actual value seems to be the same. Is it possible that this is broken on the platform where the expected results were generated?
> Perhaps it is an intended behavior? If so, it still fails without integer-datetimes.

Well, no, it also fails with integer-datetimes for me in the same way. pg_config output below. And yes, I did "cvs up; make distclean; ./configure ...; make; make install; make check".

Could this be DST-related? I thought plain interval was not affected by DST changes.

BINDIR = /usr/local/postgresql-8cvs/bin
DOCDIR = /usr/local/postgresql-8cvs/doc
INCLUDEDIR = /usr/local/postgresql-8cvs/include
PKGINCLUDEDIR = /usr/local/postgresql-8cvs/include
INCLUDEDIR-SERVER = /usr/local/postgresql-8cvs/include/server
LIBDIR = /usr/local/postgresql-8cvs/lib
PKGLIBDIR = /usr/local/postgresql-8cvs/lib
LOCALEDIR =
MANDIR = /usr/local/postgresql-8cvs/man
SHAREDIR = /usr/local/postgresql-8cvs/share
SYSCONFDIR = /usr/local/postgresql-8cvs/etc
PGXS = /usr/local/postgresql-8cvs/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--prefix=/usr/local/postgresql-8cvs' '--with-pgport=54321' '--with-perl' 'CFLAGS=-O2 -mcpu=pentium4 -march=pentium4' '--enable-casserts' '--enable-debug' '--enable-integer-datetimes'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -O2 -mcpu=pentium4 -march=pentium4 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -fno-strict-aliasing -g
CFLAGS_SL = -fpic
LDFLAGS = -Wl,-rpath,/usr/local/postgresql-8cvs/lib
LDFLAGS_SL =
LIBS = -lpgport -lz -lreadline -lncurses -lcrypt -lresolv -lnsl -ldl -lm -lbsd
VERSION = PostgreSQL 8.2devel

Best Regards,
Michael Paesold
Re: [HACKERS] Interval aggregate regression failure (expected seems wrong)
Michael Glaesemann wrote:
> On Nov 7, 2005, at 17:24, Michael Paesold wrote:
>> Using both PostgreSQL 8.1.0 and CVS current of Nov 7, 9:00 am CET I get a regression failure in the interval tests. I am no expert on the interval type, but the expected "9 days 28 hours" seems wrong, doesn't it? The actual value seems to be the same. Is it possible that this is broken on the platform where the expected results were generated?
> What platform are you testing on? With or without integer-datetimes?

Ok, forgot. This is *without* integer-datetimes, RHEL 3 (Linux 2.4.21, glibc 2.3.2, gcc 3.2.3 20030502) on i686 (Xeon without x86-64).

> I just ran make check for PostgreSQL 8.1.0 on Mac OS X 10.4.3 [snip] I didn't have any regression failures. I'd also expect we'd see a lot more failures on the build farm if it were the case that it was broken just on the platform that the expected results were generated on. From a quick look at the current build farm failures on HEAD and REL8_1_STABLE, it doesn't look like any of the failures are failing here.

I just started to wonder about buildfarm, too, but found that most build farm members have --enable-integer-datetimes. Could that be an explanation? Is it possible that the code is wrong with --enable-integer-datetimes?

> So what do you have in results/interval.out? @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs seems wrong to me, no?

Tom wrote for that commit:

revision 1.14
date: 2005/10/25 17:13:07; author: tgl; state: Exp; lines: +1 -1
Remove justify_hours call from interval_mul and interval_div, and make some small stylistic improvements in these functions. Also fix several places where TMODULO() was being used with a wrong-sized quotient argument, creating a risk of overflow --- interval2tm was actually capable of going into an infinite loop because of this.

Perhaps it is an intended behavior? If so, it still fails without integer-datetimes.

Best Regards,
Michael Paesold
Re: [HACKERS] Possible problem with pg_reload_conf() and view pg_settings
I have tested pgsql-8.1-beta3 on Windows 2003. It works fine. I changed the line in postgresql.conf between "on" and "off" (remember to save it each time), and pasted these two lines into psql to see the results:

select pg_reload_conf();
select setting from pg_settings where name = 'constraint_exclusion';

"Tony Caduto" <[EMAIL PROTECTED]> wrote
> Hi,
> I have been playing around with pg_reload_conf() and the pg_settings view.
>
> I understand that the pg_settings view, if updated, applies to the current session only.
> However, I was under the impression that if I did a pg_reload_conf(), the pg_settings view would be updated at that time, but that does not seem to happen. I am running on win32, but the same thing happens on Linux.
>
> If I restart the PostgreSQL service then the pg_settings view contains the changes I made to the postgresql.conf file.
>
> Any ideas, does this seem like a possible bug? It just seems to me that pg_settings should be updated if a pg_reload_conf() is executed.
>
> Thanks,
> Tony
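A sketch of how one might dig into this, not taken from the thread: pg_settings exposes a context column that says when a parameter can change, which helps distinguish "reload didn't work" from "this parameter needs a restart or was overridden with SET in the session".

```sql
-- Trigger a reload after editing postgresql.conf:
SELECT pg_reload_conf();

-- Inspect the current value and its change context.
-- Parameters with context 'postmaster' only change on a server restart;
-- most others can pick up a new postgresql.conf value on reload, as long
-- as the session has not overridden them with SET.
SELECT name, setting, context
FROM pg_settings
WHERE name = 'constraint_exclusion';
```

Note the exact set of pg_settings columns varies somewhat across releases; name, setting, and context are long-standing ones.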
Re: [HACKERS] Interval aggregate regression failure (expected seems wrong)
On Nov 7, 2005, at 17:24, Michael Paesold wrote:

> Using both PostgreSQL 8.1.0 and CVS current of Nov 7, 9:00 am CET I get a regression failure in the interval tests. I am no expert on the interval type, but the expected "9 days 28 hours" seems wrong, doesn't it? The actual value seems to be the same. Is it possible that this is broken on the platform where the expected results were generated?

What platform are you testing on? With or without integer-datetimes?

I just ran make check for PostgreSQL 8.1.0 on Mac OS X 10.4.3:

test=# select version();
 PostgreSQL 8.1.0 on powerpc-apple-darwin8.3.0, compiled by GCC powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 (Apple Computer, Inc. build 5026)

I didn't have any regression failures. I'd also expect we'd see a lot more failures on the build farm if it were the case that it was broken just on the platform that the expected results were generated on. From a quick look at the current build farm failures on HEAD and REL8_1_STABLE, it doesn't look like any of the failures are failing here.

Michael Glaesemann
grzm myrealbox com
[HACKERS] Interval aggregate regression failure (expected seems wrong)
Using both PostgreSQL 8.1.0 and CVS current of Nov 7, 9:00 am CET I get a regression failure in the interval tests. I am no expert on the interval type, but the expected "9 days 28 hours" seems wrong, doesn't it? The actual value seems to be the same. Is it possible that this is broken on the platform where the expected results were generated?

*** ./expected/interval.out	Tue Oct 25 19:13:07 2005
--- ./results/interval.out	Mon Nov  7 09:11:27 2005
***************
*** 218,224 ****
  select avg(f1) from interval_tbl;
                        avg
  -------------------------------------------------
! @ 4 years 1 mon 9 days 28 hours 18 mins 23 secs
  (1 row)
  -- test long interval input
--- 218,224 ----
  select avg(f1) from interval_tbl;
                        avg
  -------------------------------------------------
! @ 4 years 1 mon 10 days 4 hours 18 mins 23 secs
  (1 row)
  -- test long interval input

The last commit to interval.out seems to be this one, and it changed exactly this line:

revision 1.14
date: 2005/10/25 17:13:07; author: tgl; state: Exp; lines: +1 -1

Well, this is CVS tip, so there is a chance this is fixed in REL8_1_STABLE, which has a 1.14.0.2. At least the release tarball should be rebuilt, no? Sorry if this is just noise; I just wanted to be sure you know about it.

Best Regards,
Michael
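For anyone wanting to see this effect outside the regression suite, a standalone sketch with made-up values (not the actual interval_tbl contents): in 8.1 the months, days, and seconds fields of an interval are carried separately through avg(), so the result is not automatically normalized to fewer than 24 hours.

```sql
-- Hypothetical inputs: the days and hours fields are averaged as
-- separate components, so the sum of excess hours can survive in the
-- result instead of being folded into days.
SELECT avg(f1)
FROM (SELECT interval '1 day 40 hours' AS f1
      UNION ALL
      SELECT interval '3 days 40 hours') AS t;

-- justify_hours(avg(f1)) would fold the excess hours back into days.
```

This is only an illustration of the non-normalized representation, not a reproduction of the compiler-dependent failure discussed upthread.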
Re: [HACKERS] Old interfaces directory in CVS tree?
On Sat, Nov 05, 2005 at 09:46:15AM -0500, Andrew Dunstan wrote:
> > A libpqxx user just informed me that the anonymous CVS repository at anoncvs.postgresql.org still contained a 2002 version of libpqxx in the interfaces directory. I checked it out and otherwise it seems to be the current source tree--at least I found an 8.1 version number somewhere.
> Your informant probably needs to use the -P option for cvs checkout.

Doesn't make a difference. Besides, the outdated source is there even on a fresh checkout.

Jeroen