[HACKERS] Postgresql 9.1 replication failing
All,

I have a large PG 9.1.1 server (over 1TB of data) and a replica using log shipping. I had some hardware issues on the replica system and now I am getting the following in my pg_log/* files. The same 2 lines over and over since yesterday:

2011-12-01 07:46:30 EST LOG: restored log file "000000010000028E000000E5" from archive
2011-12-01 07:46:30 EST LOG: incorrect resource manager data checksum in record at 28E/E555E1B8

Anything I can do on the replica, or do I have to start over?

Finally, I know this is not the correct list; I tried -general with no answer.

Thanks
Jim

___
Jim Buttafuoco
j...@contacttelecom.com
603-647-7170 ext. - Office
603-490-3409 - Cell
jimbuttafuoco - Skype
Re: [HACKERS] Postgresql 9.1 replication failing
The WAL file on the master is long gone; how would one inspect the WAL segment? Any way to have PG "move on"?

On Dec 1, 2011, at 2:02 PM, Jerry Sievers wrote:

> Jim Buttafuoco j...@contacttelecom.com writes:
>
>> All, I have a large PG 9.1.1 server (over 1TB of data) and a replica using log shipping. I had some hardware issues on the replica system and now I am getting the following in my pg_log/* files. Same 2 lines over and over since yesterday.
>>
>> 2011-12-01 07:46:30 EST LOG: restored log file "000000010000028E000000E5" from archive
>> 2011-12-01 07:46:30 EST LOG: incorrect resource manager data checksum in record at 28E/E555E1B8
>>
>> Anything I can do on the replica or do I have to start over?
>
> Inspect that WAL segment, or possibly the one immediately following it, in comparison to another copy if you still have it on the master or a central WAL repository.
>
> A standby crashing while copying in a WAL segment and/or syncing one to disk could result in random corruption.
>
> If you have another copy of the segment and it does not compare equal to the one your standby is trying to read, try the other copy.
>
>> Finally, I know this is not the correct list, I tried general with no answer.
>
> The admin list is probably the right one for such a post.
>
> HTH
>
> --
> Jerry Sievers
> Postgres DBA/Development Consulting
> e: postgres.consult...@comcast.net
> p: 305.321.1144
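Jerry's "compare the copies" suggestion can be sketched in shell. All paths below are illustrative stand-ins (the thread never shows the real archive layout), and the two files are created here only so the comparison has something to run against:

```shell
# Compare two archived copies of the same WAL segment byte-for-byte.
# /tmp/wal_a and /tmp/wal_b stand in for two archive locations.
seg=000000010000028E000000E5
mkdir -p /tmp/wal_a /tmp/wal_b
printf 'demo-wal-contents' > /tmp/wal_a/$seg   # stand-in for archive copy 1
printf 'demo-wal-contents' > /tmp/wal_b/$seg   # stand-in for archive copy 2
if cmp -s "/tmp/wal_a/$seg" "/tmp/wal_b/$seg"; then
    echo "copies identical"
else
    echo "copies differ - try restoring the other one"
fi
```

If the copies differ, replacing the segment the standby is reading with the other copy is worth a try before rebuilding the replica.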
Re: [HACKERS] Postgresql 9.1 replication failing
Simon,

What do you mean, start over with a base backup?

Jim

On Dec 1, 2011, at 4:08 PM, Simon Riggs wrote:

> On Thu, Dec 1, 2011 at 7:09 PM, Jim Buttafuoco j...@contacttelecom.com wrote:
>> the WAL file on the master is long gone, how would one inspect the WAL segment? Any way to have PG "move on"?
>
> Regenerate the master.
>
> --
> Simon Riggs http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] How to configure Postgres to make it not to use (load) crypto libraries.
Rebuild from source and DO NOT specify --with-openssl.

_____

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Tom Dong
Sent: Saturday, January 27, 2007 12:16 PM
To: pgsql-hackers@postgresql.org
Cc: Tom Dong
Subject: [HACKERS] How to configure Postgres to make it not to use (load) crypto libraries.

Hi,

I am looking for a way, via configuration, to make Postgres not use the openssl lib libeay32.dll, as I need to delete that library. I basically need to remove any encryption (hash is fine) features from my Postgres (8.x) installation. This is quite urgent for me. I would be very grateful if someone in this community can reply to this email and help me. Thanks!

Tom
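A minimal sketch of the build the reply suggests (prefix path is illustrative). SSL support is off unless --with-openssl is given, so either omitting that flag or passing the explicit negative form produces binaries that never load libeay32.dll:

```shell
# Build PostgreSQL with no OpenSSL linkage; --prefix is illustrative.
./configure --prefix=/usr/local/pgsql --without-openssl
make
make install
```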
[HACKERS] xlog flush request not satisfied after server reboot
Hackers,

I had a server reboot for an unknown reason. After the server restarted, one of the disks was not mounted before PostgreSQL tried to start. The startup failed (see log below). After I mounted the disks and tried to start PostgreSQL again, there was a HINT about corrupted data and a WARNING about invalid page headers. The warning didn't tell me which database had the invalid page, so I decided to VACUUM FULL all of the databases to see if I received an error, which I did (see below). The database with the error is the largest, ~300GB. I do have a backup if necessary; is there a way to recover without doing a restore?

Thanks for your help
Jim

startup with tablespace not mounted -----------

LOG: database system was interrupted at 2006-12-14 19:51:26 EST
LOG: checkpoint record is at 1F1/52C8A704
LOG: redo record is at 1F1/5291F0A0; undo record is at 0/0; shutdown FALSE
LOG: next transaction ID: 13019683; next OID: 4167831
LOG: next MultiXactId: 1; next MultiXactOffset: 0
LOG: database system was not properly shut down; automatic recovery in progress
LOG: redo starts at 1F1/5291F0A0
PANIC: could not create directory pg_tblspc/3717272: File exists
LOG: startup process (PID 4987) was terminated by signal 6
LOG: aborting startup due to startup process failure
LOG: logger shutting down

startup after tablespace was mounted -----------

LOG: database system was interrupted while in recovery at 2006-12-15 08:04:48 EST
HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
LOG: checkpoint record is at 1F1/52C8A704
LOG: redo record is at 1F1/5291F0A0; undo record is at 0/0; shutdown FALSE
LOG: next transaction ID: 13019683; next OID: 4167831
LOG: next MultiXactId: 1; next MultiXactOffset: 0
LOG: database system was not properly shut down; automatic recovery in progress
LOG: redo starts at 1F1/5291F0A0
WARNING: invalid page header in block 6196744 of relation 4158697; zeroing out page
LOG: connection received: host=s20a.contactbda.com port=1667
FATAL: the database system is starting up
WARNING: invalid page header in block 6196934 of relation 4158697; zeroing out page
LOG: record with zero length at 1F1/572B792C
LOG: redo done at 1F1/572B5BD0
LOG: database system is ready

Error message from vacuum:

ERROR: xlog flush request 1F6/3A0F605C is not satisfied --- flushed only to 1F1/57CD76FC
CONTEXT: writing block 3387032 of relation 3717272/4158444/4158627

_____
Jim Buttafuoco
Contact Telecom LLC
Office: 603-647-7170
Fax: 603-606-4243
Cell: 603-490-3409
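The CONTEXT line identifies the damaged relation as a tablespace/database/relfilenode triple (3717272/4158444/4158627, matching the pg_tblspc/3717272 directory in the earlier PANIC). A hedged sketch, not from the thread, for mapping those numbers back to names; note that on 8.x the on-disk file number is pg_class.relfilenode, which need not equal the table's OID:

```sql
-- Which database is 4158444?
SELECT datname FROM pg_database WHERE oid = 4158444;

-- Then, connected to that database: which relation owns file 4158627?
SELECT relname, relkind
  FROM pg_class
 WHERE relfilenode = 4158627;
```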
[HACKERS] ERROR: could not open relation with OID 909391158
Hackers,

I have been loading 200+ million call records into a new PostgreSQL 8.1.4 install. Everything had been going great until a couple of minutes ago. After the process loads a single file (300k to 500k records), it summarizes the data into a summary table. I have been getting the following error message:

ERROR: could not open relation with OID 909391158

I don't have any relations with an OID of 909391158; I checked this with the following query:

select * from pg_class where oid = 909391158;

I don't know where to go from here. What I have done to move on is rename the summary table to summary_old and create a new (empty) one for now. The process is happy with this. I have searched the logs and there are NO hardware related errors. I am running a VACUUM FULL VERBOSE on the summary_old table now to see if any errors pop up. Any ideas?

Thanks
Jim

select version() returns:
PostgreSQL 8.1.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20060613 (prerelease) (Debian 4.1.1-5)

---(end of broadcast)---
TIP 6: explain analyze is your friend
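One hedged follow-up check (my suggestion, not something raised in the thread): the number in this kind of message is not always a pg_class OID, so it can be worth checking the relfilenode column as well when the OID lookup comes up empty:

```sql
-- Check both identifiers for the number from the error message.
SELECT oid, relfilenode, relname
  FROM pg_class
 WHERE oid = 909391158 OR relfilenode = 909391158;
```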
[HACKERS] pg_dump and inherits issue
I have an issue with pg_dump and INHERITS with pg 8.1.3 and 8.1.4. If I run the following SQL:

create table t (a text check (a = '*'));
create table s () inherits (t);
alter table s drop constraint t_a_check;
alter table s add constraint a_check check (a='s');

I get the following:

     Table "public.t"
 Column | Type | Modifiers
--------+------+-----------
 a      | text |
Check constraints:
    t_a_check CHECK (a = '*'::text)

     Table "public.s"
 Column | Type | Modifiers
--------+------+-----------
 a      | text |
Check constraints:
    a_check CHECK (a = 's'::text)
Inherits: t

and then if I create a new database and run pg_dump old_db | psql new_db, I get the following:

     Table "public.t"
 Column | Type | Modifiers
--------+------+-----------
 a      | text |
Check constraints:
    t_a_check CHECK (a = '*'::text)

     Table "public.s"
 Column | Type | Modifiers
--------+------+-----------
 a      | text |
Check constraints:
    a_check CHECK (a = 's'::text)
    t_a_check CHECK (a = '*'::text)
Inherits: t

The check constraints on table s are not like the original; I have an extra t_a_check constraint. Is this correct?

Jim
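A hedged workaround sketch (my suggestion, not confirmed in the thread): since the restored t_a_check on s is an ordinary constraint recreated by the dump script, the pre-dump state can be restored by dropping it again after the load, exactly as in the original setup:

```sql
-- In the restored database; names as in Jim's example.
ALTER TABLE s DROP CONSTRAINT t_a_check;
```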
[HACKERS] drop database command blocking other connections
From time to time I have to drop a very large database (1TB+). The DROP DATABASE command takes a long time to complete while it's deleting the files. During this time, no one can connect to the database server; ps displays "startup waiting". This is with PostgreSQL 7.4. Has this been addressed in 8.1? If not, does anyone have some ideas on how to speed this up?

thanks
Jim
Re: [HACKERS] drop database command blocking other connections
Tom,

I am trying to migrate all of my databases from 7.4 to 8.1. It takes a lot of disk space to have both online at the same time. I have done around 2TB of actual disk space to date and have another 6TB to do over the next month or so. I have been moving (with pg_dump 7.4db | psql 8.1db) each database to 8.1 and then dropping the 7.4 one (after some testing). It would be nice if this were fixed, so that when I have to move from 8.1 to 8.2 it will not be an issue.

Thanks for your time
Jim

------- Original Message -------
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Wed, 03 May 2006 14:23:08 -0400
Subject: Re: [HACKERS] drop database command blocking other connections

Jim Buttafuoco [EMAIL PROTECTED] writes:
> from time to time I have to drop a very large database (1TB+). The drop database command takes a long time to complete while its deleting the files. During this time, no one can connect to the database server, ps displays startup waiting. This is with Postgresql 7.4. Has this been addressed in 8.1, if not, does anyone have some ideas on how to speed this up.

No, it'll probably behave the same in CVS HEAD. The problem is that DROP DATABASE has to lock out new connections to the victim database, and the mechanism it's using for that is a table-level lock on pg_database, rather than something specific to one database. So new connections to *all* DBs in the cluster will be blocked while DROP DATABASE runs.

It strikes me that we could fix this by taking out special locks on the database as an object (using LockSharedObject) instead of relying on locking pg_database. There wasn't any locktag convention that'd work for that back in 7.4, but it surely seems doable now.

regards, tom lane
--- End of Original Message ---
Re: [HACKERS] drop database command blocking other connections
Nice workaround. I am going to modify my procedure to drop the public schema first (it is the largest one).

------- Original Message -------
From: Tony Wasson [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Wed, 3 May 2006 14:09:05 -0700
Subject: Re: [HACKERS] drop database command blocking other connections

On 5/3/06, Jim Buttafuoco [EMAIL PROTECTED] wrote:
> from time to time I have to drop a very large database (1TB+). The drop database command takes a long time to complete while its deleting the files. During this time, no one can connect to the database server, ps displays startup waiting. This is with Postgresql 7.4. Has this been addressed in 8.1, if not, does anyone have some ideas on how to speed this up.

I don't have a fix, but I can offer a workaround. When we need to drop large DBs we drop them a schema at a time. DROP SCHEMA does *not* block new connections into the server. Once the data is out of the schema(s), a DROP DATABASE on a nearly empty database does not block new connections for more than a moment.
--- End of Original Message ---
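Tony's workaround can be sketched like this (schema and database names are illustrative, not from the thread):

```sql
-- Remove the bulk of the data schema by schema; DROP SCHEMA does not
-- take the cluster-wide pg_database lock that DROP DATABASE does.
DROP SCHEMA public CASCADE;
DROP SCHEMA reporting CASCADE;

-- The remaining nearly-empty database now drops in a moment.
DROP DATABASE big_db;
```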
[HACKERS] create type error message
# select version();
PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
(1 row)

simple example:

# create type a as (a text,b int);
CREATE TYPE
# create type a as (a text,b int);
ERROR: relation "a" already exists

It seems like ERROR: type "a" already exists would be better.

Thanks
Jim
Re: [HACKERS] Analyze and vacuum, they are sort of mandatory....
If we had a pg_vacuum table that had the last timestamp of a vacuum/analyze for each table, and the stats looked like the default, why not just print a warning message out to the user?

------- Original Message -------
From: Tom Lane [EMAIL PROTECTED]
To: Peter Eisentraut [EMAIL PROTECTED]
Cc: Mark Woodward [EMAIL PROTECTED], pgsql-hackers@postgresql.org
Sent: Sun, 12 Feb 2006 11:18:03 -0500
Subject: Re: [HACKERS] Analyze and vacuum, they are sort of mandatory

Peter Eisentraut [EMAIL PROTECTED] writes:
> Yes, that is what autovacuum does. It detects changes in the database and runs analyze if failing to do so would cause PostgreSQL to behave badly. I don't know why it's not turned on by default.

Conservatism. It may well be on by default in some future release, but that's not happening in the first release where the code exists at all.

autovacuum isn't a 100% solution to the sort of problems Mark is complaining about anyway: on a freshly-loaded table you could get bad plans because autovacuum hadn't gotten to it yet.

One thing we could consider doing is boosting up the default no-stats assumption about the number of distinct values in a column, to the point where the planner wouldn't try a hash aggregate unless it had actual stats. However, I'm unsure what negative side-effects that might have.

regards, tom lane
--- End of Original Message ---
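Worth noting, as a hedged aside not in the thread: something close to Jim's wish landed in later releases, where the statistics views carry per-table vacuum/analyze timestamps. A sketch for those later releases (not the 8.1-era servers under discussion):

```sql
-- Per-table vacuum/analyze timestamps from the statistics views
-- (available in releases after this thread, not in 8.1).
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
 ORDER BY relname;
```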
Re: [HACKERS] Fw: Is anyone interested in getting PostgreSQL working
Stefan,

Well, that is good news. Can you tell me what version of Linux you are using, and what gcc version also? I will let Martin know.

Thanks
Jim

------- Original Message -------
From: Stefan Kaltenbrunner [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Mon, 09 Jan 2006 08:55:06 +0100
Subject: Re: [HACKERS] Fw: Is anyone interested in getting PostgreSQL working

Jim Buttafuoco wrote:
> Hackers, I can confirm that HEAD does not initdb because of a SIGBUS as reported below by Martin Pitt @ debian (see his email below). My build farm member (corgi) did pass all checks 6 days ago (I was having some issues with the build farm code before that). If anyone would like to SSH into the box, please contact me via email and I will get an account setup. Right now, I am trying to build 8.1 to see if it passes.

I cannot confirm this - the mipsel box I have on the buildfarm (lionfish) seems to be happily building all branches and completing make check.

Stefan
--- End of Original Message ---
Re: [HACKERS] Fw: Is anyone interested in getting PostgreSQL working
Stefan,

My mail admin has removed the Austria block; I guess we were getting spammed by someone there. Can you send the output of dpkg --list, so I can compare what packages you are using to what I have?

Thanks
Jim

------- Original Message -------
From: Stefan Kaltenbrunner [EMAIL PROTECTED]
To: pgsql-hackers pgsql-hackers@postgresql.org
Cc: [EMAIL PROTECTED]
Sent: Mon, 09 Jan 2006 15:03:28 +0100
Subject: Re: [HACKERS] Fw: Is anyone interested in getting PostgreSQL working

Jim Buttafuoco wrote:
> Stefan,

first i would ask you to fix your mailserver setup because my last mail to you bounced with:

550 5.0.0 Sorry we don't accept mail from Austria

which makes it rather difficult for me to reply to your personal mail

> well that is good news, can you tell me what version of linux you are using and what gcc version also. I will let Martin know.

lionfish is a stock Debian/Sarge box (a cobalt cube) with gcc 3.3.5.

Stefan
--- End of Original Message ---
[HACKERS] Fw: Returned mail: see transcript for details
Tom,

My email to you was blocked.

Jim

------- Forwarded Message -------
From: Mail Delivery Subsystem [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sun, 8 Jan 2006 20:16:27 -0500
Subject: Returned mail: see transcript for details

----- The following addresses had permanent fatal errors -----
[EMAIL PROTECTED]
(reason: 550 5.7.1 Probable spam from 216.204.66.227 refused - see http://www.five-ten-sg.com/blackhole.php?216.204.66.227)

----- Transcript of session follows -----
... while talking to sss.pgh.pa.us.:
MAIL From:[EMAIL PROTECTED]
550 5.7.1 Probable spam from 216.204.66.227 refused - see http://www.five-ten-sg.com/blackhole.php?216.204.66.227
554 5.0.0 Service unavailable

Return-Path: [EMAIL PROTECTED]
Received: from amanda.contactbda.com (amanda.contactbda.com [192.168.1.2]) by amanda.contactbda.com (8.12.11/8.12.11/Debian-3) with ESMTP id k091GQxs027867 for [EMAIL PROTECTED]; Sun, 8 Jan 2006 20:16:26 -0500
From: Jim Buttafuoco [EMAIL PROTECTED]
To: Tom Lane [EMAIL PROTECTED]
Subject: Re: [HACKERS] Fw: Is anyone interested in getting PostgreSQL working on mips[el]?
Date: Sun, 8 Jan 2006 20:16:26 -0500

Tom,

I will setup the ssh and forward you the info.

Thanks
Jim

------- Original Message -------
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sun, 08 Jan 2006 16:41:38 -0500
Subject: Re: [HACKERS] Fw: Is anyone interested in getting PostgreSQL working on mips[el]?

Jim Buttafuoco [EMAIL PROTECTED] writes:
> I can confirm that HEAD does not initdb because of a SIGBUS as reported below by Martin Pitt @ debian (see his email below). My build farm member (corgi) did pass all checks 6 days ago (I was having some issues with the build farm code before that).
> If anyone would like to SSH into the box, please contact me via email and I will get an account setup. Right now, I am trying to build 8.1 to see if it passes.

I can take a look.

regards, tom lane
--- End of Original Message ---
--- End of Forwarded Message ---
Re: [HACKERS] Is anyone interested in getting PostgreSQL working on mips[el]?
Martin,

I have installed the Sarge binutils on my testing/Etch system and all of the PostgreSQL regression tests pass. I don't know where to go from here; any suggestions?

Jim

------- Original Message -------
From: Martin Pitt [EMAIL PROTECTED]
To: Jim Buttafuoco [EMAIL PROTECTED]
Sent: Mon, 9 Jan 2006 16:10:18 +0100
Subject: Re: Is anyone interested in getting PostgreSQL working on mips[el]?

Hi Jim!

Jim Buttafuoco [2006-01-09 8:44 -0500]:
> Martin, While my build farm mipsel system (corgi) gets the sigbus as reported below, lionfish does not. see http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=lionfishdt=2006-01-08%2016:51:36

This looks indeed fine. What's the difference between these two?

> that I have. So it must be in the software. What do you think is the best way to tell? I believe my raq is up to date as far as debian packages for sarge.

I can't test this on sarge/mips myself. I have a sid dchroot on a box from Florian Lohoff, that's all. Indeed it could very well be that Debian's sarge packages worked fine on mips and that this whole story turns out to be a gcc or libc bug in sid, I don't know. Maybe you could do the test with identical compiler/postgresql versions and identical postgresql patches/configure options once on sarge and once on sid?

Thanks a lot for looking into this issue!

Martin
--
Martin Pitt http://www.piware.de
Ubuntu Developer http://www.ubuntu.com
Debian Developer http://www.debian.org

In a world without walls and fences, who needs Windows and Gates?
--- End of Original Message ---
[HACKERS] Fw: Is anyone interested in getting PostgreSQL working on mips[el]?
Hackers,

I can confirm that HEAD does not initdb because of a SIGBUS, as reported below by Martin Pitt @ debian (see his email below). My build farm member (corgi) did pass all checks 6 days ago (I was having some issues with the build farm code before that). If anyone would like to SSH into the box, please contact me via email and I will get an account setup. Right now, I am trying to build 8.1 to see if it passes.

Thanks
Jim

------- Forwarded Message -------
From: Martin Pitt [EMAIL PROTECTED]
To: debian-mips@lists.debian.org
Sent: Sat, 7 Jan 2006 16:39:58 +0100
Subject: Is anyone interested in getting PostgreSQL working on mips[el]?

Hi mips porters!

For a fair while now PostgreSQL (7.4, 8.0, 8.1) does not work at all on mips and mipsel. The postmaster immediately dies with a SIGBUS. It's pretty hard to track down; the gdb stacktrace is unusable. Also, I was not able to get it to work in any of the various gcc version/optimization/PostgreSQL version/patch combinations I tried. [1] has some details of my findings so far. Thus I asked for removing the mips and mipsel debs for now until this is fixed [2].

Is any mips expert interested to track this down?

Thanks, Martin

[1] http://archives.postgresql.org/pgsql-bugs/2005-10/msg00118.php
[2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=344487

--
Martin Pitt http://www.piware.de
Ubuntu Developer http://www.ubuntulinux.org
Debian Developer http://www.debian.org
--- End of Forwarded Message ---
Re: [HACKERS] What bison versions are installed on buildfarm machines?
Tom,

On corgi (debian sarge):

raq:~# bison -V
bison (GNU Bison) 1.875a

------- Original Message -------
From: Tom Lane [EMAIL PROTECTED]
To: pgsql-hackers@postgresql.org
Sent: Mon, 02 Jan 2006 12:54:42 -0500
Subject: [HACKERS] What bison versions are installed on buildfarm machines?

Is there any way to find out $subject? I see that several of the buildfarm machines are choking on a patch I committed yesterday:

guc-file.l: In function `ProcessConfigFile':
guc-file.l:162: error: `YY_FLUSH_BUFFER' undeclared (first use in this function)
guc-file.l:162: error: (Each undeclared identifier is reported only once
guc-file.l:162: error: for each function it appears in.)
make[4]: *** [guc.o] Error 1

YY_FLUSH_BUFFER is documented as a standard macro in bison 1.875, which is the oldest version we officially support. But I'm prepared to change it if there is another way that would work with a wider range of bison versions.

regards, tom lane
--- End of Original Message ---
Re: [HACKERS] (View and SQL) VS plpgsql
Try this; I had no data to check the plan and didn't have time to invent any.

Jim

create index idx_archive_jb_idx on archive_event(inst,utctime,src,bid,tid);

explain
SELECT count(cid) AS hits, src, bid, tid,
       (select MIN(utctime) from archive_event
         where src = ae.src AND bid = ae.bid AND tid = ae.tid
           AND inst = '3' AND utctime BETWEEN '111492' AND '1131512399') as min_time,
       (select MAX(utctime) from archive_event
         where src = ae.src AND bid = ae.bid AND tid = ae.tid
           AND inst = '3' AND utctime BETWEEN '111492' AND '1131512399') as max_time
  FROM archive_event ae
 WHERE inst = '3' AND (utctime BETWEEN '111492' AND '1131512399')
 GROUP BY src, bid, tid;

------- Original Message -------
From: Eric Lauzon [EMAIL PROTECTED]
To: pgsql-hackers@postgresql.org
Sent: Fri, 11 Nov 2005 19:12:00 -0500
Subject: [HACKERS] (View and SQL) VS plpgsql

This has been posted to -performance but I didn't get any answer I could work from... If anyone has some time for explanation or examples:

Abstract: The function that can be found at the end of the e-mail emulates two things. First it fills a record set with the needed columns from a table, plus two empty result columns, a min and a max. Those two columns are then filled by a second query on the same table that does a MIN and a MAX using the index idx_utctime. The function loops over the first recordset and returns a SETOF record that is cast by the caller of the function. The goal of this is to let the application receiving the result set minimize its work, by having the two matching rowsets grouped internally. We used to handle two result sets, but I am looking toward improving performance, and at first glance this seems to speed up the process.

Questions:
1. How could this be done with a single combination of SQL and a view?
2. In a case like that, is plpgsql really giving significant overhead?
3. Performance difference? [I would need a working pure-SQL version to compare planner and EXPLAIN results.]

STUFF:

--TABLE / INDEXES

CREATE TABLE archive_event (
    inst int4 NOT NULL,
    cid int8 NOT NULL,
    src int8 NOT NULL,
    dst int8 NOT NULL,
    bid int8 NOT NULL,
    tid int4 NOT NULL,
    utctime int4 NOT NULL,
    CONSTRAINT ids_archives_event_pkey PRIMARY KEY (inst, cid),
    CONSTRAINT ids_archives_event_cid_index UNIQUE (cid)
);

CREATE INDEX idx_archive_utctime ON archive_event USING btree (utctime);
CREATE INDEX idx_archive_src ON archive_event USING btree (src);
CREATE INDEX idx_archive_bid_tid ON archive_event USING btree (tid, bid);

--FUNCTION

CREATE OR REPLACE FUNCTION console_get_source_rule_level_1() RETURNS SETOF RECORD AS '
DECLARE
    one_record record;
    r_record record;
BEGIN
    FOR r_record IN
        SELECT count(cid) AS hits, src, bid, tid,
               NULL::int8 as min_time, NULL::int8 as max_time
          FROM archive_event
         WHERE inst = \'3\' AND (utctime BETWEEN \'111492\' AND \'1131512399\')
         GROUP BY src, bid, tid
    LOOP
        SELECT INTO one_record MIN(utctime) as timestart, MAX(utctime) as timestop
          from archive_event
         where src = r_record.src AND bid = r_record.bid AND tid = r_record.tid
           AND inst = \'3\' AND (utctime BETWEEN \'111492\' AND \'1131512399\');
        r_record.min_time := one_record.timestart;
        r_record.max_time := one_record.timestop;
        RETURN NEXT r_record;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql' VOLATILE;

GRANT EXECUTE ON FUNCTION console_get_source_rule_level_1() TO console WITH GRANT OPTION;

--FUNCTION CALLER

SELECT * from console_get_source_rule_level_1() AS (hits int8, src int8, bid int8, tid int4, min_time int8, max_time int8);

-Eric Lauzon
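For question 1, a hedged sketch of a pure-SQL equivalent (untested against Eric's data): since the correlated subqueries run over exactly the same groups as the outer GROUP BY, the per-group MIN and MAX can be computed in the same aggregation pass, with no subqueries and no plpgsql loop at all:

```sql
-- Single-pass equivalent of console_get_source_rule_level_1();
-- literals taken from Eric's function.
SELECT count(cid) AS hits, src, bid, tid,
       MIN(utctime) AS min_time,
       MAX(utctime) AS max_time
  FROM archive_event
 WHERE inst = '3'
   AND utctime BETWEEN '111492' AND '1131512399'
 GROUP BY src, bid, tid;
```

Wrapping this SELECT in a CREATE VIEW (with the inst and time-range values parameterized by the caller's WHERE clause) gives the "view and SQL" form the subject line asks about.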
Re: [HACKERS] Per-table freeze limit proposal
While you are at it, can you put in some audit timestamps as to when the vacuum occurred (full vs. not full)?

------- Original Message -------
From: Alvaro Herrera [EMAIL PROTECTED]
To: Hackers pgsql-hackers@postgresql.org
Sent: Wed, 14 Sep 2005 22:14:23 -0400
Subject: [HACKERS] Per-table freeze limit proposal

Hackers,

As you've probably heard too many times already, I'm thinking of improving vacuum, so we can keep track of the freeze Xid at a table level, rather than database level. Hopefully this will eliminate the need for database-wide vacuums.

In fact this seems pretty easy to do. Add a field to pg_class, tell VACUUM to update it using the determined freezeLimit, and that's it. (Note that if we ever implement partial vacuum, it won't be able to update the freeze point. But that was true before anyway.)

We also need to teach autovacuum to update pg_database.datfreezexid, using the minimum from pg_class. (I don't think it's a good idea to seqscan pg_class to find out the minimum on each VACUUM call.) So, an autovacuum iteration would issue all needed VACUUM/ANALYZE calls, then get the minimum freeze xid from pg_class to update pg_database. This way, GetNewTransactionId can continue checking pg_database.datfreezexid as the hard limit for issuing warnings for Xid wraparound.

Does anyone see a need for anything other than the autovacuum process to be updating pg_database.datfreezexid?

Of course, if autovacuum is not in use, things would continue as now, that is, manual database-wide VACUUM calls updating pg_database.datfreezexid. But note that you can mark all tables as disabled in pg_autovacuum, issue your manual VACUUM calls as needed (from cron or whatever), and use autovacuum to set pg_database.datfreezexid -- so autovacuum would in fact do nothing except set the freeze limit.

The problem is, this seems so awfully simple that I fear I am missing something ... Otherwise, does this sound like a plan?
--
Alvaro Herrera -- Valdivia, Chile
Architect, www.EnterpriseDB.com
"The easiest way to resolve [trivial code guidelines disputes] is to fire one or both of the people involved." (Damian Conway)
--- End of Original Message ---
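As a hedged aside not in the thread: a per-table freeze Xid along these lines did land in a later release (as pg_class.relfrozenxid, with pg_database.datfrozenxid as the per-database minimum). A monitoring sketch for those later releases:

```sql
-- How far each database is from Xid wraparound, worst first
-- (later releases; not the servers under discussion here).
SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
 ORDER BY xid_age DESC;
```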
[HACKERS] unexpected pageaddr on startup/recovery
Hackers,

I had a system crash today. When PostgreSQL started, I had the following in my pg.log file:

2005-08-06 14:14:26 [3352] LOG: database system was interrupted at 2005-08-06 11:57:28 EDT
2005-08-06 14:14:26 [3352] LOG: checkpoint record is at 5E5/9CAEA594
2005-08-06 14:14:26 [3352] LOG: redo record is at 5E5/9C6796A0; undo record is at 0/0; shutdown FALSE
2005-08-06 14:14:26 [3352] LOG: next transaction ID: 6273726; next OID: 4274112431
2005-08-06 14:14:26 [3352] LOG: database system was not properly shut down; automatic recovery in progress
2005-08-06 14:14:26 [3352] LOG: redo starts at 5E5/9C6796A0
2005-08-06 14:17:17 [3352] LOG: unexpected pageaddr 5E3/A7BFA000 in log file 1509, segment 171, offset 12558336
2005-08-06 14:17:17 [3352] LOG: redo done at 5E5/ABBF978C

Should I worry about the unexpected pageaddr message? And if so, what do I need to do?

select version();
PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.4 (Debian 1:3.3.4-6sarge1)

Thanks
Jim
Re: [HACKERS] unexpected pageaddr on startup/recovery
Thanks.

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Sat, 06 Aug 2005 17:24:46 -0400
Subject: Re: [HACKERS] unexpected pageaddr on startup/recovery

Jim Buttafuoco [EMAIL PROTECTED] writes:
> 2005-08-06 14:14:26 [3352] LOG: database system was not properly shut down; automatic recovery in progress
> 2005-08-06 14:14:26 [3352] LOG: redo starts at 5E5/9C6796A0
> 2005-08-06 14:17:17 [3352] LOG: unexpected pageaddr 5E3/A7BFA000 in log file 1509, segment 171, offset 12558336
> 2005-08-06 14:17:17 [3352] LOG: redo done at 5E5/ABBF978C
> Should I worry about the unexpected pageaddr message?

No, that looks perfectly normal.

regards, tom lane

--- End of Original Message ---
Re: [HACKERS] Buildfarm failure analysis: penguin on 7.4 branch
Tom,

I agree with NOT fixing the tsearch2 code for this failure in 7.4. I have left penguin building 7.4 just to see if the core code continues to compile. It would be nice if the buildfarm code would let me exclude a contrib module if necessary. What do you think, Andrew?

Jim

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Andrew Dunstan [EMAIL PROTECTED], pgsql-hackers@postgresql.org
Sent: Mon, 18 Jul 2005 01:05:09 -0400
Subject: [HACKERS] Buildfarm failure analysis: penguin on 7.4 branch

Buildfarm member penguin has never got past this point in half a year of trying:

creating template1 database in /home/postgres/pgfarmbuild/REL7_4_STABLE/pgsql.7701/src/test/regress/./tmp_check/data/base/1... ok
initializing pg_shadow...
TRAP: FailedAssertion(!(StrategyEvaluationIsValid(evaluation)), File: istrat.c, Line: 273)

Unfortunately it never will :-(, and I'd recommend removing 7.4 from its to-do list. The problem here appears to be the same unusual struct packing rules that make tsearch2 fail on this machine in the 8.0 branch. The old index strategy code assumes that, given this set of struct declarations:

typedef uint16 StrategyNumber;

typedef struct StrategyOperatorData
{
    StrategyNumber strategy;
    bits16         flags;      /* scan qualification flags, see skey.h */
} StrategyOperatorData;

typedef struct StrategyTermData   /* conjunctive term */
{                                 /* VARIABLE LENGTH STRUCTURE */
    uint16               degree;
    StrategyOperatorData operatorData[1];  /* VARIABLE LENGTH ARRAY */
} StrategyTermData;

the contents of StrategyTermData are equivalent to a uint16 array containing {degree, strategy1, flags1, strategy2, flags2, ...}. This is true on 99% of machines out there, but the compiler penguin is using thinks it should align StrategyOperatorData on a word boundary, and so there is padding between degree and the first array element.
The compiler is within its rights to do this per the ANSI C spec, but considering that we'd never seen this problem in ten years of Postgres use, and that the struct is gone (for unrelated reasons) in PG 8.0 and up, I don't think anyone is likely to feel like messing with the 7.* code to fix it. (This is all extrapolation, mind you, but given what we found out about the problem with tsearch2, I feel reasonably confident in the analysis.)

regards, tom lane

--- End of Original Message ---
Re: [HACKERS] Buildfarm failure analysis: penguin on 7.4 branch
Must be early in the morning. I agree with removing penguin from the 7.4 branch. I have removed it on my end. Please delete it from the database.

--- Original Message ---
From: Andrew Dunstan [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Tom Lane [EMAIL PROTECTED], [EMAIL PROTECTED], pgsql-hackers@postgresql.org
Sent: Mon, 18 Jul 2005 09:38:56 -0400
Subject: Re: [HACKERS] Buildfarm failure analysis: penguin on 7.4 branch

Jim,

There seems to be some confusion. The error Tom remarked on in 7.4 is an initdb failure during the core check tests, not a tsearch2 failure, which is why I think he recommends discontinuing building that branch on this machine.

Regarding the tsearch2 failure on the 8.0 branch, I am ambivalent about your proposed change, to say the least. Tom made a comment some time ago about not hiding errors/limitations, and I agree with him. This is in a different class from the geometry tests, where the differences were largely cosmetic. I don't want people to have to read the fine print to see what tests were excluded - if we show a machine as green, it should mean that machine passed the whole test suite.

cheers
andrew

Jim Buttafuoco wrote:
> Tom, I agree with NOT fixing the tsearch2 code for this failure in 7.4. I have left penguin building 7.4 just to see if the core code continues to compile. It would be nice if the buildfarm code would let me exclude a contrib module if necessary. What do you think, Andrew? Jim

--- End of Original Message ---
Re: [HACKERS] BuildFarm status: recent check failures
Why doesn't the regression test set the timezone to GMT instead of PST? I believe the horology test would just work then.

Jim

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: Michael Glaesemann [EMAIL PROTECTED]
Cc: PostgreSQL-development Hackers pgsql-hackers@postgresql.org
Sent: Mon, 04 Apr 2005 02:08:46 -0400
Subject: Re: [HACKERS] BuildFarm status: recent check failures

Michael Glaesemann [EMAIL PROTECTED] writes:
> Just a quick heads-up: I'm not sure of the cause, but it looks like some recent change to HEAD and REL8_0_STABLE is causing check failures on the buildfarm machines. A brief (and thoroughly naive) glance at the details shows a horology test failure.

http://developer.postgresql.org/docs/postgres/regress-evaluation.html#AEN22575

regards, tom lane

--- End of Original Message ---
Re: [HACKERS] minor windows cygwin regression failures on stable
Andrew,

I can confirm that the latest cygwin snapshot (cygwin1-20050328.dll) corrects the stats regression failure.

Jim

--- Original Message ---
From: Andrew Dunstan [EMAIL PROTECTED]
To: Tom Lane [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], pgsql-hackers@postgresql.org
Sent: Wed, 30 Mar 2005 07:21:50 -0500
Subject: Re: [HACKERS] minor windows cygwin regression failures on stable

Tom Lane wrote:
> Andrew Dunstan [EMAIL PROTECTED] writes:
>> I'm not quite sure what question I should be asking of the Cygwin people. Tom, can you suggest something?
> It sounds to me like the problem is that the backend executing the test script is in a tight loop (due to the half-baked implementation of sleep()) and for some reason this prevents the stats processes from running --- for a far longer period than it by rights ought to. Ask about recent changes in process scheduling policy. (I suppose that actually it's Windows doing the scheduling, but what we want to know about is cygwin changes that might have affected Windows scheduling parameters.)

The only answer so far received says: Sounds to me like yet another case of http://cygwin.com/ml/cygwin/2005-03/msg00730.html

cheers
andrew

--- End of Original Message ---
Re: [HACKERS] minor windows cygwin regression failures on stable
Andrew,

I can set up a dedicated Windows XP system on Monday. I also have some W2K systems that can be used. Are there directions anywhere?

Jim

--- Original Message ---
From: Andrew Dunstan [EMAIL PROTECTED]
To: Tom Lane [EMAIL PROTECTED]
Cc: PostgreSQL-development pgsql-hackers@postgresql.org
Sent: Fri, 25 Mar 2005 22:19:25 -0500
Subject: Re: [HACKERS] minor windows cygwin regression failures on stable

Tom Lane wrote:
> Andrew Dunstan [EMAIL PROTECTED] writes:
>> Windows has ordering failures on the join and rules tests - Cygwin has failures on the stats test. See buildfarm for details.
> The ordering failures seem to be because the recent planner hacking has taken us back to preferring merge joins for these tests, and Windows' version of qsort has bizarre behavior for equal keys. I put an ORDER BY in the rules test. For join, I'm inclined to think that the best bet is to resurrect the join_1.out variant comparison file that we had awhile ago. Unfortunately, what's in the CVS archives is out of date and can't be used directly. Could you send me the actual rules.out you get on Windows to use for a comparison file?

join.out sent off list

> Dunno about the stats failure. It looks like the stats collector just isn't working on Cygwin, but AFAIR no one has touched that code lately.

It's worked before, that's the strange thing. I'll check some more.

> Do these machines fail on HEAD too? (There don't seem to be any active Windows buildfarm machines for HEAD, which is surely ungood. Won't someone step up and put one into the regular rotation?)

Sufficient unto the day is the evil thereof (appropriate quotation for Good Friday). I will address HEAD in due course. The buildfarm members for both of these are in reality my laptop, which doesn't even run Windows all the time, and has lots of other duties anyway. We (or rather Josh Berkus and Bruce, at my request) are looking for a replacement.

cheers
andrew

--- End of Original Message ---
[HACKERS] Missing segment 3 of index
All,

I had to abort a VACUUM FULL after 36 hours on a large table (16 million rows). I started the vacuum again, and after 10 minutes it got to the place where I aborted it (control-C) yesterday. I received the following error:

ERROR: could not open segment 3 of relation emi_110101_idx1 (target block 2079965576): No such file or directory

Will a REINDEX fix this, or do I need to drop the index instead?

Thanks
Jim
Re: [HACKERS] Missing segment 3 of index
After I do a VACUUM FULL, I will run memtest and some disk diags. Thanks, Jim

--- Original Message ---
From: Gavin Sherry [EMAIL PROTECTED]
To: Jim Buttafuoco [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Sat, 26 Mar 2005 11:02:39 +1100 (EST)
Subject: Re: [HACKERS] Missing segment 3 of index

On Fri, 25 Mar 2005, Jim Buttafuoco wrote:
> All, I had to abort a vacuum full after 36 hours on a large table (16 million rows). I started the vacuum again and after 10 minutes it got to the place I aborted it (control-c) yesterday. I received the following error: ERROR: could not open segment 3 of relation emi_110101_idx1 (target block 2079965576): No such file or directory. Will a REINDEX fix this or do I need to drop the index instead?

A reindex or drop/recreate will do the trick. However, I'd be concerned about your hardware. I've seen this kind of problem on systems with bad memory, CPU and disk controllers in the past.

Gavin

--- End of Original Message ---
[HACKERS] Recording vacuum/analyze/dump times
Is there a reason PostgreSQL doesn't record vacuum/analyze and dump times in pg_class (or another table)? This seems like it would be a very helpful feature. For pg_dump I would add an option --record=YES|NO; for vacuum and analyze I would add a NORECORD|RECORD option.

Jim
Re: [HACKERS] Recording vacuum/analyze/dump times
This is what I was thinking about doing, though it would be nicer if the system just did it for me. I have 100+ databases with 100+ tables in each and run pg_autovacuum on them all. I also do nightly dumps of any database that has been modified (my application keeps track). I was just thinking of using these dates as a check that the automated processes are working.

Jim

--- Original Message ---
From: Heikki Linnakangas [EMAIL PROTECTED]
To: Jim Buttafuoco [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Sent: Mon, 7 Mar 2005 20:35:21 +0200 (EET)
Subject: Re: [HACKERS] Recording vacuum/analyze/dump times

On Mon, 7 Mar 2005, Jim Buttafuoco wrote:
> Is there a reason postgresql doesn't record vacuum/analyze and dump times in pg_class (or another table)? This seems like it would be a very helpful feature. For pg_dump I would add an option --record=YES|NO; for vacuum and analyze I would add a NORECORD|RECORD option.

You could easily do this at the application level:

CREATE TABLE vacuums (relname name, last_vacuum timestamp);

Every time you vacuum, do:

VACUUM foobar;
UPDATE vacuums SET last_vacuum = now() WHERE relname = 'foobar';

Same for pg_dump.

- Heikki

--- End of Original Message ---
Re: [HACKERS] Recording vacuum/analyze/dump times
But what happens if I go in and manually vacuum a table (either because I just deleted a bunch of records, or whatever)? This is why I think the backend should record the date in pg_class.

--- Original Message ---
From: Matthew T. O'Connor matthew@zeut.net
To: [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Sent: Mon, 07 Mar 2005 13:56:04 -0500
Subject: Re: [HACKERS] Recording vacuum/analyze/dump times

Jim Buttafuoco wrote:
> Is there a reason postgresql doesn't record vacuum/analyze and dump times in pg_class (or another table)? This seems like it would be a very helpful feature. For pg_dump I would add an option --record=YES|NO; for vacuum and analyze I would add a NORECORD|RECORD option.

For what it's worth, integrated pg_autovacuum will have something like this. At least my initial design does, since the autovacuum daemon needs to remember the last time a table was vacuumed.

--- End of Original Message ---
Re: [HACKERS] Recording vacuum/analyze/dump times
I agree 100%.

--- Original Message ---
From: Matthew T. O'Connor matthew@zeut.net
To: [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Sent: Mon, 07 Mar 2005 16:15:45 -0500
Subject: Re: [HACKERS] Recording vacuum/analyze/dump times

Right, once autovacuum is integrated, then I think vacuum and analyze should update the autovacuum table; this way autovacuum won't redundantly vacuum tables that were just vacuumed manually.

Jim Buttafuoco wrote:
> But what happens if I go in and manually vacuum a table (either because I just deleted a bunch of records or whatever)? This is why I think the backend should record the date in pg_class.

--- End of Original Message ---
Re: [HACKERS] buildfarm issues
Andrew,

A couple of things:

1. We need to develop a matrix of systems/OS/compiler to see what coverage we have and compare it to the INSTALL guide.
2. run_build.pl should be changed to keep the information on the system up to date (and have the matrix in 1 change with it).
3. Have the run_build.pl script check the buildfarm CVS to see if there is a new version of the buildfarm code, and either download it and/or alert the owner of the system.

I am able and willing to help out with all of the above. I am also willing to help fill the gaps in systems if needed. I have been using Perl for over 10 years now (since Perl 4 days) and have been doing web/Postgres coding for the last 6 years. Let me know what I can do to help.

Jim

--- Original Message ---
From: Andrew Dunstan [EMAIL PROTECTED]
To: Darcy Buskermolen [EMAIL PROTECTED]
Cc: PostgreSQL-development pgsql-hackers@postgresql.org
Sent: Fri, 04 Mar 2005 14:28:09 -0500
Subject: Re: [HACKERS] buildfarm issues

Darcy Buskermolen wrote:
> On Friday 04 March 2005 10:11, Andrew Dunstan wrote:
>> Now that we've been running for a while there are a few buildfarm issues that I need to address. First, do we keep the right data about the members? Essentially, we keep: operating_system, os_version, compiler, compiler_version, architecture. For Linux, we generally ask for the distribution/distro-version instead of the OS/os-version. However, that leads to interesting situations - Gentoo for example is so flexible that in version 2004.03 you might easily be using kernel version 2.4.x or 2.6.x ... in fact it's almost impossible to tell what might be installed on a Gentoo system, or how it was compiled. So I'm really not sure how we should treat such systems. Second is the fact that systems change over time. People upgrade their machines. I'm considering a facility to allow people to change the os-version/compiler-version aspects of their registered personality - these will become essentially timestamped pieces of information, so we'll still be able to tie a set of values to a history item.
> What about using uname(1), cc -v, etc. to glean this information and post it with each event logged? I believe you have all this stuff already in the config.log that is used already?

See previous para - on Linux we want the distro name and version, not Linux plus kernel version. uname doesn't seem to help much there. Also, I have no idea how portable cc -v is. Can we guarantee to have the compiler version properly identified on every platform?

cheers
andrew

--- End of Original Message ---
Fw: Re: [HACKERS] float4 regression test failed on linux parisc
Tom,

I'm back with this issue. I have compared src/backend/utils/adt/float.c from 7.4.6 against CVS HEAD. There was some work done on the infinity handling (I don't know by whom; I am NOT a CVS expert/user). The problem I see is that float4in checks whether the value is infinity BEFORE calling CheckFloat4Val (this was added for 8.0), but float4div (and friends) doesn't. All I want to do is add a check for infinity in CheckFloat4Val (and remove the individual checks before the CheckFloat4Val calls in the other routines). I hope I have explained my problem and solution.

Jim

--- Forwarded Message ---
From: Jim Buttafuoco [EMAIL PROTECTED]
To: Tom Lane [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Tue, 1 Feb 2005 17:20:17 -0500
Subject: Re: [HACKERS] float4 regression test failed on linux parisc

Tom,

The issue is with a select 'Infinity'::float4/'Infinity'::float4; which should return NaN. Without the cast I get the overflow message from CheckFloat4Val; with the cast I get NaN (as expected). How about testing for isnan() inside CheckFloat4Val (just for PA-RISC / Linux)? I am trying to get this system working for the buildfarm, as there are NO other HP PA-RISC systems on the farm.

Jim

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Tue, 01 Feb 2005 17:13:52 -0500
Subject: Re: [HACKERS] float4 regression test failed on linux parisc

Jim Buttafuoco [EMAIL PROTECTED] writes:
> Change: CheckFloat4Val(result); To: CheckFloat4Val((float4)result);

CheckFloat4Val is defined to take a double, so whatever the above is accomplishing is wrong: probably it's masking an out-of-range result. I think you've hit a bug in Debian's version of gcc for PA-RISC.

regards, tom lane

--- End of Original Message ---

--- End of Forwarded Message ---
Re: [HACKERS] float4 regression test failed on linux parisc
Tom,

The other option is to note that on older systems (and I mean real old systems where the FP unit is sub-par) this test is likely to fail. I have now seen this on my real old Alpha and now on HP PA-RISC systems. Is there a way to just modify the regression test to pass by these tests on these platforms?

Jim

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Tue, 08 Feb 2005 10:25:26 -0500
Subject: Re: [HACKERS] float4 regression test failed on linux parisc

Jim Buttafuoco [EMAIL PROTECTED] writes:
> All I want to do is add a check in CheckFloat4Val for infinity (and remove the individual checks before the CheckFloat4Val call in other routines).

That's not at all what you proposed before, and it would have vastly more side-effects than just removing the platform-dependent behavior you are on about. If we did that then this would work:

regression=# select ('infinity'::float4) / (1::float4);
ERROR: type real value out of range: overflow

... which arguably it ought to, but you'd be changing the behavior everywhere, not just for your broken compiler.

I think the real question we ought to face up to sometime is what it is we are trying to accomplish with CheckFloat4Val and CheckFloat8Val in the first place. The latter routine in particular seems pretty ill-advised to me: if something can be represented as a double then why don't we just allow it? ISTM that what we really want is to reject out-of-range results, as in these examples:

regression=# select (1e37::float4) / (1e-37::float4);
ERROR: type real value out of range: overflow
regression=# select (1e300::float8) / (1e-37::float8);
ERROR: type double precision value out of range: overflow

On machines that have IEEE infinity, I think it would work to report overflow if the result is infinity when neither input is. But I dunno how well that works on non-IEEE hardware. Also, what about rejecting NaN results? Thoughts anyone?

regards, tom lane

--- End of Original Message ---
Re: [HACKERS] float4 regression test failed on linux parisc
Except isinf() works just fine on my system. It's just that when CheckFloat4Val is called with infinity as the val, you get the overflow message. If I move the isinf check into CheckFloat4Val, all is fine. If you don't want to fix this, it's fine with me. I am just reporting problems and trying to fix them. I will shut up now and put my energy into other causes!

Jim

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Tue, 08 Feb 2005 11:42:11 -0500
Subject: Re: [HACKERS] float4 regression test failed on linux parisc

Jim Buttafuoco [EMAIL PROTECTED] writes:
> this test is likely to fail. I have now seen this on my real old Alpha and now HP PARISC systems.

It works fine on PA-RISC, and has ever since I've been associated with this project --- I run these tests multiple times a day on old HP hardware, and they have always passed with every compiler I've used (both gcc and HP's). Lots of people have reported clean passes on Alpha as well. One more time: you have a compiler bug, and you really ought to be griping to the gcc people, not us.

regards, tom lane

--- End of Original Message ---
[HACKERS] float4 regression test failed on linux parisc
I am getting a float4 regression test failure. I have extracted the SQL from both the float4 and float8 tests below. Both should return NaN. I looked at the code: float4div does the operation as float8s, then checks the value. The value is a valid float8 NaN. The call to CheckFloat4Val is missing a cast back to float4. If I put the cast in, I get the expected results (NaN).

SELECT 'Infinity'::float4 / 'Infinity'::float4;
psql:test.sql:1: ERROR: type real value out of range: overflow

SELECT 'Infinity'::float8 / 'Infinity'::float8;
 ?column?
----------
      NaN
(1 row)
Re: [HACKERS] float4 regression test failed on linux parisc
Source: CVS HEAD (as of yesterday)
Platform: HP PA-RISC (HP 710)
OS: Debian Sarge
File: src/backend/utils/adt/float.c
Change: CheckFloat4Val(result);
To: CheckFloat4Val((float4) result);

I tested this on my PA-RISC box and it passed all tests. This could just be an issue with a very old CPU/floating point unit. I can send a patch in, but since this seems to work on other platforms, I thought I would ask here first. Sorry about the cryptic message, it was before my first cup of java.

Jim

Datum
float4div(PG_FUNCTION_ARGS)
{
    float4  arg1 = PG_GETARG_FLOAT4(0);
    float4  arg2 = PG_GETARG_FLOAT4(1);
    double  result;

    if (arg2 == 0.0)
        ereport(ERROR,
                (errcode(ERRCODE_DIVISION_BY_ZERO),
                 errmsg("division by zero")));

    /* do division in float8, then check for overflow */
    result = (float8) arg1 / (float8) arg2;

    CheckFloat4Val(result);
    PG_RETURN_FLOAT4((float4) result);
}

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: Jim Buttafuoco [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Tue, 01 Feb 2005 12:06:30 -0500
Subject: Re: [HACKERS] float4 regression test failed on linux parisc

Jim Buttafuoco [EMAIL PROTECTED] writes:
> I am getting a float4 regression test failure. I have extracted the SQL from both the float4 and float8 tests below. Both should return NaN. I looked at the code: float4div does the operation as float8's, then checks the value. The value is a valid float8 NaN. The call to CheckFloat4Val is missing a cast back to float4. If I put the cast in I get the expected results (NaN).

This report is about as clear as mud :-(. What platform is this, and what source code change are you proposing *exactly*?

regards, tom lane

--- End of Original Message ---
Re: [HACKERS] float4 regression test failed on linux parisc
Tom,

The issue is with a select 'Infinity'::float4/'Infinity'::float4; which should return NaN. Without the cast I get the overflow message from CheckFloat4Val; with the cast I get NaN (as expected). How about testing for isnan() inside CheckFloat4Val (just for PA-RISC / Linux)? I am trying to get this system working for the buildfarm, as there are NO other HP PA-RISC systems on the farm.

Jim

--- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Tue, 01 Feb 2005 17:13:52 -0500
Subject: Re: [HACKERS] float4 regression test failed on linux parisc

Jim Buttafuoco [EMAIL PROTECTED] writes:
> Change: CheckFloat4Val(result); To: CheckFloat4Val((float4)result);

CheckFloat4Val is defined to take a double, so whatever the above is accomplishing is wrong: probably it's masking an out-of-range result. I think you've hit a bug in Debian's version of gcc for PA-RISC.

regards, tom lane

--- End of Original Message ---
[HACKERS] pg_clog problem (PG version 7.4.5)
Hackers,

I am having a problem with a table (identified by pg_dump). I get the following error when I try to COPY the table to stdout (or /dev/null):

DB=# copy rnk to '/dev/null';
ERROR: could not access status of transaction 1076101119
DETAIL: could not open file /usr/local/pgsql/data/pg_clog/0402: No such file or directory

The PG version is:

select version();
version
-
PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.4 (Debian 1:3.3.4-6sarge1)

I do have a good backup from the day before, if needed. Is there a magic command to get me out of this?

Thanks
Jim
Re: [HACKERS] pg_clog problem (PG version 7.4.5)
I just upgraded to 7.4.6 and have the same error message.

--- Original Message ---
From: Jim Buttafuoco [EMAIL PROTECTED]
To: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Sat, 22 Jan 2005 09:35:02 -0500
Subject: [HACKERS] pg_clog problem (PG version 7.4.5)

> hackers, I am having a problem with a table (identified by pg_dump). I get the following error when I try to COPY the table to stdout (or /dev/null): DB=# copy rnk to '/dev/null'; ERROR: could not access status of transaction 1076101119 DETAIL: could not open file /usr/local/pgsql/data/pg_clog/0402: No such file or directory. I do have a good backup from the day before, if needed. Is there a magic command to get me out of this? Thanks Jim

--- End of Original Message ---
Re: [HACKERS] pg_clog problem (PG version 7.4.5)
That didn't work:

ERROR: could not access status of transaction 1076101119
DETAIL: could not read from file /usr/local/pgsql/data/pg_clog/0402 at offset 57344: Success

Any more ideas?

-- Original Message ---
From: Joshua D. Drake [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Sat, 22 Jan 2005 08:00:25 -0800
Subject: Re: [HACKERS] pg_clog problem (PG version 7.4.5)

Jim Buttafuoco wrote:
  I am having a problem with a table (identified by pg_dump). I get the following error when I try to COPY the table to stdout (or /dev/null):
  ERROR: could not access status of transaction 1076101119
  DETAIL: could not open file /usr/local/pgsql/data/pg_clog/0402: No such file or directory
  I do have a good backup from the day before, if needed. Is there a magic command to get me out of this?

You could try creating that file by writing a bunch of zeros to it. For example: shut down PostgreSQL, then run the following command in $PGDATA/pg_clog:

dd bs=8K count=1 if=/dev/zero of=filename

where filename is the name of the missing file. No promises though.

Sincerely,
Joshua D. Drake

--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming, shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL
--- End of Original Message ---
Re: [HACKERS] pg_clog problem (PG version 7.4.5)
Alvaro,

Thanks for the reply. Here is an ls of my pg_clog directory. The 0402 file I created as per Joshua's directions; I might have created one too small. If so, what size do you think I should use?

Jim

bda1:/usr/local/pgsql/data# ls -l pg_clog
total 992
-rw-------  1 postgres dba 262144 Sep  7 10:12 0000
-rw-------  1 postgres dba 262144 Nov 12 09:57 0001
-rw-------  1 postgres dba 262144 Dec  7 17:31 0002
-rw-------  1 postgres dba 204800 Jan 22 13:11 0003
-rw-r--r--  1 postgres dba   8192 Jan 22 12:05 0402

-- Original Message ---
From: Alvaro Herrera [EMAIL PROTECTED]
To: Jim Buttafuoco [EMAIL PROTECTED]
Cc: Joshua D. Drake [EMAIL PROTECTED], pgsql-hackers pgsql-hackers@postgresql.org
Sent: Sat, 22 Jan 2005 15:07:35 -0300
Subject: Re: [HACKERS] pg_clog problem (PG version 7.4.5)

On Sat, Jan 22, 2005 at 12:06:46PM -0500, Jim Buttafuoco wrote:
  didn't work.
  ERROR: could not access status of transaction 1076101119
  DETAIL: could not read from file /usr/local/pgsql/data/pg_clog/0402 at offset 57344: Success
  any more ideas?

You need to extend the file further than byte 57344, so repeat the dd command with count=8.

Anyway, this situation is suspect. There were bugs related to pg_clog files not being there, but AFAIR they were triggered on segment edges, not in the middle like this one. What other files are there in the pg_clog directory? If they are nowhere near the 0402 vicinity, you may have a memory or disk corruption problem. FWIW, 1076101119 is 1000000001000111111111111111111 in binary; taking out the highest 1 bit yields Xid 2359295. So, what files do you have?

--
Alvaro Herrera ([EMAIL PROTECTED])
"The first law of live demonstrations is: don't try to use the system. Write a script that touches nothing, so it can do no damage." (Jakob Nielsen)
--- End of Original Message ---

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
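As a sanity check on those numbers: pg_clog stores two status bits per transaction, so one byte covers 4 transactions and one 256 kB segment file covers 1,048,576. A small sketch (helper names are mine, not PostgreSQL internals) reproduces the diagnosis: XID 1076101119 lands in segment 0x0402 at byte 65535, inside the 8 kB page read at offset 57344, while clearing the single high-order bit gives 2359295, which lands in segment 0002.

```c
#include <stdint.h>

/* pg_clog geometry: 2 status bits per transaction means 4 transactions
 * per byte; segment files are 256 kB, so each covers 1048576 XIDs.
 * Helper names are illustrative, not PostgreSQL internals. */
#define CLOG_XACTS_PER_BYTE    4u
#define CLOG_SEGMENT_BYTES     262144u
#define CLOG_XACTS_PER_SEGMENT (CLOG_SEGMENT_BYTES * CLOG_XACTS_PER_BYTE)

/* Which segment file (printed as %04X) holds this XID's status? */
uint32_t clog_segment(uint32_t xid)
{
    return xid / CLOG_XACTS_PER_SEGMENT;
}

/* Byte offset of the XID's status bits within that segment file. */
uint32_t clog_byte_offset(uint32_t xid)
{
    return (xid % CLOG_XACTS_PER_SEGMENT) / CLOG_XACTS_PER_BYTE;
}
```

clog_segment(1076101119) is 1026 = 0x402 and clog_byte_offset(1076101119) is 65535, past the 8192 bytes the first dd created; 1076101119 ^ (1 << 30) = 2359295, whose segment is 0002, consistent with a single flipped high-order bit.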
Re: [HACKERS] pg_clog problem (PG version 7.4.5)
I was able to copy the table over to a temp table and truncate it, with only a little loss. I will be able to recover the lost data from backup, so no big deal. I will have to schedule downtime to do the memory test; with the big snow storm it will not be until Monday night.

Thanks for the help,
Jim

-- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Alvaro Herrera [EMAIL PROTECTED], pgsql-hackers pgsql-hackers@postgresql.org
Sent: Sat, 22 Jan 2005 13:41:04 -0500
Subject: Re: [HACKERS] pg_clog problem (PG version 7.4.5)

Jim Buttafuoco [EMAIL PROTECTED] writes:
  Thanks for the reply. Here is an ls of my pg_clog directory. The 0402 file I created as per Joshua's directions; I might have created one too small. If so, what size do you think I should use?

  bda1:/usr/local/pgsql/data# ls -l pg_clog
  total 992
  -rw-------  1 postgres dba 262144 Sep  7 10:12 0000
  -rw-------  1 postgres dba 262144 Nov 12 09:57 0001
  -rw-------  1 postgres dba 262144 Dec  7 17:31 0002
  -rw-------  1 postgres dba 204800 Jan 22 13:11 0003
  -rw-r--r--  1 postgres dba   8192 Jan 22 12:05 0402

Given that set of pre-existing files, there is no possible way that you really had a transaction in the range of IDs that 0402 would cover. I agree with Alvaro's theory of a corrupted tuple. In fact it seems plausible that the error is a single high-order 1 bit, and the ID that appears to be in the range of 0402 really belonged to file 0002. A single dropped bit sounds more like RAM flakiness than disk problems to me, so I'd get out the memory tester programs and start looking.

As far as recovering the data goes, you can use the usual techniques for homing in on the location of the bad tuple and getting rid of it (or try manually patching the XID field with a hex editor...)

regards, tom lane
--- End of Original Message ---

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] PANIC: right sibling's left-link doesn't match
Postgres on one of my big database servers just crashed with the following message:

PANIC: right sibling's left-link doesn't match

Does anyone have any ideas what might cause this? Some background: this is a Debian Sarge system running PG 7.4.5 on an i386 dual Xeon system with 4G of memory. I had just rebooted the system because a fan had failed and was replaced.

Thanks
Jim
Re: [HACKERS] PANIC: right sibling's left-link doesn't match
It did print the query right after the PANIC message, so I do have the table name. I just completed the reindex.

Thanks

-- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Wed, 12 Jan 2005 16:45:11 -0500
Subject: Re: [HACKERS] PANIC: right sibling's left-link doesn't match

Jim Buttafuoco [EMAIL PROTECTED] writes:
  Postgres on one of my big database servers just crashed with the following message:
  PANIC: right sibling's left-link doesn't match
  Does anyone have any ideas what might cause this?

Corrupted btree index. REINDEX should help, though I'm afraid the error message isn't very helpful about identifying which index is busted.

regards, tom lane
--- End of Original Message ---
Re: [HACKERS] CSV arm check failure
It looks like a sqrt problem that has been fixed in the Linux 2.6 kernel series. I am going to see if I can get a 2.6 kernel to check it out. Since all of the other tests pass, maybe just a note in the README file.

Jim

-- Original Message ---
From: Peter Eisentraut [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers pgsql-hackers@postgresql.org
Sent: Thu, 6 Jan 2005 10:18:58 +0100
Subject: Re: [HACKERS] CSV arm check failure

On Tuesday, 4 January 2005 19:03, Jim Buttafuoco wrote:
  ARM platform fails the point test, see below.

For the 7.4 release we got a report for the ARM platform where all tests passed: http://archives.postgresql.org/pgsql-hackers/2003-10/msg01212.php So either there are various degrees of ARM processors or something is broken. Ideas?

--
Peter Eisentraut http://developer.postgresql.org/~petere/
--- End of Original Message ---
Re: [HACKERS] CSV arm check failure
Marko,

I am using the stock Debian 2.4.27 kernel. I don't know how to change the FP setup; do you have any instructions for me?

Thanks
Jim

-- Original Message ---
From: Marko Kreen marko@l-t.ee
To: Peter Eisentraut [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], pgsql-hackers pgsql-hackers@postgresql.org
Sent: Thu, 6 Jan 2005 15:26:05 +0200
Subject: Re: [HACKERS] CSV arm check failure

On Thu, Jan 06, 2005 at 10:18:58AM +0100, Peter Eisentraut wrote:
  For the 7.4 release we got a report for the ARM platform where all tests passed. So either there are various degrees of ARM processors or something is broken. Ideas?

Yes, there are various degrees of those, but most of them should be FPU-less, so the FPU-emulation details would be interesting. In the case of Linux there are 3 variants:

NWFPE: the default
FastFPE: only a 32-bit mantissa, 4-8x faster than NWFPE
gcc -msoft-float: no FP instructions, direct calls. This changes the calling convention, so it requires that all code be compiled with it.

Jim, do you happen to use FastFPE?

--
marko
--- End of Original Message ---
Re: [HACKERS] CSV arm check failure
Marko/All,

I wrote the following test program:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define HYPOT(A, B) sqrt((A) * (A) + (B) * (B))

int main()
{
    long double a;

    printf("SQRT Test\n");
    a = HYPOT(0-10, 0-10);
    printf("double a = %20.12Lf\n", a);
    exit(0);
}

and compiled it as follows:

gcc -lm -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -fno-strict-aliasing -g -o sqrttest sqrt.c

with the following results:

SQRT Test
double a =      14.142135623731

which is the exact answer in the expected results file for point. Now if I use Perl instead of C I get the wrong answer, 14.1421356237309, which is what Postgres is also reporting. So this looks like a compile-time problem, which is a little over my head. Any ideas?

Jim

-- Original Message ---
From: Marko Kreen marko@l-t.ee
To: Peter Eisentraut [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED], pgsql-hackers pgsql-hackers@postgresql.org
Sent: Thu, 6 Jan 2005 15:26:05 +0200
Subject: Re: [HACKERS] CSV arm check failure

On Thu, Jan 06, 2005 at 10:18:58AM +0100, Peter Eisentraut wrote:
  For the 7.4 release we got a report for the ARM platform where all tests passed. So either there are various degrees of ARM processors or something is broken. Ideas?

Yes, there are various degrees of those, but most of them should be FPU-less, so the FPU-emulation details would be interesting. In the case of Linux there are 3 variants:

NWFPE: the default
FastFPE: only a 32-bit mantissa, 4-8x faster than NWFPE
gcc -msoft-float: no FP instructions, direct calls. This changes the calling convention, so it requires that all code be compiled with it.

Jim, do you happen to use FastFPE?

--
marko
--- End of Original Message ---

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] CSV arm check failure
Marko,

See my email with the test program. I will recompile the kernel and get back to the list.

Jim

-- Original Message ---
From: Marko Kreen marko@l-t.ee
To: Jim Buttafuoco [EMAIL PROTECTED]
Cc: Peter Eisentraut [EMAIL PROTECTED], pgsql-hackers pgsql-hackers@postgresql.org
Sent: Thu, 6 Jan 2005 16:58:03 +0200
Subject: Re: [HACKERS] CSV arm check failure

On Thu, Jan 06, 2005 at 09:07:14AM -0500, Jim Buttafuoco wrote:
  I am using the stock Debian 2.4.27 kernel. Don't know how to change the fp setup. Do you have any instructions for me?

It can be changed by configuring and recompiling the kernel. I checked the kernel-image-2.4.27-arm package from Debian/testing and indeed it uses FastFPE emulation. To be specific, the 'bast' and 'netwinder' targets do; the 'lart', 'riscpc' and 'riscstation' targets use NWFPE. I guess 'lart' and 'bast' are some devel boards and 'netwinder' is the main target. Looking at handhelds.org kernels, they mostly use NWFPE, although there are a couple of configs with FastFPE. I have no clue about other Linux distros or *BSDs on ARM. It seems PostgreSQL may encounter both NWFPE and FastFPE on Linux/ARM. How to handle this I do not know.

--
marko
--- End of Original Message ---
Re: [HACKERS] CSV arm check failure
Marko,

I couldn't get 2.4.27 to patch with the ARM patches, so I downloaded 2.4.25 (which has CONFIG_FPE_NWFPE=y) and ALL tests passed. I will file a bug report with Debian. We should also put something in the PostgreSQL README about this issue.

Jim

-- Original Message ---
From: Marko Kreen marko@l-t.ee
To: Jim Buttafuoco [EMAIL PROTECTED]
Cc: Peter Eisentraut [EMAIL PROTECTED], pgsql-hackers pgsql-hackers@postgresql.org
Sent: Thu, 6 Jan 2005 17:25:20 +0200
Subject: Re: [HACKERS] CSV arm check failure

On Thu, Jan 06, 2005 at 10:21:43AM -0500, Jim Buttafuoco wrote:
  I will recompile the kernel and get back to the list

Thanks. This way we can be sure it is an FP-emulation effect.

--
marko
--- End of Original Message ---
Re: [HACKERS] [ANNOUNCE] PostgreSQL 8.0.0 Release Candidate 3
I have both a MIPS and a MIPSEL in the buildfarm. I have also reported a good build on an ARM (Debian/Sarge/Netwinder), not yet in the buildfarm.

-- Original Message ---
From: Peter Eisentraut [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Robert Treat [EMAIL PROTECTED], pgsql-hackers@postgresql.org
Sent: Mon, 3 Jan 2005 22:56:22 +0100
Subject: Re: [HACKERS] [ANNOUNCE] PostgreSQL 8.0.0 Release Candidate 3

Jim Buttafuoco wrote:
  I also don't see MIPSEL and ARM on the list, both running debian sarge (in the build farm).

The mips entry is actually a mipsel, but uname identifies them the same. I don't see any arm machine in the build farm.

--
Peter Eisentraut http://developer.postgresql.org/~petere/
--- End of Original Message ---
Re: [HACKERS] [ANNOUNCE] PostgreSQL 8.0.0 Release Candidate 3
What failure? Both my MIPS and MIPSEL systems are reporting OK on the builds. The ARM failed only the point test; it looks like a rounding error (the ARM doesn't have a built-in FP unit).

-- Original Message ---
From: Peter Eisentraut [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: Robert Treat [EMAIL PROTECTED], pgsql-hackers@postgresql.org
Sent: Tue, 4 Jan 2005 15:07:38 +0100
Subject: Re: [HACKERS] [ANNOUNCE] PostgreSQL 8.0.0 Release Candidate 3

On Tuesday, 4 January 2005 14:53, Jim Buttafuoco wrote:
  I have both a MIPS and a MIPSEL in the buildfarm. I have also reported a good build on an ARM (Debian/Sarge/Netwinder), not yet in the buildfarm.

Well, a build with regression test failures is not good, in particular when it is a regression from previous behavior.

--
Peter Eisentraut http://developer.postgresql.org/~petere/
--- End of Original Message ---
[HACKERS] CSV arm check failure
ARM platform fails the point test see below. parallel group (13 tests): text name char boolean varchar oid int8 int2 float4 int4 float8 bit numeric boolean ... ok char ... ok name ... ok varchar ... ok text ... ok int2 ... ok int4 ... ok int8 ... ok oid ... ok float4 ... ok float8 ... ok bit ... ok numeric ... ok test strings ... ok test numerology ... ok parallel group (20 tests): lseg point box path circle polygon time timetz comments reltime interval tinterval abstime inet date timestamp timestamptz type_sanity oidjoins opr_sanity point... FAILED lseg ... ok box ... ok path ... ok polygon ... ok circle ... ok date ... ok time ... ok timetz ... ok timestamp... ok timestamptz ... ok interval ... ok abstime ... ok reltime ... ok tinterval... ok inet ... ok comments ... ok oidjoins ... ok type_sanity ... ok opr_sanity ... ok test geometry ... ok test horology ... ok test insert ... ok test create_function_1... ok test create_type ... ok test create_table ... ok test create_function_2... ok test copy ... ok parallel group (7 tests): create_aggregate create_operator triggers vacuum constraints inherit create_misc constraints ... ok triggers ... ok create_misc ... ok create_aggregate ... ok create_operator ... ok inherit ... ok vacuum ... ok parallel group (2 tests): create_view create_index create_index ... ok create_view ... ok test sanity_check ... ok test errors ... ok test select ... ok parallel group (18 tests): select_distinct_on select_into select_having update select_distinct case select_implicit union namespace random aggregates hash_index arrays transactions btree_index portals join subselect select_into ... ok select_distinct ... ok select_distinct_on ... ok select_implicit ... ok select_having... ok subselect... ok union... ok case ... ok join ... ok aggregates ... ok transactions ... ok random ... ok portals ... ok arrays ... ok btree_index ... ok hash_index ... ok update ... ok namespace... ok test privileges ... ok test misc ... 
ok
parallel group (5 tests): portals_p2 cluster rules foreign_key select_views
select_views ... ok
portals_p2 ... ok
rules ... ok
foreign_key ... ok
cluster ... ok
parallel group (14 tests): truncate sequence limit temp copy2 prepare polymorphism conversion domain rowtypes rangefuncs without_oid plpgsql alter_table
limit ... ok
plpgsql ... ok
copy2 ... ok
temp ... ok
domain ... ok
rangefuncs ... ok
prepare ... ok
without_oid ... ok
conversion ... ok
truncate ... ok
alter_table ... ok
sequence ... ok
polymorphism ... ok
rowtypes ... ok
test stats ... ok
test tablespace ... ok

*** ./expected/point.out	Tue Jan  4 10:55:16 2005
--- ./results/point.out	Tue Jan  4 12:40:50 2005
***************
*** 101,107 ****
      | (-3,4)      |                5
      | (-10,0)     |               10
      | (-5,-12)    |               13
!     | (10,10)     |  14.142135623731
      | (5.1,34.5)  | 34.8749193547455
  (6 rows)
--- 101,107 ----
      | (-3,4)      |                5
      | (-10,0)     |               10
      | (-5,-12)    |               13
!     | (10,10)     | 14.1421356237309
      | (5.1,34.5)  | 34.8749193547455
  (6 rows)
***************
*** 127,134 ****
      | (-5,-12)    | (-10,0)     |               13
      | (-5,-12)    | (0,0)       |               13
      | (0,0)       | (-5,-12)    |               13
!     | (0,0)       | (10,10)     |  14.142135623731
!     | (10,10)     | (0,0)       |
Re: [HACKERS] [ANNOUNCE] PostgreSQL 8.0.0 Release Candidate 3
I also don't see MIPSEL and ARM on the list, both running debian sarge (in the build farm). Jim -- Original Message --- From: Robert Treat [EMAIL PROTECTED] To: pgsql-hackers@postgresql.org Sent: 03 Jan 2005 08:35:19 -0500 Subject: Re: [HACKERS] [ANNOUNCE] PostgreSQL 8.0.0 Release Candidate 3 On Sun, 2005-01-02 at 09:56, Marc G. Fournier wrote: As was anticipated, time between Release Candidate 2 and 3 was nice and short, with more changes being made now to Documentation vs Code. A current list of *known* supported platforms can be found at: http://developer.postgresql.org/supported-platforms.html Are you sure this list is complete? I don't see the ps2, xbox, or gamecube listed under supported platforms :-) Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org --- End of Original Message --- ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] race condition for drop schema cascade?
Andrew/all,

I have not seen any problems on my MIPS systems since the rebuild as ext3 (I ran badblocks during fs creation). I should have the alpha running again soon; the disk died and I am awaiting a replacement. I do believe there is a floating point problem with older Alphas out there. They seem to have a problem with INFINITY and NaNs. I did some checking on the net and the problem seems known (with no solution). Maybe something can go into the README or such. If anyone is interested in looking at this for PG 8.0, I can give SSH access in a week or so.

Jim

-- Original Message ---
From: Andrew Dunstan [EMAIL PROTECTED]
To: Tom Lane [EMAIL PROTECTED]
Cc: Kurt Roeckx [EMAIL PROTECTED], PostgreSQL-development pgsql-hackers@postgresql.org
Sent: Wed, 29 Dec 2004 13:05:26 -0500
Subject: Re: [HACKERS] race condition for drop schema cascade?

Tom Lane wrote:
  Andrew Dunstan [EMAIL PROTECTED] writes:
    You're right - my query was not sufficiently specific. There have in fact been 4 failures:

    pgbuildfarm=# select sysname, snapshot, stage, branch from build_status where log ~ 'tablespace testspace is not empty.*tablespace testspace is not empty' and not log ~ 'No space left';
     sysname |      snapshot       |    stage     | branch
    ---------+---------------------+--------------+--------
     hare    | 2004-12-09 05:15:05 | Check        | HEAD
     otter   | 2004-12-11 15:50:09 | Check        | HEAD
     otter   | 2004-12-15 15:50:10 | Check        | HEAD
     gibbon  | 2004-12-28 23:55:05 | InstallCheck | HEAD

  Why does the last show as an install failure?

We run the standard regression suite twice - the failure on Gibbon occurred on the second of these. Clearly this is very transient.

  Anyway, given the small number of machines involved, I'm once again wondering what filesystem they are using. They wouldn't be running the check over NFS, by any chance, for instance? The theory that is in my mind is that the bgwriter could have written out a page for the table in the test tablespace, and thereby be holding an open file pointer for it.
On standard Unix filesystems this would not disrupt the backend's ability to unlink the table at the DROP stage, but I'm wondering about nonstandard filesystems ... Jim Buttafuoco reported on December 16th that he had rebuilt the filesystem on his MIPS box - I assume this means that he isn't using NFS. In any case, we have not seen the problem since then. His Alpha box has not been reporting buildfarm results since before then. The Cygwin box is running on NTFS - and we know we've encountered plenty of problems with unlinking on Windows. I know it's not much to go on. cheers andrew ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly --- End of Original Message --- ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [HACKERS] race condition for drop schema cascade?
Tom, my systems are all EXT3 (Debian 3.1) (andrew can tell you which ones they are). Jim -- Original Message --- From: Tom Lane [EMAIL PROTECTED] To: Andrew Dunstan [EMAIL PROTECTED] Cc: Kurt Roeckx [EMAIL PROTECTED], PostgreSQL-development pgsql-hackers@postgresql.org Sent: Wed, 29 Dec 2004 12:26:56 -0500 Subject: Re: [HACKERS] race condition for drop schema cascade? Andrew Dunstan [EMAIL PROTECTED] writes: You're right - my query was not sufficiently specific. There have in fact been 4 failures: pgbuildfarm=# select sysname, snapshot, stage, branch from build_status where log ~ 'tablespace testspace is not empty.*tablespace testspace is not empty' and not log ~ 'No space left'; sysname | snapshot |stage | branch +-+--+ hare| 2004-12-09 05:15:05 | Check| HEAD otter | 2004-12-11 15:50:09 | Check| HEAD otter | 2004-12-15 15:50:10 | Check| HEAD gibbon | 2004-12-28 23:55:05 | InstallCheck | HEAD Why does the last show as an install failure? Anyway, given the small number of machines involved, I'm once again wondering what filesystem they are using. They wouldn't be running the check over NFS, by any chance, for instance? The theory that is in my mind is that the bgwriter could have written out a page for the table in the test tablespace, and thereby be holding an open file pointer for it. On standard Unix filesystems this would not disrupt the backend's ability to unlink the table at the DROP stage, but I'm wondering about nonstandard filesystems ... regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster --- End of Original Message --- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [Fwd: Re: [HACKERS] race condition for drop schema cascade?]
I have rebuilt the filesystem on my Indy (MIPS) that Andrew reported on. The first run completed 100%; I would give it a couple more runs before we can say it was the filesystem, not PostgreSQL, that was causing the drop to fail.

-- Original Message ---
From: Andrew Dunstan [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wed, 15 Dec 2004 16:42:59 -0500
Subject: [Fwd: Re: [HACKERS] race condition for drop schema cascade?]

Jim, please advise?

thanks
andrew

-------- Original Message --------
Subject: Re: [HACKERS] race condition for drop schema cascade?
Date: Wed, 15 Dec 2004 16:29:01 -0500
From: Tom Lane [EMAIL PROTECTED]
To: Andrew Dunstan [EMAIL PROTECTED]
CC: PostgreSQL-development [EMAIL PROTECTED]

Andrew Dunstan [EMAIL PROTECTED] writes:
  I have seen this failure several times, but not consistently, on the buildfarm member otter (Debian/MIPS) and possibly on others, and am wondering if it indicates a possible race condition on DROP SCHEMA CASCADE.

Hard to see what, considering that there's only one backend touching that tablespace in the test. I'd be inclined to wonder if there's a filesystem-level problem on that platform. What filesystem are you running on anyway?

regards, tom lane
--- End of Original Message ---
[HACKERS] arm rc1 regression failures
Just compiled RC1 on a netwinder ARM system running Debian Linux (sarge). All tests passed except point with the following in results/point Jim -- -- POINT -- CREATE TABLE POINT_TBL(f1 point); INSERT INTO POINT_TBL(f1) VALUES ('(0.0,0.0)'); INSERT INTO POINT_TBL(f1) VALUES ('(-10.0,0.0)'); INSERT INTO POINT_TBL(f1) VALUES ('(-3.0,4.0)'); INSERT INTO POINT_TBL(f1) VALUES ('(5.1, 34.5)'); INSERT INTO POINT_TBL(f1) VALUES ('(-5.0,-12.0)'); -- bad format points INSERT INTO POINT_TBL(f1) VALUES ('asdfasdf'); ERROR: invalid input syntax for type point: asdfasdf INSERT INTO POINT_TBL(f1) VALUES ('10.0,10.0'); INSERT INTO POINT_TBL(f1) VALUES ('(10.0 10.0)'); ERROR: invalid input syntax for type point: (10.0 10.0) INSERT INTO POINT_TBL(f1) VALUES ('(10.0,10.0'); ERROR: invalid input syntax for type point: (10.0,10.0 SELECT '' AS six, POINT_TBL.*; six | f1 -+ | (0,0) | (-10,0) | (-3,4) | (5.1,34.5) | (-5,-12) | (10,10) (6 rows) -- left of SELECT '' AS three, p.* FROM POINT_TBL p WHERE p.f1 '(0.0, 0.0)'; three |f1 ---+-- | (-10,0) | (-3,4) | (-5,-12) (3 rows) -- right of SELECT '' AS three, p.* FROM POINT_TBL p WHERE '(0.0,0.0)' p.f1; three |f1 ---+-- | (-10,0) | (-3,4) | (-5,-12) (3 rows) -- above SELECT '' AS one, p.* FROM POINT_TBL p WHERE '(0.0,0.0)' ^ p.f1; one |f1 -+-- | (-5,-12) (1 row) -- below SELECT '' AS one, p.* FROM POINT_TBL p WHERE p.f1 ^ '(0.0, 0.0)'; one |f1 -+-- | (-5,-12) (1 row) -- equal SELECT '' AS one, p.* FROM POINT_TBL p WHERE p.f1 ~= '(5.1, 34.5)'; one | f1 -+ | (5.1,34.5) (1 row) -- point in box SELECT '' AS three, p.* FROM POINT_TBL p WHERE p.f1 @ box '(0,0,100,100)'; three | f1 ---+ | (0,0) | (5.1,34.5) | (10,10) (3 rows) SELECT '' AS three, p.* FROM POINT_TBL p WHERE not p.f1 @ box '(0,0,100,100)'; three |f1 ---+-- | (-10,0) | (-3,4) | (-5,-12) (3 rows) SELECT '' AS two, p.* FROM POINT_TBL p WHERE p.f1 @ path '[(0,0),(-10,0),(-10,10)]'; two | f1 -+- | (0,0) | (-10,0) (2 rows) SELECT '' AS six, p.f1, p.f1 - point '(0,0)' AS dist FROM POINT_TBL p 
ORDER BY dist; six | f1 | dist -++-- | (0,0) |0 | (-3,4) |5 | (-10,0)| 10 | (-5,-12) | 13 | (10,10)| 14.1421356237309 | (5.1,34.5) | 34.8749193547455 (6 rows) SET geqo TO 'off'; SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1 - p2.f1 AS dist FROM POINT_TBL p1, POINT_TBL p2 ORDER BY dist, point1 using , point2 using ; thirtysix | point1 | point2 | dist ---+++-- | (-10,0)| (-10,0)|0 | (-5,-12) | (-5,-12) |0 | (-3,4) | (-3,4) |0 | (0,0) | (0,0) |0 | (5.1,34.5) | (5.1,34.5) |0 | (10,10)| (10,10)|0 | (-3,4) | (0,0) |5 | (0,0) | (-3,4) |5 | (-10,0)| (-3,4) | 8.06225774829855 | (-3,4) | (-10,0)| 8.06225774829855 | (-10,0)| (0,0) | 10 | (0,0) | (-10,0)| 10 | (-10,0)| (-5,-12) | 13 | (-5,-12) | (-10,0)| 13 | (-5,-12) | (0,0) | 13 | (0,0) | (-5,-12) | 13 | (0,0) | (10,10)| 14.1421356237309 | (10,10)| (0,0) | 14.1421356237309 | (-3,4) | (10,10)| 14.3178210632764 | (10,10)| (-3,4) | 14.3178210632764 | (-5,-12) | (-3,4) | 16.1245154965971 | (-3,4) | (-5,-12) | 16.1245154965971 | (-10,0)| (10,10)| 22.3606797749979 | (10,10)| (-10,0)| 22.3606797749979 | (5.1,34.5) | (10,10)| 24.9851956166046 | (10,10)| (5.1,34.5) | 24.9851956166046 | (-5,-12) | (10,10)| 26.6270539113887 | (10,10)| (-5,-12) | 26.6270539113887 | (-3,4) | (5.1,34.5) | 31.5572495632937 | (5.1,34.5) | (-3,4) | 31.5572495632937 | (0,0) | (5.1,34.5) | 34.8749193547455 | (5.1,34.5) | (0,0) | 34.8749193547455 | (-10,0)| (5.1,34.5) | 37.6597928831267 | (5.1,34.5) | (-10,0)| 37.6597928831267 | (-5,-12) | (5.1,34.5) | 47.5842410888311 | (5.1,34.5) | (-5,-12)
Re: [HACKERS] arm rc1 regression failures
See attached

-- Original Message ---
From: Bruce Momjian [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers [EMAIL PROTECTED]
Sent: Mon, 6 Dec 2004 12:24:18 -0500 (EST)
Subject: Re: [HACKERS] arm rc1 regression failures

Would you send us regression.diff? That should show the differences.

--- Jim Buttafuoco wrote:

Just compiled RC1 on a netwinder ARM system running Debian Linux (sarge). All tests passed except point, with the following in results/point.

Jim

--
-- POINT
--
CREATE TABLE POINT_TBL(f1 point);
INSERT INTO POINT_TBL(f1) VALUES ('(0.0,0.0)');
INSERT INTO POINT_TBL(f1) VALUES ('(-10.0,0.0)');
INSERT INTO POINT_TBL(f1) VALUES ('(-3.0,4.0)');
INSERT INTO POINT_TBL(f1) VALUES ('(5.1, 34.5)');
INSERT INTO POINT_TBL(f1) VALUES ('(-5.0,-12.0)');
-- bad format points
INSERT INTO POINT_TBL(f1) VALUES ('asdfasdf');
ERROR: invalid input syntax for type point: asdfasdf
INSERT INTO POINT_TBL(f1) VALUES ('10.0,10.0');
INSERT INTO POINT_TBL(f1) VALUES ('(10.0 10.0)');
ERROR: invalid input syntax for type point: (10.0 10.0)
INSERT INTO POINT_TBL(f1) VALUES ('(10.0,10.0');
ERROR: invalid input syntax for type point: (10.0,10.0
SELECT '' AS six, POINT_TBL.*;
 six |     f1
-----+------------
     | (0,0)
     | (-10,0)
     | (-3,4)
     | (5.1,34.5)
     | (-5,-12)
     | (10,10)
(6 rows)
-- left of
SELECT '' AS three, p.* FROM POINT_TBL p WHERE p.f1 << '(0.0, 0.0)';
 three |    f1
-------+----------
       | (-10,0)
       | (-3,4)
       | (-5,-12)
(3 rows)
-- right of
SELECT '' AS three, p.* FROM POINT_TBL p WHERE '(0.0,0.0)' >> p.f1;
 three |    f1
-------+----------
       | (-10,0)
       | (-3,4)
       | (-5,-12)
(3 rows)
-- above
SELECT '' AS one, p.* FROM POINT_TBL p WHERE '(0.0,0.0)' >^ p.f1;
 one |    f1
-----+----------
     | (-5,-12)
(1 row)
-- below
SELECT '' AS one, p.* FROM POINT_TBL p WHERE p.f1 <^ '(0.0, 0.0)';
 one |    f1
-----+----------
     | (-5,-12)
(1 row)
-- equal
SELECT '' AS one, p.* FROM POINT_TBL p WHERE p.f1 ~= '(5.1, 34.5)';
 one |     f1
-----+------------
     | (5.1,34.5)
(1 row)
-- point in box
SELECT '' AS three, p.* FROM POINT_TBL p WHERE p.f1 @ box '(0,0,100,100)';
 three |     f1
-------+------------
       | (0,0)
       | (5.1,34.5)
       | (10,10)
(3 rows)
SELECT '' AS three, p.* FROM POINT_TBL p WHERE not p.f1 @ box '(0,0,100,100)';
 three |    f1
-------+----------
       | (-10,0)
       | (-3,4)
       | (-5,-12)
(3 rows)
SELECT '' AS two, p.* FROM POINT_TBL p WHERE p.f1 @ path '[(0,0),(-10,0),(-10,10)]';
 two |   f1
-----+---------
     | (0,0)
     | (-10,0)
(2 rows)
SELECT '' AS six, p.f1, p.f1 <-> point '(0,0)' AS dist FROM POINT_TBL p ORDER BY dist;
 six |     f1     |       dist
-----+------------+------------------
     | (0,0)      |                0
     | (-3,4)     |                5
     | (-10,0)    |               10
     | (-5,-12)   |               13
     | (10,10)    | 14.1421356237309
     | (5.1,34.5) | 34.8749193547455
(6 rows)
SET geqo TO 'off';
SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1 <-> p2.f1 AS dist
  FROM POINT_TBL p1, POINT_TBL p2
  ORDER BY dist, point1 using <, point2 using <;
 thirtysix |   point1   |   point2   |       dist
-----------+------------+------------+------------------
           | (-10,0)    | (-10,0)    |                0
           | (-5,-12)   | (-5,-12)   |                0
           | (-3,4)     | (-3,4)     |                0
           | (0,0)      | (0,0)      |                0
           | (5.1,34.5) | (5.1,34.5) |                0
           | (10,10)    | (10,10)    |                0
           | (-3,4)     | (0,0)      |                5
           | (0,0)      | (-3,4)     |                5
           | (-10,0)    | (-3,4)     | 8.06225774829855
           | (-3,4)     | (-10,0)    | 8.06225774829855
           | (-10,0)    | (0,0)      |               10
           | (0,0)      | (-10,0)    |               10
           | (-10,0)    | (-5,-12)   |               13
           | (-5,-12)   | (-10,0)    |               13
           | (-5,-12)   | (0,0)      |               13
           | (0,0)      | (-5,-12)   |               13
           | (0,0)      | (10,10)    | 14.1421356237309
           | (10,10)    | (0,0)      | 14.1421356237309
           | (-3,4)     | (10,10)    | 14.3178210632764
           | (10,10)    | (-3,4)     | 14.3178210632764
           | (-5,-12)   | (-3,4)     | 16.1245154965971
           | (-3,4)     | (-5,-12)   | 16.1245154965971
           | (-10,0)    | (10,10
Re: Buildfarm coverage (was Re: [HACKERS] OK, ready for RC1 or Beta6)
Tom/all,

I have set up the following running Debian Linux: MIPS, MIPSEL, ALPHA, PARISC, M68K, ARM, SPARC, I386. I have the buildfarm running locally and have just started to get the systems registered. I am also willing to acquire other hardware/operating systems in an effort to give something back to the PostgreSQL community.

Jim

-- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: Joshua D. Drake [EMAIL PROTECTED]
Cc: PostgreSQL-development [EMAIL PROTECTED]
Sent: Fri, 03 Dec 2004 15:20:48 -0500
Subject: Buildfarm coverage (was Re: [HACKERS] OK, ready for RC1 or Beta6)

Joshua D. Drake [EMAIL PROTECTED] writes:
1. Buildfarm doesn't yet have that many platforms on it.

It's not as bad as all that. Our current list of supported platforms (ie, things that got tested last time) is

AIX
Free/Open/NetBSD    covered by buildfarm
HPUX                used daily by moi
IRIX
Linux               covered by buildfarm
OS X                tested pretty often by moi
Solaris             covered by buildfarm
Tru64
UnixWare
Windows/Cygwin      covered by buildfarm

With respect to hardware it's

x86                 covered by buildfarm
ia64
x86_64              covered by buildfarm
ARM
Alpha
MIPS                covered by buildfarm
m68k
PA-RISC             used daily by moi
PPC                 tested pretty often by moi
RS6000              isn't this same as PPC?
S/390
Sparc               covered by buildfarm

Considering that we have both 32- and 64-bit, little- and big-endian hardware in there, most of the basic hardware gotchas are covered; the only thing I think is at much risk is the spinlock assembler code, which we change seldom. Where the buildfarm falls down a bit is on the cross-product coverage. But I think you're not going to get the cross product without a call for port reports; there aren't that many people who are going to offer dedicated time on every random platform there is. It would be nice to get an Alpha into the buildfarm, and PPC too.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly

--- End of Original Message ---
[HACKERS] Fw: float4/float8 regression failure on Alpha Linux
I am still having this problem with the latest CVS snapshot. Is anyone else running on an Alpha? Can any of the hackers point me to where in the code this might be failing?

Thanks
Jim

-- Forwarded Message ---
From: Jim Buttafuoco [EMAIL PROTECTED]
To: pgsql-hackers [EMAIL PROTECTED]
Sent: Sun, 31 Oct 2004 07:09:32 -0500
Subject: float4/float8 regression failure on Alpha Linux

Hi all,

I am getting a regression failure on float8 (and float4) when running on Debian Sarge on Alpha (gcc 3.3.4). Postgres is a HEAD checkout from yesterday.

test=# select version();
 version
---
 PostgreSQL 8.0.0beta4 on alpha-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.3.4 (Debian 1:3.3.4-13)
(1 row)

-- This test should give an error
SELECT 'Infinity'::float8 + 100.0;
 ?column?
--
 0

Is this a problem with my system, with Alphas, or with Postgres?

Thanks
Jim

--- End of Forwarded Message ---
Re: [HACKERS] Fw: float4/float8 regression failure on Alpha Linux
Just to follow up: on i386/mipsel/mips I get the following for pow(10,309):

ERROR: result is out of range

On alpha, I get 3.09434604738258e-308.

-- Original Message ---
From: Jim Buttafuoco [EMAIL PROTECTED]
To: pgsql-hackers [EMAIL PROTECTED]
Sent: Wed, 3 Nov 2004 08:44:02 -0500
Subject: [HACKERS] Fw: float4/float8 regression failure on Alpha Linux

I am still having this problem with the latest CVS snapshot. Is anyone else running on an Alpha? Can any of the hackers point me to where in the code this might be failing?

Thanks
Jim

-- Forwarded Message ---
From: Jim Buttafuoco [EMAIL PROTECTED]
To: pgsql-hackers [EMAIL PROTECTED]
Sent: Sun, 31 Oct 2004 07:09:32 -0500
Subject: float4/float8 regression failure on Alpha Linux

Hi all,

I am getting a regression failure on float8 (and float4) when running on Debian Sarge on Alpha (gcc 3.3.4). Postgres is a HEAD checkout from yesterday.

test=# select version();
 version
---
 PostgreSQL 8.0.0beta4 on alpha-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.3.4 (Debian 1:3.3.4-13)
(1 row)

-- This test should give an error
SELECT 'Infinity'::float8 + 100.0;
 ?column?
--
 0

Is this a problem with my system, with Alphas, or with Postgres?

Thanks
Jim

--- End of Forwarded Message ---
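For context, a minimal sketch of the out-of-range rule the failing test relies on. This is hypothetical illustration code, not the backend's actual pow wrapper: pow(10, 309) exceeds DBL_MAX (about 1.797e308), so on IEEE-754 hardware the true result is unrepresentable and should surface as infinity. The tiny denormal seen on the Alpha box suggests that signal never fires there.

```c
#include <float.h>
#include <math.h>   /* isinf()/isnan() are macros here; no libm linking needed */

/* float8_out_of_range is a hypothetical helper, not backend code: it
 * applies the same "result is out of range" rule the regression test
 * expects.  Any infinite or NaN result of a finite computation is
 * rejected. */
static int
float8_out_of_range(double result)
{
    return isinf(result) || isnan(result);
}

/* The overflow can be reproduced without libm: DBL_MAX * 10 cannot be
 * represented, so IEEE arithmetic yields +Infinity. */
static double
overflowed(void)
{
    return DBL_MAX * 10.0;
}
```

On a correctly behaving platform the first call below flags the overflow; a platform that quietly produces a denormal instead (as the Alpha report shows) would sail past this check.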
[HACKERS] float4/float8 regression failure on Alpha Linux
Hi all,

I am getting a regression failure on float8 (and float4) when running on Debian Sarge on Alpha (gcc 3.3.4). Postgres is a HEAD checkout from yesterday.

test=# select version();
 version
---
 PostgreSQL 8.0.0beta4 on alpha-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.3.4 (Debian 1:3.3.4-13)
(1 row)

-- This test should give an error
SELECT 'Infinity'::float8 + 100.0;
 ?column?
--
 0

Is this a problem with my system, with Alphas, or with Postgres?

Thanks
Jim
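A hedged sketch of what this test exercises (the real backend applies its own value check inside float8 addition and raises an ERROR; the name and flag convention here are illustrative only): adding a finite value to infinity still yields infinity, and a correct build must report that as out of range rather than returning 0.

```c
#include <math.h>   /* HUGE_VAL, isinf() */

/* float8pl_sketch: hypothetical stand-in for the backend's float8
 * addition.  Returns -1 (meaning "value out of range") when the sum
 * is infinite, 0 on success; the real code raises an ERROR instead. */
static int
float8pl_sketch(double a, double b, double *result)
{
    *result = a + b;
    return isinf(*result) ? -1 : 0;
}
```

With inputs 'Infinity' and 100.0 this must take the error path; getting 0 back, as in the Alpha run above, points at the platform's infinity handling rather than at the check itself.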
Re: [HACKERS] Beta 4 on Debian Sarge (MIPS/MIPSEL)
Darcy,

I have been working on it all day. Not the fastest systems. Just an FYI, I have started on a Debian PKG file for the pg build system. My plan is to get ALL of the dependencies worked out and let APT fix my system.

Jim

-- Original Message ---
From: Darcy Buskermolen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers [EMAIL PROTECTED]
Sent: Thu, 28 Oct 2004 10:35:04 -0700
Subject: Re: [HACKERS] Beta 4 on Debian Sarge (MIPS/MIPSEL)

On October 27, 2004 05:57 pm, Jim Buttafuoco wrote:
Hackers, just an FYI, Beta 4 passed ALL tests on Debian Sarge for both MIPS (Indy) and MIPSEL (Cobalt RAQ). I can test Debian Sarge Sparc, Alpha, PowerPC, PA-RISC and M68K if no one else has reported on these systems yet. Also, with a little work I could test Solaris, Tru64 (or whatever it's called these days) and IRIX.

You could always make an application to pgbuildfarm.org and add those OS to the farm.

Keep up the good work.
Jim

-- Darcy Buskermolen
Wavefire Technologies Corp.
ph: 250.717.0200
fx: 250.763.1759
http://www.wavefire.com

--- End of Original Message ---
[HACKERS] Beta 4 on Debian Sarge (MIPS/MIPSEL)
Hackers,

Just an FYI: Beta 4 passed ALL tests on Debian Sarge for both MIPS (Indy) and MIPSEL (Cobalt RAQ). I can test Debian Sarge Sparc, Alpha, PowerPC, PA-RISC and M68K if no one else has reported on these systems yet. Also, with a little work I could test Solaris, Tru64 (or whatever it's called these days) and IRIX.

Keep up the good work.
Jim
[HACKERS] System crash - invalid page header messages in log
One of my systems crashed today and when Postgres started it gave the following warnings. Is this OK? I am going to find which database has these relations and do some checking. It would be nice if the startup WAL code also reported the database OID and database version.

Thanks
Jim

select version();
 version
--
 PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (Debian 20040401)

2004-09-28 16:10:47 [979] LOG: database system was interrupted at 2004-09-28 16:04:49 EDT
2004-09-28 16:10:47 [979] LOG: checkpoint record is at F4/FE424
2004-09-28 16:10:47 [979] LOG: redo record is at F4/4A94; undo record is at 0/0; shutdown FALSE
2004-09-28 16:10:47 [979] LOG: next transaction ID: 1332030; next OID: 920402645
2004-09-28 16:10:47 [979] LOG: database system was not properly shut down; automatic recovery in progress
2004-09-28 16:10:47 [979] LOG: redo starts at F4/4A94
2004-09-28 16:10:50 [979] WARNING: invalid page header in block 75660 of relation 743151645; zeroing out page
2004-09-28 16:10:50 [979] WARNING: invalid page header in block 75672 of relation 743151645; zeroing out page
2004-09-28 16:10:50 [979] WARNING: invalid page header in block 75698 of relation 743151645; zeroing out page
2004-09-28 16:10:50 [979] WARNING: invalid page header in block 75786 of relation 743151645; zeroing out page
2004-09-28 16:10:51 [979] WARNING: invalid page header in block 11193 of relation 743151650; zeroing out page
2004-09-28 16:10:51 [979] WARNING: invalid page header in block 78348 of relation 743151645; zeroing out page
2004-09-28 16:10:51 [979] WARNING: invalid page header in block 78452 of relation 743151645; zeroing out page
2004-09-28 16:10:51 [979] WARNING: invalid page header in block 11279 of relation 743151650; zeroing out page
2004-09-28 16:10:51 [979] WARNING: invalid page header in block 11283 of relation 743151651; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79467 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79479 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79493 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 11425 of relation 743151650; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79529 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79547 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79597 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79647 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79789 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 11466 of relation 743151650; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79815 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79865 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79885 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79897 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 11481 of relation 743151650; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79923 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79935 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 79947 of relation 743151645; zeroing out page
2004-09-28 16:10:52 [979] WARNING: invalid page header in block 11478 of relation 743151651; zeroing out page
2004-09-28 16:10:52 [979] LOG: incorrect resource manager data checksum in record at F4/72BEFA0
2004-09-28 16:10:52 [979] LOG: redo done at F4/72BEF3C
2004-09-28 16:11:02 [979] LOG: recycled transaction log file 00F300F9
2004-09-28 16:11:02 [979] LOG: recycled transaction log file 00F300FA
2004-09-28 16:11:02 [979] LOG: recycled transaction log file 00F300FB
2004-09-28 16:11:02 [979] LOG: recycled transaction log file 00F300FC
2004-09-28 16:11:02 [979] LOG: recycled transaction log file 00F300FD
2004-09-28 16:11:02 [979] LOG: recycled transaction log file 00F300FE
2004-09-28 16:11:02 [979] LOG: database system is ready
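The "invalid page header ... zeroing out page" warnings come from a sanity check applied to each page touched during redo. A simplified, hypothetical version of that check is sketched below; the real PageHeaderData in bufpage.h has more fields, and the real test inspects them too, but the core idea is the same: the page's free-space pointers must be consistently ordered and inside the block.

```c
#include <stdint.h>

#define BLCKSZ 8192   /* default PostgreSQL block size */

/* FakePageHeader: hypothetical, cut-down page header for illustration;
 * not the backend's actual struct. */
typedef struct
{
    uint16_t pd_lower;    /* offset to start of free space */
    uint16_t pd_upper;    /* offset to end of free space */
    uint16_t pd_special;  /* offset to start of special space */
} FakePageHeader;

static int
page_header_is_valid(const FakePageHeader *p)
{
    /* header <= lower <= upper <= special <= end of block */
    return p->pd_lower >= sizeof(FakePageHeader) &&
           p->pd_lower <= p->pd_upper &&
           p->pd_upper <= p->pd_special &&
           p->pd_special <= BLCKSZ;
}
```

A torn or partially written page (as after a crash mid-write) typically fails this ordering test, and during WAL replay the server can safely zero the page because a full copy is about to be restored from the log.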
Re: [HACKERS] System crash - invalid page header messages in log
Thanks for the reply. I will do a dump of the databases tonight (around 400GB of data).

Jim

-- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers [EMAIL PROTECTED]
Sent: Tue, 28 Sep 2004 17:42:04 -0400
Subject: Re: [HACKERS] System crash - invalid page header messages in log

Jim Buttafuoco [EMAIL PROTECTED] writes:
One of my systems crashed today and when Postgres started it gave the following warnings. Is this OK?

Should theoretically be OK --- all of those pages were overwritten with valid data from WAL playback. What you need to worry about more is whether the crash lost any pages that were *not* restored from the log (because they hadn't been modified lately). I'd suggest running a pg_dump to see if any bad data turns up.

regards, tom lane

--- End of Original Message ---
Re: [HACKERS] beta 1 failed on linux mipsel
Tom,

Still having problems. This is the output of make check. I am going to compile with debug symbols and try to attach with gdb next. Any other ideas?

Thanks
Jim

-
Running in noclean mode. Mistakes will not be cleaned up.
The files belonging to this database system will be owned by user pgsql. This user must also own the server process.
The database cluster will be initialized with locale C.
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/global ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/pg_xlog ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/pg_xlog/archive_status ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/pg_clog ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/pg_subtrans ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/base ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/base/1 ... ok
creating directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/pg_tblspc ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 50
creating configuration files ... ok
creating template1 database in /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data/base/1 ...
PANIC: stuck spinlock (0x2b052030) detected at lwlock.c:246
child process was terminated by signal 6
initdb: failed
initdb: data directory /home/pgsql/postgresql-8.0.0beta1/src/test/regress/./tmp_check/data not removed at user's request

-- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers [EMAIL PROTECTED]
Sent: Mon, 30 Aug 2004 13:23:03 -0400
Subject: Re: [HACKERS] beta 1 failed on linux mipsel

Jim Buttafuoco [EMAIL PROTECTED] writes:
I have confirmed that 7.4.3 works on the cobalt raq mipsel system. I have not looked at the s_lock.[ch] code as I have not coded in C for years and don't know the backend code very well. Do you have any ideas?

Do you want to give me or another pg-hacker a temporary account on your machine?

regards, tom lane

--- End of Original Message ---
[HACKERS] beta 1 failed on linux mipsel
Trying to test beta 1 on Debian linux mipsel (sarge), I am getting the following error during initdb:

PANIC: stuck spinlock (0x2b052030) detected at lwlock.c:246

Here is the complete initdb run.

[EMAIL PROTECTED]:~$ initdb
The files belonging to this database system will be owned by user postgres. This user must also own the server process.
The database cluster will be initialized with locale C.
creating directory /usr/local/pgsql/data ... ok
creating directory /usr/local/pgsql/data/global ... ok
creating directory /usr/local/pgsql/data/pg_xlog ... ok
creating directory /usr/local/pgsql/data/pg_xlog/archive_status ... ok
creating directory /usr/local/pgsql/data/pg_clog ... ok
creating directory /usr/local/pgsql/data/pg_subtrans ... ok
creating directory /usr/local/pgsql/data/base ... ok
creating directory /usr/local/pgsql/data/base/1 ... ok
creating directory /usr/local/pgsql/data/pg_tblspc ... ok
selecting default max_connections ... 10
selecting default shared_buffers ... 50
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ...
PANIC: stuck spinlock (0x2b052030) detected at lwlock.c:246
child process was terminated by signal 6
initdb: failed
initdb: removing data directory /usr/local/pgsql/data

Let me know if this is a known issue and/or what other info you need.

Thanks
Jim
Re: [HACKERS] beta 1 failed on linux mipsel
The system is a Cobalt RaQ; I am also going to test on some DECstations (also mipsel) and SGI (mips) systems. I didn't use the disable-spinlocks option; will try tomorrow. I did see some MIPS spinlock code in s_lock.c. Shouldn't this also work on mipsel CPUs?

Jim

-- Original Message ---
From: Tom Lane [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: pgsql-hackers [EMAIL PROTECTED]
Sent: Sun, 29 Aug 2004 15:27:32 -0400
Subject: Re: [HACKERS] beta 1 failed on linux mipsel

Jim Buttafuoco [EMAIL PROTECTED] writes:
trying to test beta 1 on Debian linux mipsel (sarge).

What is that, a playstation 2? Last we heard, that chip had no spinlock support. Did you use configure --disable-spinlocks?

regards, tom lane

--- End of Original Message ---
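For reference, the contract the MIPS spinlock code has to satisfy can be sketched portably. This is a hypothetical illustration using GCC atomic builtins, not the s_lock.c assembly (which builds the same operation out of the ll/sc load-linked/store-conditional instructions that some MIPS cores lack, which is why --disable-spinlocks, the semaphore-emulation fallback, matters here).

```c
/* Hypothetical sketch, not s_lock.c: tas() atomically sets the lock
 * word to 1 and returns its previous value, so a return of 0 means
 * the caller acquired the lock.  A "stuck spinlock" PANIC fires when
 * repeated tas() calls never return 0 within the timeout. */
typedef volatile int slock_t;

static int
tas(slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1);
}

static void
s_unlock(slock_t *lock)
{
    __sync_lock_release(lock);
}
```

If the hardware's atomic primitive silently misbehaves, the lock word never reads back as expected and every waiter spins until the stuck-spinlock check aborts the process, which matches the initdb failure above.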
[HACKERS] PANIC: hash table Shared Buffer Lookup Table corrupted
All,

Just started an upgrade from 7.2.X to 7.4.2. I am getting the following PANIC when loading the data from a 7.2.4 db using 7.4.2 pg_dump via a pipe:

pg_dump -h bda4c OLD_DB | psql -h bda5 -e NEW_DB

bda4c is PostgreSQL 7.2.4 on i686-pc-linux-gnu, compiled by GCC 2.95.4
bda5 is PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC 2.95.4

Both systems are Debian 3 with updates, but Postgresql was compiled from source. I stopped and started the postmaster and am still getting this. Any ideas? The system has been loading data (via INSERTs) for days (93936795 rows, 10 tables) with no problem. This is my first try at COPY.

Thanks
Jim

2004-03-23 21:56:48 [6868] PANIC: hash table "Shared Buffer Lookup Table" corrupted
CONTEXT: COPY emi_110101, line 26679: 18860 A-9417-7011-044-89 200304 110101 2003-03-28 0 0 0 000 10 3056217468 2 050102225 0 0 0...
STATEMENT: COPY emi_110101 (df_id, bi_billing_number, mdf_id, recid, message_date, from_number_len, from_number, overflow_digits, to_number_len, to_number, orig_term_id, fgd_tgn, carrier_id, access_method, routing_method, dialing_method, ani, ncta, connect_time, billable_time, method_of_rec, from_rac, local_co_info, type_of_service, method_of_signaling, indicator_1, indicator_2, indicator_3, indicator_4, indicator_5, indicator_6, indicator_7, indicator_8, indicator_9, indicator_10, indicator_11, indicator_12, indicator_13, indicator_14, indicator_15, indicator_16, indicator_17, indicator_18, indicator_19, indicator_20, opr_unit, rec_pt_id, bill_rac_rao, indicator_21, indicator_22, indicator_23, indicator_24, indicator_25, indicator_26, indicator_27, indicator_28, indicator_29, indicator_30, bill_number, neca_company_code, fgd_call_event_status, fg_id_code, toll_library_code, toll_settlement, conversion_time, orig_lrn, orig_ocn, orig_lrn_source_ind, term_lrn, term_ocn, term_lrn_source_ind, send_to_ocn, bda_from_place, bda_from_place_st, bda_to_place, bda_to_place_st, emi_modules, bda_from_clli, bda_to_clli, bda_to_ocn, bda_csc) FROM stdin;
2004-03-23 21:56:48 [6854] LOG: server process (PID 6868) was terminated by signal 6
2004-03-23 21:56:48 [6854] LOG: terminating any other active server processes
2004-03-23 21:56:50 [6869] WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
2004-03-23 21:56:50 [6854] LOG: all server processes terminated; reinitializing
2004-03-23 21:56:51 [6870] LOG: database system was interrupted at 2004-03-23 21:56:47 EST
2004-03-23 21:56:51 [6870] LOG: checkpoint record is at 27/90395DE4
2004-03-23 21:56:51 [6870] LOG: redo record is at 27/90395DE4; undo record is at 0/0; shutdown TRUE
2004-03-23 21:56:51 [6870] LOG: next transaction ID: 17674; next OID: 119938497
2004-03-23 21:56:51 [6870] LOG: database system was not properly shut down; automatic recovery in progress
2004-03-23 21:56:51 [6870] LOG: redo starts at 27/90395E24
2004-03-23 21:56:57 [6870] LOG: unexpected pageaddr 27/6FF6 in log file 39, segment 180, offset 16121856
2004-03-23 21:56:57 [6870] LOG: redo done at 27/B4F5FB98
2004-03-23 21:57:18 [6870] LOG: database system is ready
Re: [HACKERS] more contrib: log rotator
Would the plan be to add it to pg_ctl?

Andrew Sullivan [EMAIL PROTECTED] writes:
Is anyone interested in having pglog-rotator?

FWIW, I saw an early version of pglog-rotator about a year and a half ago (while consulting for LibertyRMS), and thought at the time that it was pretty cool. So I'm for including it ... maybe even as mainstream instead of contrib.

regards, tom lane
Re: Table spaces again [was Re: [HACKERS] Threaded Sorting]
Is this NOT what I have been after for many months now? I dropped the tablespace/location idea before 7.2 because there didn't seem to be any interest. Please see my past emails for the SQL commands and on-disk directory layout I have proposed. I have a working 7.2 system with tablespaces/locations (whatever you want to call them; I like "locations" because tablespaces are an Oracle thing). I would like to get this code ported into 7.4.

Jim

Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
how would you handle table spaces?

The plan that's been discussed simply defines a tablespace as being a directory somewhere; physical storage of individual tables would remain basically the same, one or more files under the containing directory. The point of this being, of course, that the DBA could create the tablespace directories on different partitions or volumes in order to provide the behavior he wants. In my mind this would be primarily a cleaner, more flexible reimplementation of the existing database location feature.

regards, tom lane
Re: [HACKERS] How to REINDEX in high volume environments?
Just wanted to pipe in here. I am still very interested in tablespaces (I have many database systems that are over 500GB and growing) and am willing to port my tablespace patch to 7.4. I have everything working in 7.2 (but only tested here), but the patch was not accepted. I didn't see a great speed improvement, but the patch helps with storage management.

Recap: the patch would enable the following:

- a database to have a default data tablespace and index tablespace
- a user to have a default data and index tablespace
- a table to have a specific tablespace
- an index to have a specific tablespace

I would also like to add a namespace (schema) having default data and index tablespaces.

Jim

Justin Clift [EMAIL PROTECTED] writes:
Shridhar Daithankar wrote:
Looks like we should have a subdirectory in database directory which stores index.

That was my first thought also, but an alternative/additional approach would be this (not sure if it's workable):

See the tablespaces TODO item. I'm not excited about building half-baked versions of tablespaces before we get around to doing the real thing ...

regards, tom lane
[HACKERS] bt_fixroot: not valid old root page
Hi all,

I started getting these errors today on my test database (pg 7.2.1). I have been vacuum/reindex/analyze(ing) the table all day (after updating 10+ rows) and am wondering what could have caused this.

Thanks
Jim

2002-05-19 18:16:18 [1673] NOTICE: bt_getroot[billed_features_btn_idx2]: fixing root page
2002-05-19 18:16:18 [1673] ERROR: bt_fixroot: not valid old root page
Re: [HACKERS] Bulkloading using COPY - ignore duplicates?
I have used Oracle SQL*Loader for many years now. It has the ability to put rejects/discards/bad records into an output file and keep on going; maybe this should be added to the COPY command:

COPY [ BINARY ] table [ WITH OIDS ]
    FROM { 'filename' | stdin }
    [ [USING] DELIMITERS 'delimiter' ]
    [ WITH NULL AS 'null string' ]
    [ DISCARDS 'filename' ]

What do you think?

Tom Lane writes:
It occurs to me that skip-the-insert might be a useful option for INSERTs that detect a unique-key conflict, not only for COPY. (Cf. the regular discussions we see on whether to do INSERT first or UPDATE first when the key might already exist.) Maybe a SET variable that applies to all forms of insertion would be appropriate.

What we need is:
1. Make errors not abort the transaction.
2. Error codes

Then you can make your client deal with this in whichever way you want, at least for single-value inserts. However, it seems to me that COPY ignoring duplicates can easily be done by preprocessing the input file.

-- Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
Re: [HACKERS] Status of index location patch
Vadim,

I guess I am still confused... In dbcommands.c, resolve_alt_dbpath() takes the db oid as an argument. This number is used to find the directory where the data files live. All the patch does is put the indexes into a "db oid_index" directory instead of "db oid".

This is for tables:
    snprintf(ret, len, "%s/base/%u", prefix, dboid);
This is for indexes:
    snprintf(ret, len, "%s/base/%u_index", prefix, dboid);

And in catalog.c,
tables:
    sprintf(path, "%s/base/%u/%u", DataDir, rnode.tblNode, rnode.relNode);
indexes:
    sprintf(path, "%s/base/%u_index/%u", DataDir, rnode.tblNode, rnode.relNode);

Can you explain how I would get the tblNode for an existing database's index files if it doesn't have the same OID as the database entry in pg_database?

Jim

Just wondering what is the status of this patch. It seems from comments that people like the idea. I have also looked in the archives for other people looking for this kind of feature and have found a lot of interest. If you think it is a good idea for 7.2, let me know what needs to be changed and I will work on it this weekend.

Just change index' dir naming as was already discussed.

Vadim
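The naming scheme being discussed can be restated as a self-contained sketch (the helper name and paths are illustrative, not the patch's actual function): for the same database OID and relation node, data files go under base/<dboid>/ and index files under a sibling base/<dboid>_index/ directory.

```c
#include <stdio.h>   /* snprintf; size_t */
#include <string.h>  /* for callers comparing the results */

/* relation_paths: hypothetical helper illustrating the patch's layout.
 * Builds both the heap path and the index path for one relation. */
static void
relation_paths(char *data_path, char *index_path, size_t len,
               const char *prefix, unsigned dboid, unsigned relnode)
{
    snprintf(data_path, len, "%s/base/%u/%u", prefix, dboid, relnode);
    snprintf(index_path, len, "%s/base/%u_index/%u", prefix, dboid, relnode);
}
```

This makes the question in the message concrete: both paths are keyed on the database OID (tblNode), so any index directory whose name does not match the pg_database entry's OID cannot be located this way.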
Re: [HACKERS] Status of index location patch
Yes, that is exactly what I am going to do for 7.3 (had trouble adding tblNode to pg_class, so I stopped for now...).

Can you explain how I would get the tblNode for an existing database's index files if it doesn't have the same OID as the database entry in pg_database?

Well, keeping in mind the future tablespace implementation, I would add tblNode to pg_class, and in pg_database I'd have defaultTblNode and indexTblNode. If it's too late to do for 7.2 then let's wait till 7.3.

Vadim
[HACKERS] Status of index location patch
All,

Just wondering what is the status of this patch. It seems from comments that people like the idea. I have also looked in the archives for other people looking for this kind of feature and have found a lot of interest. If you think it is a good idea for 7.2, let me know what needs to be changed and I will work on it this weekend.

Thanks
Jim
Re: [HACKERS] Index location patch for review
just change the work tablespace below to location and that is exactly what this patch is trying to do. You can think of the LOCATION and INDEX_LOCATION provided to the create database command as the default storage locations for these objects. In the future, I want to enable the DBA to specify LOCATIONS any object just like Oracle. I am also planning on a pg_locations table and create location command which will do what the current initlocation script does and more. Jim I am very new to this mailinglist so I apologize if I start talking early but I've been working as a sysadmin and that kind of problems for a long while now and my suggestion is that it is a start but I think that we should aim a little higher than this and use something more like the Oracle approach instead. Where they introduce an abstraction layer in the form of a tablespace. And this tablespace is then referenced from the create table or create index instead. eg: table - tablespace - path to physical storage index - tablespace - path to physical storage Advantages: Changes can be done to storage whithout need to change create scripts for db, tables and so on. Designers can specify in which tablespace tables/indexes should reside based on usage. Sysadmins can work with tablespaces and change paths without changing anything in the database/table/index definitions. The alternative is symlinks to distribute the load and that is not a pretty sight dba-wise. Hope you can bare with me on this, since I think it is an very important issue. I'm unfortunately not a fast coder yet (but I'm getting faster :-) ). But I could start writing a spec if someone is interrested. Bruce Momjian wrote: Hi all, Attached is a patch that adds support for specifying a location for indexes via the create database command. I believe this patch is complete, but it is my first . This patch allows index locations to be specified as different from data locations. Is this a feature direction we want to go in? Comments? 
--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
Re: [HACKERS] Index location patch for review
I agree that groups of objects in separate data storage areas are needed, and that is what I am trying to get to. Don't you think that PostgreSQL with locations/files is the same as Oracle tablespaces? I don't think we want to invent our own filesystem (which is what a tablespace really is...).

Jim

Attached is a patch that adds support for specifying a location for indexes via the CREATE DATABASE command. I believe this patch is complete, but it is my first. This patch allows index locations to be specified as different from data locations. Is this a feature direction we want to go in? Comments?

The more general and standard way to go is TABLESPACEs. But probably the proposed feature will be compatible with tablespaces when we get them: we could use the new CREATE DATABASE syntax to specify a default tablespace for indices. Unfortunately I removed the message with the patch; can you send it to me, Bruce?

Vadim
Re: [HACKERS] Index location patch for review
Vadim, I don't understand the WAL issue below, can you explain? The dir name is the same name as the database with _index added to it. This is how the current datpath stuff works; I really just copied the datpath code to get this patch to work...

Also, I have been running this patch (both 7.1.3 and 7.2devel) against some of my company's applications. I have loaded a small database (10G data and 15G indexes) both with and without the patch. There seems to be between a 5% and 10% performance gain on the most common db commands (selects, selects with joins, and inserts). The system is a dual P3 733 with 3 IDE disks: one for PGDATA, a second for APPDATA, and a third for APPIDX. As you can see, I have separated the WAL files, GLOBAL, application data and application indexes over 3 disks. Our production systems have around 50k queries/day (not including data loads), so I believe that when this patch gets put into production, with 20 disks and 10 databases, the performance increase should go up.

I should also add that I have been working on the second part of this patch, which will allow tables and indexes to be put into LOCATIONS also. I am planning on having a PG_LOCATIONS table and CREATE|DROP|ALTER LOCATION SQL commands instead of the initlocation shell script we currently have. The only things stopping me now are the 7.2 testing I am planning on doing once the beta begins, and problems adding a location column to the pg_class table with the necessary support code in heap.c...

Thanks for all the comments (keep them coming)

Jim

The more general and standard way to go is TABLESPACEs. But probably the proposed feature will be compatible with tablespaces when we get them:

Will it be? I'm afraid of creating a backwards-compatibility problem for ourselves when it comes time to implement tablespaces.
As I said, INDEX_LOCATION in CREATE DATABASE could mean the location of the default tablespace for indices in the future, and one will be able to override the tablespace for a particular index with a TABLESPACE clause in the CREATE INDEX command.

At the very least I'd like to see some information demonstrating how much benefit there is to this proposed patch, before we consider whether to adopt it. If there's a significant performance benefit to splitting a PG database along the table-vs-index divide, then it's interesting as a short-term improvement ... but Jim didn't even make that assertion, let alone provide evidence to back it up.

Agreed. He mentioned a significant performance difference, but it would be great to see results of pgbench tests with a scaling factor >= 10. Jim?

Also, after reviewing the patch, I have to say that it will NOT work with WAL. Jim, please do not name the index dir as TBL_NODE_index. Instead, just use a different TBL_NODE for indices (a different number). It's not good to put if (reln->rd_rel->relkind == RELKIND_INDEX) stuff into the storage manager - only two numbers (tblnode, relnode) must be used to identify a file, without any other logical information totally unrelated to storage issues.

Vadim
Re: [HACKERS] Index location patch for review
I could also symlink all index files back to the tblnode directory?

I don't understand the WAL issue below, can you explain? The dir name is the same name as the database with _index added to it. This is how the current datpath stuff works; I really just copied the datpath code to get this patch to work...

At the time of after-crash recovery, WAL is not able to read the relation description from the catalog, and so only the relfilenode is provided to the storage manager in the relation structure (look at backend/access/transam/xlogutils.c:XLogOpenRelation). Well, we could add an Index/Table file type identifier to RmgrData (rmgr.c in the same dir) to set relkind in the relation structure, but I don't see any reason to do so when we can just use a different tblnode number for indices and name index dirs just like the other dirs under 'base' are named - i.e. only the tblnode number is used for dir names, without any additions unrelated to storage issues.

Vadim
Re: [HACKERS] Index location patch for review (more pgbench results)
Moving the test to a system with SCSI disks gave different results: there is NO difference between having the indexes on the same disk as the data or on a different disk while running pgbench. So I leave it up to you guys whether to include the patch or not. I do believe that even if performance doesn't increase, this patch has a lot of other benefits for admins. Let me know.

Jim

Here are my pgbench results. As you can see, I am getting 2X the tps with the 2 directories. I believe this is a BIG win for PostgreSQL if we can figure out the WAL recovery issues. Can someone other than me apply the patch and verify the pgbench results? My hardware setup is a dual processor P3/733 running Redhat 7.1 with 512 megs of memory. The postgresql.conf file is the installed version with NO changes.

Jim

template1=# create database one_dir with location='PGDATA1';
template1=# create database two_dir with location='PGDATA1' index_location='PGIDX1';

for X in 1 2 3 4 5 6 7 8 9 10
do
  pgbench -i -s 10 one_dir >> one_dir.log
  pgbench -i -s 10 two_dir >> two_dir.log
done

bash-2.04$ grep 'excluding' one_dir.log
tps = 44.319306 (excluding connections establishing)
tps = 34.641020 (excluding connections establishing)
tps = 50.516889 (excluding connections establishing)
tps = 52.747039 (excluding connections establishing)
tps = 16.203821 (excluding connections establishing)
tps = 36.902861 (excluding connections establishing)
tps = 52.511769 (excluding connections establishing)
tps = 53.479882 (excluding connections establishing)
tps = 54.599429 (excluding connections establishing)
tps = 36.780419 (excluding connections establishing)
tps = 48.048279 (excluding connections establishing)

bash-2.04$ grep 'excluding' two_dir.log
tps = 58.739049 (excluding connections establishing)
tps = 100.259270 (excluding connections establishing)
tps = 103.156166 (excluding connections establishing)
tps = 110.829358 (excluding connections establishing)
tps = 111.929690 (excluding connections establishing)
tps = 106.840118 (excluding connections establishing)
tps = 101.563159 (excluding connections establishing)
tps = 102.877060 (excluding connections establishing)
tps = 103.784717 (excluding connections establishing)
tps = 53.056309 (excluding connections establishing)
tps = 73.842428 (excluding connections establishing)

Also I have been running this patch (both 7.1.3 and 7.2devel) against some of my company's applications. I have loaded a small database 10G ...

We are not familiar with your applications. It would be better to see results of a test suite available to the community; pgbench is the first to come to mind. Such tests would be more valuable.

Vadim
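To put a number on the "2X" claim, the tps figures quoted from the two logs can be averaged directly (a quick sketch; the values are copied from the log excerpts above):

```shell
#!/bin/sh
# Average the pgbench tps figures from the one_dir and two_dir logs.
one_dir="44.319306 34.641020 50.516889 52.747039 16.203821 36.902861 52.511769 53.479882 54.599429 36.780419 48.048279"
two_dir="58.739049 100.259270 103.156166 110.829358 111.929690 106.840118 101.563159 102.877060 103.784717 53.056309 73.842428"

# avg: mean of a whitespace-separated list of numbers, 2 decimal places
avg() { echo "$1" | tr ' ' '\n' | awk '{s+=$1; n++} END {printf "%.2f", s/n}'; }

a1=$(avg "$one_dir")
a2=$(avg "$two_dir")
echo "one_dir mean tps: $a1"
echo "two_dir mean tps: $a2"
echo "speedup: $(awk -v a="$a1" -v b="$a2" 'BEGIN {printf "%.2fx", b/a}')"
```

The means work out to about 43.70 vs 93.35 tps, i.e. roughly a 2.14x speedup, so "2X" is a fair summary of this run.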
[HACKERS] Index location patch for review
Hi all,

Attached is a patch that adds support for specifying a location for indexes via the CREATE DATABASE command. I believe this patch is complete, but it is my first.

Thanks
Jim

location.diffs
[HACKERS] pg_dump -C option
All, I am working on some patches to the code and I noticed that pg_dump -C database doesn't provide the database location information in the dump file. Is this correct?

Thanks
Jim

Example:

  datname   | datdba | encoding | datistemplate | datallowconn | datlastsysoid | datpath | idxpath
 -----------+--------+----------+---------------+--------------+---------------+---------+---------
  jb1       |   5433 |        0 | f             | t            |         18540 |         | PGIDX1
  template1 |   5433 |        0 | t             | t            |         18540 |         |
  template0 |   5433 |        0 | t             | f            |         18540 |         |
  jb2       |   5433 |        0 | f             | t            |         18540 | PGDATA1 | PGIDX1
  jb3       |   5433 |        0 | f             | t            |         18540 | PGDATA1 |
  4051      |   5433 |        0 | f             | t            |         18540 | PGDATA1 | PGIDX1

(Please ignore the IDXPATH column for now, as I am trying to add support for INDEX locations; I am running out of room on my current system and I don't like the symlink-your-own-tables/index-files idea.)

and the output of pg_dump -C:

--
-- Selected TOC Entries:
--
\connect - pgtest
--
-- TOC Entry ID 1 (OID 0)
--
-- Name: jb2 Type: DATABASE Owner: pgtest
--
Create Database jb2;
\connect jb2 pgtest
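To make the omission concrete: jb2 was created with both a data location and an index location, so one would expect the dump to reproduce them. A hypothetical sketch of the corrected output (the index_location clause is from the poster's experimental patch, not stock PostgreSQL, and the exact spelling is an assumption):

```sql
-- Hypothetical corrected pg_dump -C output for jb2, carrying the
-- datpath/idxpath values recorded in pg_database:
Create Database jb2 with location = 'PGDATA1' index_location = 'PGIDX1';
\connect jb2 pgtest
```

Without the location clause, restoring the dump silently places the database back under the default PGDATA directory.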
Re: [HACKERS] pg_dump -C option
Will do.

Jim Buttafuoco writes:
> I am working on some patches to the code and I noticed that pg_dump -C database doesn't provide the database location information in the dump file. Is this correct?

Your observation is correct, but the behaviour is not. Feel free to send a patch.

--
Peter Eisentraut [EMAIL PROTECTED] http://funkturm.homeip.net/~peter
[HACKERS] pg_dump -C and locations (with subject this time)
(Sorry for the repost; I forgot the subject last time...)

All, I am working on some patches to the code and I noticed that pg_dump -C database doesn't provide the database location information in the dump file. Is this correct?

Thanks
Jim

Example:

  datname   | datdba | encoding | datistemplate | datallowconn | datlastsysoid | datpath | idxpath
 -----------+--------+----------+---------------+--------------+---------------+---------+---------
  jb1       |   5433 |        0 | f             | t            |         18540 |         | PGIDX1
  template1 |   5433 |        0 | t             | t            |         18540 |         |
  template0 |   5433 |        0 | t             | f            |         18540 |         |
  jb2       |   5433 |        0 | f             | t            |         18540 | PGDATA1 | PGIDX1
  jb3       |   5433 |        0 | f             | t            |         18540 | PGDATA1 |
  4051      |   5433 |        0 | f             | t            |         18540 | PGDATA1 | PGIDX1

(Please ignore the IDXPATH column for now, as I am trying to add support for INDEX locations; I am running out of room on my current system and I don't like the symlink-your-own-tables/index-files idea.)

and the output of pg_dump -C:

--
-- Selected TOC Entries:
--
\connect - pgtest
--
-- TOC Entry ID 1 (OID 0)
--
-- Name: jb2 Type: DATABASE Owner: pgtest
--
Create Database jb2;   <== This is missing the with location... stuff
\connect jb2 pgtest
[HACKERS] Running config vars
Hi all (I hope this is the correct list),

Under Oracle there is v$parameter, which lists ALL config variables. Under psql there is the SHOW command, but this only lists 1 variable at a time. I have written a shell script (attached) that shows ALL known variables. My question is: can this script get included under the contrib directory, and is there a way to make it into a view? I believe this kind of info will help in troubleshooting problems.

thanks
Jim

pg_conf
Re: [HACKERS] Fw: Running config vars
I was looking for some way via standard SQL (I use Perl DBI) to list these variables. I don't believe the SHOW command is available via DBI.

Jim

I think the way to do this is for SHOW ALL to show all settings.

> Hi all (I hope this is the correct list), Under Oracle there is v$parameter, which lists ALL config variables. Under psql there is the SHOW command, but this only lists 1 variable. I have written a shell script (attached) that shows ALL known variables. My question is: can this script get included under the contrib directory, and is there a way to make it into a view? I believe this kind of info will help in troubleshooting problems. thanks Jim

--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 853-3000
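For the record, this is the direction PostgreSQL eventually took: SHOW ALL lists every setting, and later releases added a pg_settings system view, which behaves as an ordinary relation and is therefore queryable from DBI:

```sql
-- List every configuration variable and its current value:
SHOW ALL;

-- pg_settings works as a plain relation, so a Perl DBI client can
-- fetch the same information with an ordinary SELECT:
SELECT name, setting FROM pg_settings ORDER BY name;
```

This removes the need for an external shell script: any SQL client gets the full configuration with one query.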
Re: [HACKERS] Problem with group by in conjuction with Views
This seems to work for me. I used the snapshot from 3/28 on Solaris 8:

SELECT service, count(*) AS GebruikersAantal FROM tbtrouble GROUP BY service;
  service   | gebruikersaantal
 -----------+------------------
  Service 1 |                2
  Service 3 |                2
  Service 4 |                1
 (3 rows)

SELECT service, count(*) AS GebruikersAantal FROM vwtrouble GROUP BY service;
  service   | gebruikersaantal
 -----------+------------------
  Service 1 |                2
  Service 3 |                2
  Service 4 |                1
 (3 rows)

> Hi there, I have found a small but annoying bug. I have created a view whose SQL statement contains a GROUP BY. Then I compose a SQL statement using this view with another GROUP BY and a COUNT(*). The COUNT(*) doesn't count the number of grouped records of the view; it counts the records grouped by both the GROUP BY in the view and the GROUP BY in the select statement. It counts all the records grouped instead of only the records grouped from the view. This is wrong (IMHO). When I use a temporary table instead of a view, all things work OK. IMHO views shouldn't differ from temporary tables. To make things a bit more clear I have added an SQL attachment; you can run the attachment in an empty database from psql to have a look at the problem. I sometimes get another database.sql error too: 'My bit's blew right of the end of the world'. (This is when I am using an ODBC link to my db.) Mighty thanks in advance, Jeroen Eitjes j.eitjesnospam@chem.leidenuniv.nl eitjesnospam@walras.nl
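The reporter's attachment is not in the archive, but the shape of the complaint reduces to a short script. The definitions below are a guessed reconstruction (the column names and sample rows are assumptions, chosen to match the counts shown above):

```sql
-- Assumed reconstruction of the reported case: a view containing a
-- GROUP BY, then an outer GROUP BY + COUNT(*) over that view.
CREATE TABLE tbtrouble (service text, username text);
INSERT INTO tbtrouble VALUES
  ('Service 1', 'a'), ('Service 1', 'b'),
  ('Service 3', 'c'), ('Service 3', 'd'),
  ('Service 4', 'e');

-- The view already groups its rows ...
CREATE VIEW vwtrouble AS
  SELECT service, username FROM tbtrouble GROUP BY service, username;

-- ... so the outer COUNT(*) should count the view's grouped rows,
-- giving the same result as the query against the base table:
SELECT service, count(*) AS GebruikersAantal
FROM vwtrouble GROUP BY service;
```

With the fix confirmed above, both queries return counts of 2, 2 and 1 for the three services.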
Re: [HACKERS] Re: Final Call: RC1 about to go out the door ...
Time to speak up: I have an HPUX 9.07 system and will test today.

Jim

Thomas Lockhart [EMAIL PROTECTED] writes:

HPUX 10.20 (HP-PA architecture)

Time to drop 9.2 from the list? I don't have it running here anymore. Is there anyone on the list who can test on HPUX 9?

Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)

What processor? Tatsuo had tested on a 603... It's a Powerbook G3 (FireWire model), but I'm not sure which chip is inside (and Apple's spec sheet isn't too helpful)...

regards, tom lane
Re: [HACKERS] Re: Final Call: RC1 about to go out the door ...
HPUX 9.07 with GCC 2.8.1 fails the regression tests. I will look into this later. I would NOT hold anything up because of this.

Jim

> Time to speak up: I have an HPUX 9.07 system and will test today.
>
> Jim
>
> Thomas Lockhart [EMAIL PROTECTED] writes:
>
> HPUX 10.20 (HP-PA architecture)
>
> Time to drop 9.2 from the list? I don't have it running here anymore. Is there anyone on the list who can test on HPUX 9?
>
> Linux/PPC (LinuxPPC 2000 Q4 distro tested here; 2.2.18 kernel I think)
>
> What processor? Tatsuo had tested on a 603... It's a Powerbook G3 (FireWire model), but I'm not sure which chip is inside (and Apple's spec sheet isn't too helpful)...
>
> regards, tom lane