Re: [GENERAL] Anything I can do to speed up this query?
[ Marcus, you should follow up to the lists, so that other people can benefit from the discussion ]

On Thu, 2006-12-07 at 09:25 +0100, Marcus Engene wrote:
> Ragnar wrote:
> > On Wed, 2006-12-06 at 17:34 -0500, Wei Weng wrote:
> > > On Tue, 2006-12-05 at 15:56 -0500, Wei Weng wrote:
> > > > I have a table that has roughly 200,000 entries and many columns.
> > > > SELECT Field1, Field2, Field3... FieldN FROM TargetTable ORDER BY Field1 DESC;
> > > you might consider CLUSTER
> Would putting the index on a separate tablespace on another harddisk have a similar effect?

We haven't had any real information from the OP. As far as I can tell, he has not answered any questions about his case, so we really have no idea where his problem is. If he has extra harddisks that are not used, there may be many ways of taking advantage of that.

gnari

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
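Absent more detail from the OP, the generic shape of the index/CLUSTER suggestion above might look like this (a sketch only; `TargetTable` and `Field1` are the placeholder names from the thread, and 8.1-era syntax is assumed):

```sql
-- A plain btree index can be scanned backwards, so an ascending index
-- also serves ORDER BY Field1 DESC.
CREATE INDEX targettable_field1_idx ON TargetTable (Field1);

-- CLUSTER rewrites the table in index order, which can speed up large
-- ordered scans; it takes an exclusive lock and must be re-run as the
-- table churns. (8.1-era syntax: CLUSTER indexname ON tablename.)
CLUSTER targettable_field1_idx ON TargetTable;

ANALYZE TargetTable;
```

Whether this helps depends on whether the query is I/O-bound on the ordered scan, which is exactly the information missing from the thread.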
Re: [GENERAL] how to install 8.2 with yum on centos 4?
Hi, On Wed, 2006-12-06 at 23:32 -0800, [EMAIL PROTECTED] wrote: I got it installed on my windows box and my debian box. But is there a way to install 8.2 on centos 4 using yum? I am about to create a yum repo for PGDG RPM sets. Clodoaldo Pinto Neto sent me the instructions on how to create one. I'll post to the list when it is ready. Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] From Clause
Bob Pawley wrote: I am receiving an error message about a missing From Clause in an insert statement. I've tried a number of variations of adding a from clause with no success. Could someone help with the following statement? Insert Into p_id.loop_sequence (monitor) values(p_id.loops.monitor) ; You can add that schema name to your search path, you know... saves typing. What kind of value is p_id.loops.monitor? I think you meant to do a select instead of values; probably this: set search_path to p_id, public; insert into loop_sequence (monitor) select monitor from loops; Bob Pawley -- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World // ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] Cast boolean to text
Hi, I've noticed that postgresql 8.1.0 can cast a boolean to text, but version 8.1.5 CAN'T. Is this a bug or a feature? As proof, try to run this query: select 't'::bool::text On version 8.1.5 one will receive the error message can't convert boolean to text.
Re: [GENERAL] dynamic SQL - variable substitution in plpgsql
km wrote: Hi all, I could not do variable substitution in a plpgsql procedure. The variable names are taken as-is but not substituted in the SQL query. What could be the problem? Does this example even compile? I doubt that... code looks like this: -- CREATE OR REPLACE FUNCTION test(a text) RETURNS SETOF RECORD AS $$ DECLARE a text; b text; I think you meant: DECLARE b text; c text; You redeclared a and never declared c. BEGIN IF a = 'odd' THEN b := 10; c := 30; ELIF a = 'even' THEN b := 20; c := 40; END IF; (Note also that plpgsql spells the keyword ELSIF, not ELIF.) -- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World // ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
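A corrected sketch of what the OP may have been after (untested; the literal values and the trivial SELECT are placeholders): plpgsql substitutes variables in ordinary SQL statements, but never inside an EXECUTE string, where you must splice values in yourself with quote_literal()/quote_ident().

```sql
CREATE OR REPLACE FUNCTION test(a text) RETURNS SETOF record AS $$
DECLARE
    b text;
    c text;
    r record;
BEGIN
    IF a = 'odd' THEN
        b := '10';  c := '30';
    ELSIF a = 'even' THEN          -- plpgsql spells it ELSIF, not ELIF
        b := '20';  c := '40';
    END IF;
    -- Inside EXECUTE, b and c are NOT substituted automatically;
    -- concatenate them in, quoting as needed.
    FOR r IN EXECUTE 'SELECT ' || quote_literal(b) || '::text, '
                                || quote_literal(c) || '::text'
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

A SETOF record function is called with a column definition list, e.g. `SELECT * FROM test('odd') AS t(x text, y text);`.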
[GENERAL] Internal function call from C-language function
Hi, I need to call date_part() from a C function. How to do that? Thanks in advance, Zoltán Böszörményi ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Uninstall pg_trgm
Simple question: How do I uninstall pg_trgm? I can't seem to find an uninstall sql script for it like uninstall_tsearch2.sql. Regards, Henrik ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Problems connecting to server
Make sure the database server machine is up and running; if it is, then check the firewall settings, e.g. whether the iptables entries allow the connection. Hope this helps... Thank you, Shoaib On 12/7/06, Scott Marlowe [EMAIL PROTECTED] wrote: On Wed, 2006-12-06 at 14:48, Curtis Braly wrote: I recently installed this database to use with Sam Broadcaster for my internet radio station. When I log into Sam Broadcaster it should automatically connect to my database. It is giving me an error message stating that it is unable to connect to the server. I am not a big computer guy, so I am very confused about how to fix it. I have read through FAQs and troubleshooting forums with solutions but none of it makes sense to me. It is keeping me from broadcasting. Can anyone help me get this problem resolved ASAP. Help me please! What error are you getting? (cut and paste it) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: SOLVED [GENERAL] Uninstall pg_trgm
I found an uninstall script in the svn repos... cheers, On 7 Dec 2006, at 10:21, Henrik Zagerholm wrote: Simple question: How do I uninstall pg_trgm? I can't seem to find an uninstall sql script for it like uninstall_tsearch2.sql. Regards, Henrik ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem
Hi list, I've downloaded and compiled the new 8.2 on a new debian box. I also compiled and installed tsearch2 support. Now I have a db on an 8.1.5 box with tsearch2 support. How do I dump and restore my database to the new 8.2 box? I get all kinds of errors when trying to restore the db. Should I uninstall tsearch2 before dumping? pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 67; 1255 26918 FUNCTION snb_ru_init(internal) postgres pg_restore: [archiver (db)] could not execute query: ERROR: could not find function snb_ru_init in file /usr/local/pgsql/lib/tsearch2.so Command was: CREATE FUNCTION snb_ru_init(internal) RETURNS internal AS '$libdir/tsearch2', 'snb_ru_init' LANGUAGE c; pg_restore: [archiver (db)] could not execute query: ERROR: function public.snb_ru_init(internal) does not exist Command was: ALTER FUNCTION public.snb_ru_init(internal) OWNER TO postgres; WARNING: errors ignored on restore: 2 Regards, Henrik ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Cast boolean to text
You can create a custom cast for this purpose that can convert bool to text... Regards, Shoaib On 12/7/06, Willy-Bas Loos [EMAIL PROTECTED] wrote: Hi, I've noticed that postgresql 8.1.0 can cast a boolean to text, but version 8.1.5 CAN'T. Is this a bug or a feature? As proof, try to run this query: select 't'::bool::text On version 8.1.5 one will receive the error message can't convert boolean to text.
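A sketch of the suggested workaround (untested; the function name and the chosen output spelling 'true'/'false' are illustrative choices, not from the thread):

```sql
-- Define an explicit bool -> text cast out of ordinary SQL building blocks.
CREATE OR REPLACE FUNCTION bool_to_text(boolean) RETURNS text AS
$$ SELECT CASE WHEN $1 THEN 'true' ELSE 'false' END $$
LANGUAGE sql IMMUTABLE STRICT;

CREATE CAST (boolean AS text) WITH FUNCTION bool_to_text(boolean);

-- The ::text syntax invokes an explicit cast, so after this,
-- select 't'::bool::text  works again (yielding 'true').
```

Without AS IMPLICIT, the cast only fires on explicit `CAST`/`::` requests, which is all the OP's query needs.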
[GENERAL] Asynchronous replication of a PostgreSQL DB to a MySQL target
Hi! I'd like to export schema and data from a PostgreSQL database to a remote MySQL database; any changes to the PG-master should be reflected on the MySQL target in a matter of a few minutes to one hour max. Has anybody done something like this before? Here's some more background: We've got an Oracle database as our backend and a couple of PostgreSQL-DBs as our frontend databases; the schema of the backend DB is stable. There are so-called publishing jobs running every few minutes; these jobs not only update the frontend databases with any changes in the backend, they also make changes to the frontend DBs' schemas whenever the backend says so - the frontend schemas differ from the backend's, the DDL of the frontend dbs is partly defined by data in the backend. The logical thing to do would be to create another set of publishing jobs for the MySQL databases; however our current network layout makes this quite difficult, so I'd rather try and keep the MySQL db and one of the PostgreSQL dbs in near sync. My first problem is that the PostgreSQL schema is not stable, so if I simply write a couple of jobs to transport the data, I need to alter these jobs and the MySQL schema whenever there are changes to the PG schema. The second problem lies in PostgreSQL-specifics such as tsearch2 - I actually do not need nor want to replicate such metadata. Custom datatypes and functions should also be exempt from this kind of replication. My hopes aren't all too high that there's an easy way to accomplish what I wish to do, so any advice would be very much welcome - even a "can't be done that way" by somebody who has tried to travel that path before :) Kind regards Markus
Re: [GENERAL] Internal function call from C-language function
On Thu, Dec 07, 2006 at 09:48:25AM +0100, Zoltan Boszormenyi wrote: Hi, I need to call date_part() from a C function. How to do that? Look in fmgr.h for the functions {Oid,Direct,}FunctionCall* which provide various ways to call other functions. There's also FunctionCallInvoke() which is more efficient if you're going to call it lots of times. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem
Henrik -- I have only dabbled in tsearch2 so I can't offer direct advice, but this looks like the problem when upgrading the postGIS stuff ... tsearch2 might have an upgrade functionality (postGIS does), but you can also use a more laborious method that strips out the unwanted tsearch2 definitions from the old version; leave the newly compiled tsearch2 in place.

a) dump the DDL for your database
b) dump the data as its own file
c) edit the ddl sql file and break it up into three parts: 1) all definitions *except* tsearch2 related ones; postGIS stuff is always clumped together and easy to identify; don't know about tsearch2 though. 2) the tsearch2 related stuff 3) the indexes, constraints and other stuff best applied after data is loaded
d) run the ddl with just the table, type, etc definitions
e) load the data
f) run the ddl sql that defines indexes etc.
g) run vacuum analyze, tweak config settings and start testing!

HTH a little, Greg Williamson DBA GlobeXplorer LLC

-Original Message- From: [EMAIL PROTECTED] on behalf of Henrik Zagerholm Sent: Thu 12/7/2006 1:35 AM To: pgsql-general@postgresql.org Subject: [GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem

Hi list, I've downloaded and compiled the new 8.2 on a new debian box. I also compiled and installed tsearch2 support. Now I have a db on an 8.1.5 box with tsearch2 support. How do I dump and restore my database to the new 8.2 box? I get all kinds of errors when trying to restore the db. Should I uninstall tsearch2 before dumping?
pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 67; 1255 26918 FUNCTION snb_ru_init(internal) postgres pg_restore: [archiver (db)] could not execute query: ERROR: could not find function snb_ru_init in file /usr/local/pgsql/lib/tsearch2.so Command was: CREATE FUNCTION snb_ru_init(internal) RETURNS internal AS '$libdir/tsearch2', 'snb_ru_init' LANGUAGE c; pg_restore: [archiver (db)] could not execute query: ERROR: function public.snb_ru_init(internal) does not exist Command was: ALTER FUNCTION public.snb_ru_init(internal) OWNER TO postgres; WARNING: errors ignored on restore: 2 Regards, Henrik
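Greg's dump/split/restore steps might look roughly like this as commands (a hedged sketch; host and database names are placeholders, and the split in step c is done by hand in an editor):

```shell
pg_dump -s -h oldhost mydb > schema.sql    # (a) DDL only
pg_dump -a -h oldhost mydb > data.sql      # (b) data only
# (c) split schema.sql by hand into:
#     pre.sql  - tables/types etc., minus the tsearch2 objects
#     post.sql - indexes, constraints, and other post-load DDL
psql -h newhost mydb -f pre.sql            # (d) create the objects
psql -h newhost mydb -f data.sql           # (e) load the data
psql -h newhost mydb -f post.sql           # (f) build indexes/constraints
psql -h newhost mydb -c 'VACUUM ANALYZE'   # (g) then tune and test
```

The newly installed 8.2 tsearch2.sql is loaded into the target database first, so the restore never references the old tsearch2 definitions.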
Re: [GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem
Thanks for the info! So it's that easy, huh? =) I found a solution that worked but I don't know the effects yet. =) I noticed that it complained a lot about the snb_ru_init function which apparently is not present in the new tsearch2.so file. So I removed the SP snb_ru_init() from my old database. Made a new dump and voilà, it worked. I don't have much Russian text so I hope the removal of the SP won't impact functionality in the future.. =P Cheers, henrik On 7 Dec 2006, at 11:14, Gregory S. Williamson wrote: Henrik -- I have only dabbled in tsearch2 so I can't offer direct advice, but this looks like the problem when upgrading the postGIS stuff ... tsearch2 might have an upgrade functionality (postGIS does), but you can also use a more laborious method that strips out the unwanted tsearch2 definitions from the old version; leave the newly compiled tsearch2 in place. a) dump the DDL for your database b) dump the data as its own file c) edit the ddl sql file and break it up into three parts: 1) all definitions *except* tsearch2 related ones; postGIS stuff is always clumped together and easy to identify; don't know about tsearch2 though. 2) the tsearch2 related stuff 3) the indexes, constraints and other stuff best applied after data is loaded d) run the ddl with just the table, type, etc definitions e) load the data f) run the ddl sql that defines indexes etc. g) run vacuum analyze, tweak config settings and start testing! HTH a little, Greg Williamson DBA GlobeXplorer LLC -Original Message- From: [EMAIL PROTECTED] on behalf of Henrik Zagerholm Sent: Thu 12/7/2006 1:35 AM To: pgsql-general@postgresql.org Subject: [GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem Hi list, I've downloaded and compiled the new 8.2 on a new debian box. I also compiled and installed tsearch2 support. Now I have a db on an 8.1.5 box with tsearch2 support. How do I dump and restore my database to the new 8.2 box? I get all kinds of errors when trying to restore the db.
Should I uninstall tsearch2 before dumping? pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 67; 1255 26918 FUNCTION snb_ru_init(internal) postgres pg_restore: [archiver (db)] could not execute query: ERROR: could not find function snb_ru_init in file /usr/local/pgsql/lib/tsearch2.so Command was: CREATE FUNCTION snb_ru_init(internal) RETURNS internal AS '$libdir/tsearch2', 'snb_ru_init' LANGUAGE c; pg_restore: [archiver (db)] could not execute query: ERROR: function public.snb_ru_init(internal) does not exist Command was: ALTER FUNCTION public.snb_ru_init(internal) OWNER TO postgres; WARNING: errors ignored on restore: 2 Regards, Henrik
Re: [GENERAL] DBI module for postgres 1.4.3
Jasbinder Singh Bali wrote: Trying to connect to it through perl code. Just wondering if DBI would be the best tool to use to accomplish this task. Which version of DBI should I be using? I mean if any one of you could give me exact pointers to it, it would be highly appreciated. Yes, perl(DBI) is the canonical way to connect to a database from Perl. You will need the DBD::Pg driver too. Both modules can be obtained from CPAN (e.g. http://www.cpan.org ); maybe there are even binary packages available for your operating system. I'd use the latest stable version. What's the difference between a module and a bundle, as I can see while downloading DBI from the CPAN website? What exactly needs to be downloaded? I'm not sure about it. I have never heard of a 'bundle' in the context of Perl, but that may be because I know little about Perl. DBI can be downloaded from http://www.cpan.org/modules/by-module/DBI/DBI-1.53.tar.gz and DBD-Pg is at http://www.cpan.org/modules/by-module/DBD/DBD-Pg-1.49.tar.gz This is the source code. Yours, Laurenz Albe
[GENERAL] Tsearch2 / PG 8.2 Which stemmer files?
Which stemmer files is one supposed to use with 8.2 Tsearch2? Trying to compile the output from Gendict with: stem_UTF_8_german.c stem_UTF_8_german.h from: http://snowball.tartarus.org/dist/libstemmer_c.tgz gives: http://hannes.imos.net/make.txt Thanks! -- Regards, Hannes Dorbath
Re: [GENERAL] Internal function call from C-language function
Hi, Martijn van Oosterhout wrote: On Thu, Dec 07, 2006 at 09:48:25AM +0100, Zoltan Boszormenyi wrote: Hi, I need to call date_part() from a C function. How to do that? Look in fmgr.h for the functions {Oid,Direct,}FunctionCall* which provide various ways to call other functions. There's also FunctionCallInvoke() which is more efficient if you're going to call it lots of times. Have a nice day, Thanks, I found the DirectFunctionCall family, that wasn't the problem. The real trick was that inside a C function, I have to use timestamp_part(), as date_part() doesn't even exist. The header catalog/pg_proc.h proves it. date_part() is an SQL wrapper around real C functions: timestamp[tz]_part(), time[tz]_part() and interval_part(). However, I have another problem. I have this in the code:

HeapTupleHeader t;
Datum timest;
bool isnull;

t = PG_GETARG_HEAPTUPLEHEADER(0);
timest = DatumGetTimestamp(GetAttributeByName(t, "ts_today", &isnull));
elog(NOTICE, "DatumGetTimestamp() OK, value is %s", isnull ? "NULL" : "NOT NULL");
if (isnull)
    PG_RETURN_BOOL(false);
yeardatum = CStringGetDatum("year");
elog(NOTICE, "CStringGetDatum() 1 OK");
returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest);
elog(NOTICE, "date_part() 1 OK");
year = DatumGetFloat8(returndatum);
elog(NOTICE, "conversion 1 OK");
...

But I get this:

NOTICE: PG_GETARG OK
NOTICE: DatumGetTimestamp() OK, value is NOT NULL
NOTICE: CStringGetDatum() 1 OK
ERROR: invalid memory alloc request size 1951613700

So DirectFunctionCall2() fails. How can I fix it? Best regards, Zoltán
Re: [GENERAL] Internal function call from C-language function
On Thu, Dec 07, 2006 at 12:55:47PM +0100, Zoltan Boszormenyi wrote: However, I have another problem. I have this in the code: [snip] yeardatum = CStringGetDatum("year"); elog(NOTICE, "CStringGetDatum() 1 OK"); returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest); You're passing a cstring as the first argument, whereas I'm fairly sure you should be passing text. When calling from C there's no argument checking. I think what you're looking for is: yeardatum = text_in(year); Or something like that. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Re: [GENERAL] Tsearch2 / PG 8.2 Which stemmer files?
On 07.12.2006 12:42, Hannes Dorbath wrote: Which stemmer files is one supposed to use with 8.2 Tsearch2? Found an answer myself. Seems I need: http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/tsearch_snowball_82.gz -- Regards, Hannes Dorbath
Re: [GENERAL] Online index builds
On Wed, 2006-12-06 at 18:22 -0800, Jeff Davis wrote: On Wed, 2006-12-06 at 15:00 -0500, Chris Browne wrote: Let me add another question to this; this might possibly be worthy of a TODO for 8.3 or so... What if I wanted to: ALTER TABLE distributors ADD PRIMARY KEY CONCURRENTLY (dist_id); Interesting, I was just thinking about this today as well. I am thinking it would be nice if we could: ALTER TABLE SET PRIMARY KEY INDEX foo_pkey; If it's already got a primary key we switch the primary key to be the new primary key (throwing an error if the columns don't match up to the existing primary key; not sure what you mean by this, or what if the index is not unique? The columns must also be NOT NULL). If not, the primary key attribute is added to the existing index and the columns in the index now make up the primary key (throwing an error if the index is not unique). What about existing foreign key constraints? As the only function of the PRIMARY KEY property of an index is making it the default target of a foreign key reference, you would have to decide what implications this has. Possibly none, as I am not sure the foreign key constraint remembers whether the target was a primary key or not. Also, your proposed syntax muddies the relationship between the PRIMARY KEY constraint and the existence of an INDEX. There is no such relationship in the SQL standards. Possibly more appropriate would be ALTER TABLE SET PRIMARY KEY (columns) with an error issued if no UNIQUE NOT NULL index is found on the relevant columns. One other question is what should happen to the original index that was implicitly created. Should it be dropped automatically? gnari
Re: [GENERAL] Internal function call from C-language function
Martijn van Oosterhout wrote: On Thu, Dec 07, 2006 at 12:55:47PM +0100, Zoltan Boszormenyi wrote: However, I have another problem. I have this in the code: [snip] yeardatum = CStringGetDatum("year"); elog(NOTICE, "CStringGetDatum() 1 OK"); returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest); You're passing a cstring as the first argument, whereas I'm fairly sure you should be passing text. When calling from C there's no argument checking. I think what you're looking for is: yeardatum = text_in(year); Or something like that. Hope this helps, text_in() doesn't exist, it's textin(), but I have to call it through DirectFunctionCall1(), like this: yeardatum = DirectFunctionCall1(textin, CStringGetDatum("year")); However, the session crashes on the subsequent returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest); Best regards, Zoltán
Re: [GENERAL] Internal function call from C-language function
On Thu, Dec 07, 2006 at 01:40:22PM +0100, Zoltan Boszormenyi wrote: text_in() doesn't exist, it's textin(), but I have to call it through DirectFunctionCall1(), like this: yeardatum = DirectFunctionCall1(textin, CStringGetDatum("year")); However, the session crashes on the subsequent returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest); It would be a good idea to actually find out where it crashes; that would help you work out what the actual problem is. Just looking at the code you posted, I only see this other bit that looks a bit suspect: Datum timest; bool isnull; t = PG_GETARG_HEAPTUPLEHEADER(0); timest = DatumGetTimestamp(GetAttributeByName(t, "ts_today", &isnull)); You're calling DatumGetTimestamp(), which would return a Timestamp (probably some structure), but you're storing it in a Datum. Just take the result of GetAttributeByName directly. Get at least a backtrace next time it crashes... Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
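Putting Martijn's two observations together, the call sequence inside the function might look like this (a hedged sketch against the 8.x fmgr API, compilable only inside a PostgreSQL source/extension build; the function name, "ts_today" attribute, and the final return expression are placeholders from the thread):

```c
#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"   /* GetAttributeByName */
#include "utils/builtins.h"      /* textin */
#include "utils/timestamp.h"     /* timestamp_part */

PG_FUNCTION_INFO_V1(my_func);

Datum
my_func(PG_FUNCTION_ARGS)
{
    HeapTupleHeader t = PG_GETARG_HEAPTUPLEHEADER(0);
    bool            isnull;
    Datum           timest, yeardatum;
    float8          year;

    /* Keep the attribute as a Datum; running it through
     * DatumGetTimestamp() and back corrupts the pass-by-reference value
     * (hence the "invalid memory alloc request" error). */
    timest = GetAttributeByName(t, "ts_today", &isnull);
    if (isnull)
        PG_RETURN_BOOL(false);

    /* timestamp_part() takes (text, timestamp), not a cstring,
     * so build a text datum first. */
    yeardatum = DirectFunctionCall1(textin, CStringGetDatum("year"));
    year = DatumGetFloat8(
        DirectFunctionCall2(timestamp_part, yeardatum, timest));

    PG_RETURN_BOOL(year >= 2006);   /* placeholder use of the result */
}
```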
Re: [GENERAL] Tsearch2 / PG 8.2 Which stemmer files?
Hannes, please download patch tsearch_snowball_82.gz http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ which updates the API to snowball. Oleg On Thu, 7 Dec 2006, Hannes Dorbath wrote: Which stemmer files is one supposed to use with 8.2 Tsearch2? Trying to compile the output from Gendict with: stem_UTF_8_german.c stem_UTF_8_german.h from: http://snowball.tartarus.org/dist/libstemmer_c.tgz gives: http://hannes.imos.net/make.txt Thanks! Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83
Re: [GENERAL] VACUUM and transactions in different databases
In response to Christopher Browne [EMAIL PROTECTED]: Oops! [EMAIL PROTECTED] (Cornelia Boenigk) was seen spray-painting on a wall: Hi all If I have a running transaction in database1 and try to vacuum database2, the dead tuples in database2 cannot be removed. INFO: vacuuming public.dummy1 INFO: dummy1: found 0 removable, 14 nonremovable row versions in 1341 pages DETAIL: 135000 dead row versions cannot be removed yet. How can I achieve that database2 is vacuumed while a transaction in database1 is not yet committed? You can't, unless you're on 8.1, and the not-yet-committed transaction is VACUUM. I'm a little confused. First off, it would seem as if this is completely eliminated in 8.2, as I tested a scenario involving an idle transaction in one database, and both vacuum and vacuum full were able to complete in another database without the first transaction completing. Are you saying that in 8.1, there is a single exception to this, which is that if db1 (for example) is in the process of running vacuum, it won't block db2 from vacuuming? But that any other type of transaction can block operations in other databases? -- Bill Moran Collaborative Fusion Inc.
Re: [GENERAL] VACUUM and transactions in different databases
Bill Moran wrote: In response to Christopher Browne [EMAIL PROTECTED]: Oops! [EMAIL PROTECTED] (Cornelia Boenigk) was seen spray-painting on a wall: Hi all If I have a running transaction in database1 and try to vacuum database2, the dead tuples in database2 cannot be removed. INFO: vacuuming public.dummy1 INFO: dummy1: found 0 removable, 14 nonremovable row versions in 1341 pages DETAIL: 135000 dead row versions cannot be removed yet. How can I achieve that database2 is vacuumed while a transaction in database1 is not yet committed? You can't, unless you're on 8.1, and the not-yet-committed transaction is VACUUM. I'm a little confused. First off, it would seem as if this is completely eliminated in 8.2, as I tested a scenario involving an idle transaction in one database, and both vacuum and vacuum full were able to complete in another database without the first transaction completing. Of course they are able to complete, but the point is that they would not remove the tuples that would be visible to that idle open transaction. Are you saying that in 8.1, there is a single exception to this, which is that if db1 (for example) is in the process of running vacuum, it won't block db2 from vacuuming? But that any other type of transaction can block operations in other databases? In 8.2, a process running lazy vacuum (but not vacuum full) will not interfere with another process running vacuum, i.e., the second vacuum will be able to remove the tuples even if they would be seen by the transaction doing the first vacuum -- regardless of the database to which any of them is connected (i.e., it may be the same database or different databases). I don't remember if this was in 8.1 or was introduced in 8.2.
-- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Re: [GENERAL] Tsearch2 / PG 8.2 Which stemmer files?
Thank you Oleg. I have a bit more trouble migrating from 8.1.5 TSearch2 + Gin/UTF-8 to PG 8.2. First I tried to use existing dict and affix files, which triggered that oldFormat condition. So I tried to start from scratch. The thing I can't get to work is compound word support for German again. What I did:

1. OpenOffice Dictionary from http://j3e.de/hunspell/de_DE.zip
2. extract de_DE.dic
3. Run compound.pl on de_DE.dic
4. Put modified de_DE.dic back in the zip, run my2ispell on them
5. Convert both to UTF-8

Do I need to hack compound.pl to do something different, as the affix format changed? I'd really appreciate any hint. Thanks! On 07.12.2006 14:52, Oleg Bartunov wrote: please download patch tsearch_snowball_82.gz http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/ which updates the API to snowball. -- Regards, Hannes Dorbath
Re: [GENERAL] Why does explain differ from explain analyze?
Hi Tom, On Wed, 2006-12-06 at 14:48 -0500, Tom Lane wrote: Joost Kraaijeveld [EMAIL PROTECTED] writes: I have a query that if I do explain shows an other plan than if I do explain analyze with that same query (immediately after the explain). Really? What PG version is this? Can you provide a self-contained test case? Yes, really ;-). PostgreSQL 8.1.5 on Windows XP. Alas, no self contained test case, I do have the query, the explain and the explain analyse output. Sorry about the long text. SELECT * FROM deliver_cares t0_$deliver_cares LEFT OUTER JOIN cares t1_$deliver_cares_care ON t0_$deliver_cares.care_id=t1_$deliver_cares_care.care_id LEFT OUTER JOIN components t2_$deliver_cares_care_component ON t1_$deliver_cares_care.component_id=t2_$deliver_cares_care_component.component_id LEFT OUTER JOIN indication_functions t3_$deliver_cares_care_indicatio ON t1_$deliver_cares_care.indication_function_id=t3_$deliver_cares_care_indicatio.indication_function_id LEFT OUTER JOIN indications t4_$deliver_cares_care_indicatio ON t3_$deliver_cares_care_indicatio.indication_id=t4_$deliver_cares_care_indicatio.indication_id LEFT OUTER JOIN patients t5_$deliver_cares_care_indicatio ON t4_$deliver_cares_care_indicatio.patient_id=t5_$deliver_cares_care_indicatio.patient_id LEFT OUTER JOIN org_personnels t6_$deliver_cares_registeredOrgP ON t0_$deliver_cares.registered_org_personnel_id=t6_$deliver_cares_registeredOrgP.org_personnel_id LEFT OUTER JOIN org_personnels t7_$deliver_cares_assignedOrgPer ON t0_$deliver_cares.assigned_org_personnel_id=t7_$deliver_cares_assignedOrgPer.org_personnel_id, timeframes t1_pdam__$deliver_cares_timeframe WHERE ((((((((( (t4_$deliver_cares_care_indicatio.patient_id = 21) -- pdam AND (t0_$deliver_cares.deliver_date = current_date - 200) -- pdam ) AND (t0_$deliver_cares.deliver_date current_date) -- pdam ) AND (t0_$deliver_cares.timeframe_id = 45) -- pdam ) AND (t1_$deliver_cares_care.workers_number = 1) -- pdam ) AND (t0_$deliver_cares.status = 'P') -- 
pdam ) AND (t1_$deliver_cares_care.status 'S') -- pdam ) AND (t0_$deliver_cares.assigned_org_personnel_id IS NULL) ) AND t0_$deliver_cares.care_id=t1_$deliver_cares_care.care_id AND t1_$deliver_cares_care.indication_function_id=t3_$deliver_cares_care_indicatio.indication_function_id AND t3_$deliver_cares_care_indicatio.indication_id=t4_$deliver_cares_care_indicatio.indication_id )) AND t0_$deliver_cares.timeframe_id=t1_pdam__$deliver_cares_timeframe.timeframe_id ORDER BY t0_$deliver_cares.deliver_date ASC , t1_pdam__$deliver_cares_timeframe.start_time ASC explain: -- Sort (cost=38222.31..38222.49 rows=71 width=3089) Sort Key: t0_$deliver_cares.deliver_date, t1_pdam__$deliver_cares_timeframe.start_time - Nested Loop (cost=1457.34..38220.13 rows=71 width=3089) - Seq Scan on timeframes t1_pdam__$deliver_cares_timeframe (cost=0.00..1.25 rows=1 width=126) Filter: (timeframe_id = 45) - Nested Loop Left Join (cost=1457.34..38218.17 rows=71 width=2963) - Nested Loop Left Join (cost=1457.34..37803.85 rows=71 width=2175) - Hash Left Join (cost=1457.34..37389.53 rows=71 width=1387) Hash Cond: (outer.patient_id = inner.patient_id) - Hash Join (cost=1451.54..37383.36 rows=71 width=486) Hash Cond: (outer.indication_function_id = inner.indication_function_id) - Hash Left Join (cost=1438.21..37258.40 rows=22198 width=323) Hash Cond: (outer.component_id = inner.component_id) - Hash Join (cost=1409.91..36897.13 rows=22198 width=233) Hash Cond: (outer.care_id = inner.care_id) - Bitmap Heap Scan on deliver_cares t0_$deliver_cares (cost=1249.99..36124.88 rows=39035 width=105) Recheck Cond: (timeframe_id = 45) Filter:
[GENERAL] partition insert question
Hello - I find myself trying to find a way to have the table name used with the insert command be generated on the fly in a rule. For example, consider the following: create table foobars ( id bigserial, created_at timestamp not null, name ); create table foobars_200612 ( check (created_at >= timestamp '2006-12-01 00:00:00' and created_at < timestamp '2007-01-01 00:00:00') ) inherits (foobars); create table foobars_200701 ( check (created_at >= timestamp '2007-01-01 00:00:00' and created_at < timestamp '2007-02-01 00:00:00') ) inherits (foobars); create rule foobars_insert as on insert to foobars do instead insert into (select 'foobars_' || extract(year from NEW.created_at) || extract(month from NEW.created_at)) (created_at,name) values (now(),'hello'); I realize that I could craft a list of many ON INSERT TO foobars WHERE xxx constructs, but am trying to be a bit more maintainable than having each of the where clauses hard-defined. Any suggestions? Thanks in advance - Marc ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] VACUUM and transactions in different databases
In response to Alvaro Herrera [EMAIL PROTECTED]: Bill Moran wrote: In response to Christopher Browne [EMAIL PROTECTED]: Oops! [EMAIL PROTECTED] (Cornelia Boenigk) was seen spray-painting on a wall: Hi all If I have a running transaction in database1 and try to vacuum database2 but the dead tuples in database2 cannot be removed. INFO: vacuuming public.dummy1 INFO: dummy1: found 0 removable, 14 nonremovable row versions in 1341 pages DETAIL: 135000 dead row versions cannot be removed yet. How can I achieve that database2 is vacuumed while a transaction in database1 is not yet commited? You can't, unless you're on 8.1, and the not-yet-committed transaction is VACUUM. I'm a little confused. First off, it would seem as if this is completely eliminated in 8.2, as I tested a scenario involving an idle transaction in one database, and both vacuum and vacuum full were able to complete in another database without completing the first transaction. Of course they are able to complete, but the point is that they would not remove the tuples that would be visible to that idle open transaction. I would expect that, but the OP claimed that vacuum full waited until the other transaction was finished. Are you saying that in 8.1, there is a single exception to this, which is that if db1 (for example) is in the process of running vacuum, it won't block db2 from vacuuming? But that any other type of transaction can block operations in other databases? In 8.2, a process running lazy vacuum (but not vacuum full) will not interfere with another process running vacuum, i.e., the second vacuum will be able to remove the tuples even if they would be seen by the transaction doing the first vacuum -- regardless of the database to which any of them is connected (i.e., it may be the same database or different databases). I don't remember if this was in 8.1 or was introduced in 8.2. So lazy vacuum never waits on transactions. 
Apparently (based on the OP) vacuum full _does_ wait on transactions in versions prior to 8.2, but based on my experiment, in 8.2 vacuum full no longer does. Of course, in any version, vacuum can't clean up tuples held by open transactions. At least, that's what it's looking like to me. -- Bill Moran Collaborative Fusion Inc. [EMAIL PROTECTED] Phone: 412-422-3463x4023 ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] pg_controldata output documentation
Is there any documentation on the output from pg_controldata? Most of it seems intuitive but I would like something definitive on the following lines: Latest checkpoint location: 2F9/B38DE758 Prior checkpoint location:2F9/A3F688F8 Latest checkpoint's REDO location:2F9/B38DE758 Latest checkpoint's UNDO location:0/0 Especially, how do I relate this to the file names in the pg_xlog directory. Thanks ... Andy PS - we're relative newbies but getting there -- Andrew Rost National Operational Hydrologic Remote Sensing Center (NOHRSC) National Weather Service, NOAA 1735 Lake Dr. West, Chanhassen, MN 55317-8582 Voice: (952)361-6610 x 234 Fax: (952)361-6634 [EMAIL PROTECTED] http://www.nohrsc.noaa.gov ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] PostgreSQL 8.2 on IA-64 : 2 regression tests FAILED
Hello List, I work on an IA-64 server with Red Hat Enterprise Linux 4 AS, and as the 8.2 release is now available, I am trying to generate RPMs for IA-64. To do that, I downloaded the postgresql-8.2.0-2PGDG.src.rpm, extracted the .spec file and these files: -bash-3.00$ cd SOURCES/ -bash-3.00$ ls -ltr total 24176 -rw-r--r-- 1 postgres postgres 12459207 Dec 2 20:25 postgresql-8.2.0.tar.bz2 -rw-r--r-- 1 postgres postgres 897 Dec 2 20:28 pg_config.h -rw-r--r-- 1 postgres postgres 1539 Dec 2 20:28 Makefile.regress -rwxr-xr-x 1 postgres postgres 56 Dec 2 20:28 filter-requires-perl-Pg.sh -rw-r--r-- 1 postgres postgres 1631 Dec 2 20:28 postgresql-logging.patch -rw-r--r-- 1 postgres postgres 7529 Dec 2 20:28 postgresql.init -rw-r--r-- 1 postgres postgres 85 Dec 2 20:28 postgresql-bashprofile -rw-r--r-- 1 postgres postgres 1757 Dec 2 20:28 rpm-pgsql.patch -rw-r--r-- 1 postgres postgres 15598 Dec 2 20:28 README.rpm-dist -rw-r--r-- 1 postgres postgres 2563 Dec 2 20:28 postgresql-test.patch -rw-r--r-- 1 postgres postgres 919 Dec 2 20:28 postgresql-perl-rpath.patch -rw-r--r-- 1 postgres postgres 141 Dec 2 20:28 postgresql.pam -rw-r--r-- 1 postgres postgres 12198114 Dec 6 17:18 postgresql-8.2.0-2PGDG.src.rpm Then, as the postgres user, I launched the command: rpmbuild -ba postgresql-8.2.spec With this, all the RPMs were created under /RPMS/ia64. I then ran the regression tests, and 2 of them failed: test create_function_1... FAILED ... ... ... triggers ... FAILED I see at http://archives.postgresql.org/pgsql-ports/2006-11/msg00011.php that this problem already appeared on 64-bit platforms, but I am not sure that it is the same problem... In the file regression.diff, my first problem seems to be linked with the function autoinc(): *** ./expected/create_function_1.out Thu Dec 7 14:37:28 2006 --- ./results/create_function_1.out Thu Dec 7 14:41:21 2006 *** *** 25,40 NOTICE: argument type city_budget is only a shell CREATE FUNCTION check_primary_key () RETURNS trigger !
AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so LANGUAGE C; CREATE FUNCTION check_foreign_key () RETURNS trigger ! AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so LANGUAGE C; CREATE FUNCTION autoinc () RETURNS trigger ! AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so LANGUAGE C; CREATE FUNCTION funny_dup17 () RETURNS trigger AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/regress.so' --- 25,41 NOTICE: argument type city_budget is only a shell CREATE FUNCTION check_primary_key () RETURNS trigger ! AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so' LANGUAGE C; CREATE FUNCTION check_foreign_key () RETURNS trigger ! AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so' LANGUAGE C; CREATE FUNCTION autoinc () RETURNS trigger ! AS '/SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so' LANGUAGE C; *+ ERROR: could not find function autoinc in file /SLONY_PGS/PostgreSQL_8.2.0/BUILD/postgresql-8.2.0/src/test/regress/../../../contrib/spi/refint.so* I checked the file contrib/spi/refint.c : it does not contain a function called autoinc... The two others errors reported in regression.diff are : delete from tttest where price_id = 2; select * from tttest; price_id | price_val | price_on | price_off --+---+--+--- ! 1 | 1 | 10 |99 ! 3 | 3 | 30 |99 ! 2 | 2 | 20 |40 (3 rows) -- what do we see ? --- 150,175 for each row execute procedure autoinc (price_on, ttdummy_seq); *+ ERROR: function autoinc() does not exist* insert into tttest values (1, 1, null); insert into tttest values (2, 2, null); insert into tttest values (3, 3, 0); select * from tttest; price_id | price_val | price_on | price_off --+---+--+--- ! 1 | 1 | |99 ! 2 | 2 | |99 ! 
3 | 3 |0 |99 (3 rows) delete from tttest where price_id = 2; *+ ERROR: ttdummy (tttest): price_on must be NOT NULL* select * from tttest; price_id | price_val | price_on | price_off --+---+--+--- ! 1 | 1 | |99 ! 2 | 2 | |99 ! 3 | 3 |0 |99 (3 rows) -- what do we see ? Any suggestion would be appreciated... Thank you
Re: [GENERAL] Why does explain differ from explan analyze?
Joost Kraaijeveld [EMAIL PROTECTED] writes: On Wed, 2006-12-06 at 14:48 -0500, Tom Lane wrote: Joost Kraaijeveld [EMAIL PROTECTED] writes: I have a query that if I do explain shows an other plan than if I do explain analyze with that same query (immediately after the explain). Really? What PG version is this? Can you provide a self-contained test case? Alas, no self contained test case, I do have the query, the explain and the explain analyse output. Sorry about the long text. Well, the answer is that these aren't the same query. For instance you've got a change in estimated rowcount for cares: - Seq Scan on cares t1_$deliver_cares_care (cost=0.00..152.40 rows=3010 width=128) Filter: ((workers_number = 1) AND ((status)::text <> 'S'::text)) - Index Scan using idx_cares7 on cares t1_$deliver_cares_care (cost=0.00..176.05 rows=5147 width=128) (actual time=0.111..8.450 rows=4425 loops=1) Filter: ((workers_number = 1) AND ((status)::text <> 'P'::text)) and other changes elsewhere, all apparently due to using slightly different constraint values. That will (and should) affect the plan. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] partition insert question
On Thu, 7 Dec 2006, Marc Evans wrote: Hello - I find myself trying to find a way to have the table name used with the insert command be generated on the fly in a rule. For example, consider the following: create table foobars ( id bigserial, created_at timestamp not null, name ); create table foobars_200612 ( check (created_at >= timestamp '2006-12-01 00:00:00' and created_at < timestamp '2007-01-01 00:00:00') ) inherits (foobars); create table foobars_200701 ( check (created_at >= timestamp '2007-01-01 00:00:00' and created_at < timestamp '2007-02-01 00:00:00') ) inherits (foobars); create rule foobars_insert as on insert to foobars do instead insert into (select 'foobars_' || extract(year from NEW.created_at) || extract(month from NEW.created_at)) (created_at,name) values (now(),'hello'); I realize that I could craft a list of many ON INSERT TO foobars WHERE xxx constructs, but am trying to be a bit more maintainable than having each of the where clauses hard-defined. Any suggestions? Thanks in advance - Marc To answer my own question, I have found this to work. If anyone has suggestions for improvements, please let me know. create or replace function foo_insert(TIMESTAMP,TEXT) returns void as $$ begin execute 'insert into foobars_' || (select extract(year from $1) || extract(month from $1)) || ' (created_at,name) values (\'' || $1 || '\',\'' || $2 || '\')'; end; $$ language plpgsql; create rule foobars_insert as on insert to foobars do instead select foo_insert(NEW.created_at,NEW.name); - Marc ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
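Since the poster invites improvements: two details in the function above are worth flagging. extract(month from ...) is not zero-padded, so a January timestamp would produce a table name like foobars_20071 rather than foobars_200701, and splicing the values into the string by hand invites quoting problems. A possible revision (a sketch only, reusing the poster's names) uses to_char() for the suffix and quote_literal() for the values:

```sql
create or replace function foo_insert(ts timestamp, nm text) returns void as $$
begin
    -- to_char gives a fixed-width, zero-padded YYYYMM suffix,
    -- so January 2007 maps to foobars_200701 as intended.
    execute 'insert into foobars_' || to_char(ts, 'YYYYMM')
        || ' (created_at, name) values ('
        || quote_literal(ts::text) || ', ' || quote_literal(nm) || ')';
end;
$$ language plpgsql;

create rule foobars_insert as on insert to foobars
    do instead select foo_insert(NEW.created_at, NEW.name);
```

The EXECUTE is still needed because plpgsql cannot parameterize a table name; only the string building changes.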
Re: [GENERAL] how to install 8.2 with yum on centos 4?
Devrim GUNDUZ wrote: On Wed, 2006-12-06 at 23:32 -0800, [EMAIL PROTECTED] wrote: I got installed on my windows box and my debian box. But is there a way to install 8.2 on centos 4 using yum? I am about to create a yum repo for PGDG RPM sets. Clodoaldo Pinto Neto sent me the instructions on how to create one. I'll post to the list when it is ready. Very cool! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] DBI module for postgres 1.4.3
On Dec 7, 2006, at 6:00 , Albe Laurenz wrote: Jasbinder Singh Bali wrote: Whats the difference between a module and a bundle as i can see while downloading DBI from CPAN website. What exactly needs to be downloaded . I'm kind of not sure about it. I have never heard of a 'bundle' in context with Perl but that may be because I know little about Perl. DBI can be got from http://www.cpan.org/modules/by-module/DBI/DBI-1.53.tar.gz and DBD-Pg is on http://www.cpan.org/modules/by-module/DBD/DBD-Pg-1.49.tar.gz A bundle is merely a collection of modules that are often used together. For example, installing Bundle::DBI installs a bunch of DBI drivers and tools. http://search.cpan.org/~timb/DBI-1.53/lib/Bundle/DBI.pm -M ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Cast boolean to text
Willy-Bas Loos [EMAIL PROTECTED] writes: I've noticed that postgresql 8.1.0 can cast a boolean to text, but version 8.1.5 CAN'T. Better check again --- there has never been a standard cast from bool to text. Sure you didn't install a custom one in your 8.1.0 database? regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
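For anyone who does want such a cast, it can be created by hand, which may well be how that 8.1.0 database got one. A sketch (the helper function name booltext is made up here, not a built-in):

```sql
-- A hand-rolled helper that renders a boolean as text.
create function booltext(boolean) returns text as
  $$ select case when $1 then 'true' else 'false' end $$
  language sql immutable strict;

-- Register it as an assignment cast from boolean to text.
create cast (boolean as text) with function booltext(boolean) as assignment;
```

After this, expressions such as somebool::text work in that database, which is exactly the kind of custom cast Tom suggests checking for.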
[GENERAL] Performance figures from DbMail list
The following appeared this afternoon on the DbMail list. As someone replied the MySql used is old, and the newer one is faster, but then 8.2 is faster than the older Postgresql versions. This was posted by:- Justin McAleer [EMAIL PROTECTED] I figured I would go ahead and toss this out for anybody that may be interested, since I was so shocked by the results. I have two servers set up for testing, one running postfix/dbmail and one running the database servers. The database machine is a dual core AMD (4400+ I believe) with 4 gigs of memory, with the database files living on a fiber connected Apple SAN (XRaid). I have dbmail compiled with mysql and pgsql, so all I need to do to switch between the two is change the driver in the conf file and restart. I'm using dbmail-lmtpd running on a unix socket. Finally, I have the postfix delivery concurrency set to 5. For mysql, I'm using a 4GB InnoDB sample config that comes in the CentOS rpm (increased the buffer pool to 2.5 gigs though). Version is 4.1.20. For postgres, I'm using the default variables except for increasing the shared buffers to 256MB, setting effective cache size to 3 GB, and random page cost to 2. Version is 8.1.4. I've sent a good amount of real mail to each setup as well, but for quantifiable results I have a perl script that sends gibberish of a configurable size (3kb here) to a single recipient. Since we're inserting into a DB, the recipient of the messages should have no bearing on delivery performance, barring postfix concurrency. For the test, I sent one batch of mail through so postfix would already have a full lmtp connection pool when I began the real test. I had 10 perl processes each sending 100 messages as fast as postfix would accept them, for a total of 1000 3KB messages. Results... Mysql: 95 seconds to deliver all 1000 messages. Both cores on the DB server were effectively peaked during delivery. Postgres: 10 seconds to deliver all 1000 messages. 
DBMail was really close to being able to deliver as fast as postfix could queue to local disk (within a second or two for 1000th message). The cores on the DB server looked to average around 45%/30% usage during delivery. The CPU usage is just based on watching top output, so keep that in mind... however with such a huge variance, even eyeballing it I'm confident in reporting it. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
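For reference, the Postgres settings described above would correspond roughly to the following postgresql.conf lines (a reconstruction, not the poster's actual file; in 8.1 these values are counted in 8 kB pages rather than memory units):

```ini
shared_buffers = 32768          # 32768 x 8kB = 256MB
effective_cache_size = 393216   # 393216 x 8kB = 3GB
random_page_cost = 2
```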
Re: [GENERAL] Internal function call from C-language function
Hi, Martijn van Oosterhout wrote: On Thu, Dec 07, 2006 at 01:40:22PM +0100, Zoltan Boszormenyi wrote: text_in() doesn't exist, it's textin() but I have to call it through DirectFunctionCall1(), like this: yeardatum = DirectFunctionCall1(textin, CStringGetDatum(year)); However, the session crashes on the subsequent returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest); It would be a good idea to actually find out where it crashes, that would help you work out what the actual problem is. Just looking at the code you posted, I only see this other bit that looks a bit suspect: Datum timest; bool isnull; t = PG_GETARG_HEAPTUPLEHEADER(0); timest = DatumGetTimestamp(GetAttributeByName(t, "ts_today", &isnull)); You're calling DatumGetTimestamp() which would return a timestamp (probably some structure) but you're storing it in a Datum. Just take the result of GetAttributeByName directly. Thanks, that worked for me. I have just one more question: How can I get an Oid out of a Datum, i.e. how do I know what type I get in a given Datum? DatumGetObjectId() seems to give me an Oid that was specifically stored as a Datum. The function I am working on is made for an INSERT RULE, something like this: CREATE OR REPLACE FUNCTION myfunc( row1 table1 ) RETURNS BOOL VOLATILE LANGUAGE C AS 'myfunc.so', 'myfunc'; CREATE RULE rule_table1_insert AS ON INSERT TO table1 DO INSTEAD (SELECT myfunc( new ) ); So I get the to-be-inserted row in my function. In the function, depending on the passed-in values I need to insert into some other table. To do it, I need to use SPI_prepare() which needs the list of Oids. Get at least a backtrace next time it crashes... And how exactly can I do that? psql only reports that the backend crashed and was unable to reset the connection. At that time the backend session is already gone, isn't it? Have a nice day, Thanks, to you, too. You helped a lot. Best regards, Zoltán ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] VACUUM and transactions in different databases
Bill Moran [EMAIL PROTECTED] writes: In response to Alvaro Herrera [EMAIL PROTECTED]: Of course they are able to complete, but the point is that they would not remove the tuples that would be visible to that idle open transaction. I would expect that, but the OP claimed that vacuum full waited until the other transaction was finished. No, she didn't claim that. As far as I see she was just complaining about the failure to remove dead tuples: If I have a running transaction in database1 and try to vacuum database2 but the dead tuples in database2 cannot be removed. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] VACUUM and transactions in different databases
On fim, 2006-12-07 at 11:28 -0500, Tom Lane wrote: Bill Moran [EMAIL PROTECTED] writes: In response to Alvaro Herrera [EMAIL PROTECTED]: Of course they are able to complete, but the point is that they would not remove the tuples that would be visible to that idle open transaction. I would expect that, but the OP claimed that vacuum full waited until the other transaction was finished. No, she didn't claim that. As far as I see she was just complaining about the failure to remove dead tuples: If I have a running transaction in database1 and try to vacuum database2 but the dead tuples in database2 cannot be removed. well actually, there was also this: On fim, 2006-12-07 at 00:57 +0100, Cornelia Boenigk wrote: Hi Bill Can you run a vacuum full, and does it reclaim the space? I tried but it hangs. and also this: On fim, 2006-12-07 at 01:03 +0100, Cornelia Boenigk wrote: as soon as I committed the open transaction the hanging vacuum full completed and the table was vacuumed: gnari ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] VACUUM and transactions in different databases
In response to Tom Lane [EMAIL PROTECTED]: Bill Moran [EMAIL PROTECTED] writes: In response to Alvaro Herrera [EMAIL PROTECTED]: Of course they are able to complete, but the point is that they would not remove the tuples that would be visible to that idle open transaction. I would expect that, but the OP claimed that vacuum full waited until the other transaction was finished. No, she didn't claim that. As far as I see she was just complaining about the failure to remove dead tuples: If I have a running transaction in database1 and try to vacuum database2 but the dead tuples in database2 cannot be removed. Yes, but a later mail in the thread read: Can you run a vacuum full, and does it reclaim the space? I tried but it hangs. [EMAIL PROTECTED] ~]# ps axw|grep postgres 1746 ?S 0:00 postgres: writer process 1747 ?S 0:00 postgres: stats buffer process 1748 ?S 0:00 postgres: stats collector process 2106 pts/1S 0:00 su postgres 2120 pts/1S+ 0:00 psql postgres 2188 ?S 0:04 postgres: postgres dummy1 [local] VACUUM waiting 2200 pts/3S 0:00 su postgres 2215 ?S 0:00 postgres: postgres dummy2 [local] idle in transaction 2717 pts/2R+ 0:00 grep postgres Admittedly, I had the (incorrect) idea that she might need a vacuum full to reclaim space that lazy vacuum couldn't. And, admittedly, this wasn't the point of the original post. -- Bill Moran Collaborative Fusion Inc. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] VACUUM and transactions in different databases
Bill Moran [EMAIL PROTECTED] writes: [EMAIL PROTECTED] ~]# ps axw|grep postgres 1746 ?S 0:00 postgres: writer process 1747 ?S 0:00 postgres: stats buffer process 1748 ?S 0:00 postgres: stats collector process 2106 pts/1S 0:00 su postgres 2120 pts/1S+ 0:00 psql postgres 2188 ?S 0:04 postgres: postgres dummy1 [local] VACUUM waiting 2200 pts/3S 0:00 su postgres 2215 ?S 0:00 postgres: postgres dummy2 [local] idle in transaction 2717 pts/2R+ 0:00 grep postgres Too bad this wasn't accompanied by a dump of pg_locks ... but if that's the only other open transaction, the only way I can see for it to block the vacuum is if the vacuum was database-wide, and had gotten to the point of trying to vacuum one of the shared catalogs (eg, pg_database), and the other transaction had some type of lock on that shared catalog. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
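For anyone reproducing this, the pg_locks dump Tom asks for can be captured with a query along these lines (column names procpid and current_query are the 8.1/8.2-era ones; this is a sketch, not an official recipe):

```sql
-- Show each lock, whether it is granted, and what the holding or
-- waiting backend is currently executing.
select l.locktype, l.relation::regclass, l.transactionid,
       l.mode, l.granted,
       a.procpid, a.datname, a.current_query
from pg_locks l
join pg_stat_activity a on a.procpid = l.pid
order by l.granted, a.procpid;
```

Rows with granted = false, paired against the granted row for the same lock, would have shown exactly which transaction the waiting VACUUM was stuck behind.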
Re: [GENERAL] Internal function call from C-language function
Hi, Zoltan Boszormenyi wrote: Hi, Martijn van Oosterhout wrote: On Thu, Dec 07, 2006 at 01:40:22PM +0100, Zoltan Boszormenyi wrote: text_in() doesn't exist, it's textin() but I have to call it through DirectFunctionCall1(), like this: yeardatum = DirectFunctionCall1(textin, CStringGetDatum(year)); However, the session crashes on the subsequent returndatum = DirectFunctionCall2(timestamp_part, yeardatum, timest); It would be a good idea to actually find out where it crashes, that would help you work out what the actual problem is. Just looking at the code you posted, I only see this other bit that looks a bit suspect: Datum timest; bool isnull; t = PG_GETARG_HEAPTUPLEHEADER(0); timest = DatumGetTimestamp(GetAttributeByName(t, "ts_today", &isnull)); You're calling DatumGetTimestamp() which would return a timestamp (probably some structure) but you're storing it in a Datum. Just take the result of GetAttributeByName directly. Thanks, that worked for me. I have just one more question: How can I get an Oid out of a Datum, i.e. how do I know what type I get in a given Datum? DatumGetObjectId() seems to give me an Oid that was specifically stored as a Datum. I have found an alternative solution. If t is a HeapTupleHeader then: Oid tupType; int32 tupTypmod; TupleDesc tupDesc; tupType = HeapTupleHeaderGetTypeId(t); tupTypmod = HeapTupleHeaderGetTypMod(t); tupDesc = lookup_rowtype_tupdesc(tupType, tupTypmod); will give me the needed TupleDesc and I can use SPI_gettypeid(). Thanks and best regards, Zoltán ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] pg_controldata output documentation
On Thu, Dec 07, 2006 at 08:54:22AM -0600, andy rost wrote: Is there any documentation on the output from pg_controldata? Most of it seems intuitive but I would like something definitive on the following lines: Latest checkpoint location: 2F9/B38DE758 Prior checkpoint location: 2F9/A3F688F8 Latest checkpoint's REDO location: 2F9/B38DE758 Latest checkpoint's UNDO location: 0/0 Especially, how do I relate this to the file names in the pg_xlog directory. http://www.postgresql.org/docs/8.2/static/functions-admin.html pg_xlogfile_name_offset() and pg_xlogfile_name() eg testdb=# select pg_xlogfile_name_offset('2F9/B38DE758'); pg_xlogfile_name_offset (000102F900B3,9299800) (1 row) testdb=# select pg_xlogfile_name('2F9/B38DE758'); pg_xlogfile_name -- 000102F900B3 (1 row) -- rgds Stephen ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Online index builds
On Thu, 2006-12-07 at 12:26 +0000, Ragnar wrote: On mið, 2006-12-06 at 18:22 -0800, Jeff Davis wrote: On Wed, 2006-12-06 at 15:00 -0500, Chris Browne wrote: Let me add another question to this; this might possibly be worthy of a TODO for 8.3 or so... What if I wanted to: ALTER TABLE distributors ADD PRIMARY KEY CONCURRENTLY (dist_id); Interesting, I was just thinking about this today as well. I am thinking it would be nice if we could: ALTER TABLE SET PRIMARY KEY INDEX foo_pkey; If it's already got a primary key we switch the primary key to be the new primary key (throwing an error if the columns don't match up to the existing primary key, Not sure what you mean by this. In my suggestion, if the table already has a primary key, then you can only set the primary key index to be an index with exactly the same columns as the existing primary key index. or if it's not unique). must also be NOT NULL Indexes can't be NOT NULL; NOT NULL is a constraint. You're right though, if it was a new primary key, the column must already have the NOT NULL constraint on it. If not, the primary key attribute is added to the existing index and the columns in the index now make up the primary key (throwing an error if the index is not unique). What about existing foreign key constraints? As the only function of the PRIMARY key property of an index is making it the default target of a foreign key reference, you would have to decide what implications this has. Possibly none, as I am not sure the foreign key constraint remembers if the target was a primary key or not. Doesn't matter. Foreign keys don't reference an index, they reference a set of attributes. I am just trying to provide an ability to change the underlying unique index that is used to implement the unique constraint that is necessary for all primary keys. also, your proposed syntax muddies the relationship between the PRIMARY KEY constraint and the existence of an INDEX. There is no such relationship in the SQL standards. 
The index is an important implementation detail of a primary key, because it is necessary to implement the UNIQUE constraint. Many PG DBAs need to reindex the primary key on a large table as part of regular maintenance. I am trying to provide a way to do this without locking out reads or writes, using the already-existing CREATE INDEX CONCURRENTLY. possibly more appropriate would be ALTER TABLE SET PRIMARY KEY (columns) and an error issued if no UNIQUE NOT NULL index is found on the relevant columns That doesn't solve the problem, because that doesn't allow you to choose the index that the primary key will use, which was the whole point of my suggestion. one other question is what should happen to the original index that was implicitly created. should it be dropped automatically? Good question. Either way should be fine, as long as it is documented. It should probably not be automatically dropped, but maybe issue a NOTICE, like when the index is implicitly created. Regards, Jeff Davis ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
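Until something like the proposed syntax exists, the concurrent rebuild discussed here only works for a plain unique index; as of 8.2 there is no way to swap a new index in under an existing PRIMARY KEY constraint. For the plain-index case the maintenance looks roughly like this (index and table names are made up):

```sql
-- Build a replacement index without blocking reads or writes.
create unique index concurrently users_email_key_new on users (email);

-- Swap it in: drop the bloated original, then take over its name.
-- (The drop and rename take brief exclusive locks.)
drop index users_email_key;
alter index users_email_key_new rename to users_email_key;
```

This is exactly the pattern the proposed ALTER TABLE SET PRIMARY KEY INDEX would extend to primary keys.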
Re: [GENERAL] how to install 8.2 with yum on centos 4?
Excellent, thanks Hi, On Wed, 2006-12-06 at 23:32 -0800, [EMAIL PROTECTED] wrote: I got installed on my windows box and my debian box. But is there a way to install 8.2 on centos 4 using yum? I am about to create a yum repo for PGDG RPM sets. Clodoaldo Pinto Neto sent me the instructions on how to create one. I'll post to the list when it is ready. Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Vote for your favorite database
http://linux.inet.hr/poll_favorite_database.html So far Firebird is in the lead :-( -- Tony Caduto AM Software Design http://www.amsoftwaredesign.com Home of PG Lightning Admin for Postgresql Your best bet for Postgresql Administration ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Vote for your favorite database
On Thu, 2006-12-07 at 11:59, Tony Caduto wrote: http://linux.inet.hr/poll_favorite_database.html So far Firebird is in the lead :-( Somebody just told their list earlier than anyone told us... or mysql's list. And Ingress has... 3 votes. Man, that's gotta hurt. Firebird is a great little dbms. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Vote for your favorite database
Scott Marlowe wrote: On Thu, 2006-12-07 at 11:59, Tony Caduto wrote: http://linux.inet.hr/poll_favorite_database.html So far Firebird is in the lead :-( Somebody just told their list earlier than anyone told us... or mysql's list. http://archives.postgresql.org/pgsql-general/2006-11/msg00072.php ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Vote for your favorite database
On Thu, 2006-12-07 at 12:05, Bricklen Anderson wrote: Scott Marlowe wrote: On Thu, 2006-12-07 at 11:59, Tony Caduto wrote: http://linux.inet.hr/poll_favorite_database.html So far Firebird is in the lead :-( Somebody just told their list earlier than anyone told us... or mysql's list. http://archives.postgresql.org/pgsql-general/2006-11/msg00072.php ouch! I stand corrected. Of course, I didn't register to vote either. Are there more firebird users in hungary (.hr right?) than postgresql maybe? I wonder. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Vote for your favorite database
On Thu, 2006-12-07 at 12:24 -0600, Scott Marlowe wrote: On Thu, 2006-12-07 at 12:05, Bricklen Anderson wrote: Scott Marlowe wrote: On Thu, 2006-12-07 at 11:59, Tony Caduto wrote: http://linux.inet.hr/poll_favorite_database.html So far Firebird is in the lead :-( Somebody just told their list earlier than anyone told us... or mysql's list. http://archives.postgresql.org/pgsql-general/2006-11/msg00072.php ouch! I stand corrected. Of course, I didn't register to vote either. Are there more firebird users in hungary (.hr right?) than postgresql maybe? I wonder. Well it looks like they have a following in Europe in general: Dateline Prague 12-NOV-2006 18:00 GMT The Firebird Project today officially released the much-anticipated version 2.0 of its open source Firebird relational database software during the opening session of the fourth international Firebird Conference in Prague, Czech Republic. Sincerely, Joshua D. Drake ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Why does explain differ from explan analyze?
On Thu, 2006-12-07 at 10:20 -0500, Tom Lane wrote: Well, the answer is that these aren't the same query. For instance You are right. I did not check the report thoroughly enough. Sorry. -- Groeten, Joost Kraaijeveld Askesis B.V. Molukkenstraat 14 6524NB Nijmegen tel: 024-3888063 / 06-51855277 fax: 024-3608416 web: www.askesis.nl ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] VACUUM and transactions in different databases
Hi Sorry, I was out The first try was: create database dummy1; create table dummy ... and filled with 5000 records create database dummy2; create table dummy ... and filled with 5000 records connecting to dummy1, opening a transaction and issued an update begin; update dummy set f1='achterbahn'; then opened a second console and connected to dummy2: dummy2=# select count(*) from dummy; count --- 5000 (1 row) dummy2=# SELECT relpages, reltuples FROM pg_class WHERE relname ='dummy'; relpages | reltuples --+--- 160 | 5000 (1 row) updated the table several times - to generate dead tuples: dummy2=# SELECT relpages, reltuples FROM pg_class WHERE relname ='dummy'; relpages | reltuples --+--- 326 | 3 (1 row) dummy2=# vacuum; VACUUM dummy2=# SELECT relpages, reltuples FROM pg_class WHERE relname ='dummy'; relpages | reltuples --+--- 326 | 3 (1 row) dummy2=# select count(*) from dummy; count --- 5000 (1 row) dummy2=# vacuum full; vacuum was in waiting state as long as the transaction in dummy1 was open. After committing the transaction the vacuum full was carried out. --- VACUUM dummy2=# SELECT relpages, reltuples FROM pg_class WHERE relname ='dummy'; relpages | reltuples --+--- 326 | 3 (1 row) running on pg 8.1.4 on Fedora 5 Thanks Conni ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Vote for your favorite database
Are there more firebird users in hungary (.hr right?) than postgresql maybe? I wonder. Well it looks like they have a following in Europe in general: Dateline Prague 12-NOV-2006 18:00 GMT The Firebird Project today officially released the much-anticipated version 2.0 of its open source Firebird relational database software during the opening session of the fourth international Firebird Conference in Prague, Czech Republic. one more reason to have a pgsql 8.2 release party over here in europe as well :-) - thomas ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Vote for your favorite database
Scott Marlowe írta: ouch! I stand corrected. Of course, I didn't register to vote either. Are there more firebird users in hungary (.hr right?) than postgresql maybe? I wonder. Then stand a bit longer. :-) Hungary is .hu, .hr is for Hrvatska which is Croatia for English speakers. Best regards, Zoltán Böszörményi ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Online index builds
On fim, 2006-12-07 at 09:27 -0800, Jeff Davis wrote: On Thu, 2006-12-07 at 12:26 +, Ragnar wrote: On mið, 2006-12-06 at 18:22 -0800, Jeff Davis wrote: Interesting, I was just thinking about this today as well. I am thinking it would be nice if we could: ALTER TABLE SET PRIMARY KEY INDEX foo_pkey; If it's already got a primary key we switch the primary key to be the new primary key (throwing an error if the columns don't match up to the existing primary key, not sure what you mean by this In my suggestion, if the table already has a primary key, then you can only set the primary key index to be an index with exactly the same columns as the existing primary key index. Why would you do that? I saw the use-case of when you have a primary key and a surrogate key, and decided you wanted the surrogate key to be the primary key after all, maybe because the natural key you had used turned out not to be a good candidate. or if it's not unique). must also be NOT NULL Indexes can't be NOT NULL; NOT NULL is a constraint. Sorry, I got confused by the UNIQUE in the create index syntax: CREATE [ UNIQUE ] INDEX name ON table [ USING method ] ( { column | ( expression ) } [ opclass ] [, ...] ) [ TABLESPACE tablespace ] [ WHERE predicate ] ... What about existing foreign key constraints? as the only function of the PRIMARY key property of an index is making it the default target of a foreign key reference, you would have to decide what implications this has. Possibly none, as I am not sure the foreign key constraint remembers if the target was a primary key or not. Doesn't matter. Foreign keys don't reference an index, they reference a set of attributes. I am just trying to provide an ability to change the underlying unique index that is used to implement the unique constraint that is necessary for all primary keys. I was still imagining here that you would want a different set of attributes from your primary key. 
gnari ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] VACUUM and transactions in different databases
On fim, 2006-12-07 at 20:04 +0100, Cornelia Boenigk wrote: Sorry, i was out [ snip demonstration of blocked vacuum full] running on pg 8.1.4 on Fedora 5 could not duplicate this. can you show us the contents of pg_locks and pg_stat_activity while the VACUUM is blocked? gnari ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] VACUUM and transactions in different databases
Hi Ragnar could not duplicate this. I also cannot reproduce the hanging VACUUM FULL. The problem remains that the dead tuples cannot be removed. dummy1=# vacuum full; VACUUM dummy1=# SELECT relpages, reltuples FROM pg_class WHERE relname ='dummy1'; -[ RECORD 1 ]- relpages | 997 reltuples | 10 dummy1=# analyze verbose; ... INFO: analyzing public.dummy1 INFO: dummy1: scanned 997 of 997 pages, containing 5000 live rows and 95000 dead rows; 3000 rows in sample, 5000 estimated total rows ... dummy1=# select * from pg_stat_activity; -[ RECORD 1 ]-+-- datid | 21529 datname | dummy1 procpid | 2065 usesysid | 10 usename | postgres current_query | command string not enabled query_start | backend_start | 2006-12-07 21:03:54.89+01 client_addr | client_port | -1 -[ RECORD 2 ]-+-- datid | 21530 datname | dummy2 procpid | 2152 usesysid | 10 usename | postgres current_query | command string not enabled query_start | backend_start | 2006-12-07 21:07:59.973477+01 client_addr | client_port | -1 the transaction in db dummy2 performed an update and select count(*) and is still running. 
dummy1=# select * from pg_locks; -[ RECORD 1 ]-+- locktype | relation database | 21530 relation | 21540 page | tuple | transactionid | classid | objid | objsubid | transaction | 85385 pid | 2152 mode | AccessShareLock granted | t -[ RECORD 2 ]-+- locktype | relation database | 21530 relation | 21540 page | tuple | transactionid | classid | objid | objsubid | transaction | 85385 pid | 2152 mode | RowExclusiveLock granted | t -[ RECORD 3 ]-+- locktype | relation database | 21529 relation | 10342 page | tuple | transactionid | classid | objid | objsubid | transaction | 85925 pid | 2065 mode | AccessShareLock granted | t -[ RECORD 4 ]-+- locktype | transactionid database | relation | page | tuple | transactionid | 85925 classid | objid | objsubid | transaction | 85925 pid | 2065 mode | ExclusiveLock granted | t -[ RECORD 5 ]-+- locktype | transactionid database | relation | page | tuple | transactionid | 85385 classid | objid | objsubid | transaction | 85385 pid | 2152 mode | ExclusiveLock granted | t Thanks Conni ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] Indexes and Inheritance
Thanks to Erik, Jeff, Richard for their help. I have a further inheritance question: do child tables inherit the indexes created on parent columns, or do they need to be specified separately for each child table? I.e., created via CREATE INDEX. I assume at least that the implicit index created by a primary key would inherit, but I don't know if that assumption is safe. Thanks, Keary Suska Esoteritech, Inc. Demystifying technology for your home or business ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] tsearch2: pg8.1 to pg8.2
I am trying to copy a database that uses tsearch2 from pg8.1 to pg8.2. All I did to install tsearch2 was cd to the contrib/tsearch2 directory, then make, make install. I then dumped the database from pg8.1 and used psql -f filename db to load it into pg8.2. Attempting a query gives an error: lib2=# select * from item where idxTitle @@ to_tsquery('default', 'money'); ERROR: No dictionary with name 'en_stem' Is there a document that describes the necessary steps to convert to the upgraded tsearch2? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Vote for your favorite database
On Thu, 2006-12-07 at 13:04, Zoltan Boszormenyi wrote: Scott Marlowe írta: ouch! I stand corrected. Of course, I didn't register to vote either. Are there more firebird users in hungary (.hr right?) than postgresql maybe? I wonder. Then stand a bit longer. :-) Hungary is .hu, .hr is for Hrvatska which is Croatia for English speakers. Ya know, considering I'm a huge mplayer fan, which is from a site in Hungary, you'd think I'd pick up on such things... ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Indexes and Inheritance
Keary Suska wrote: Thanks to Erik, Jeff, Richard for their help. I have a further inheritance question: do child tables inherit the indexes created on parent columns, or do they need to be specified separately for each child table? I.e., created via CREATE INDEX. I assume at least that the implicit index created by a primary key would inherit, but I don't know if that assumption is safe. Thanks, Keary Suska Esoteritech, Inc. Demystifying technology for your home or business ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq No. In addition, child tables do not inherit primary keys. Think of it like this: if you did a \d to describe a table that you were going to use as a parent table in an inheritance chain, the child table would get everything in the table listing the columns but nothing beneath the table. -- erik jones [EMAIL PROTECTED] software development emma(r) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Online index builds
On Thu, 2006-12-07 at 20:07 +, Ragnar wrote: On fim, 2006-12-07 at 09:27 -0800, Jeff Davis wrote: On Thu, 2006-12-07 at 12:26 +, Ragnar wrote: On mið, 2006-12-06 at 18:22 -0800, Jeff Davis wrote: Interesting, I was just thinking about this today as well. I am thinking it would be nice if we could: ALTER TABLE SET PRIMARY KEY INDEX foo_pkey; If it's already got a primary key we switch the primary key to be the new primary key (throwing an error if the columns don't match up to the existing primary key, not sure what you mean by this In my suggestion, if the table already has a primary key, then you can only set the primary key index to be an index with exactly the same columns as the existing primary key index. Why would you do that? I saw the use-case of when you have a primary key and a surrogate key , and decided you wanted the surrogate key to be the primary key after all, maybe because the natural key you had used turned out not to be a good candidate. You've got a valid use-case, but it's completely different from the one I suggested. I wanted to be able to build an index concurrently (with the new functionality in 8.2) and then switch the primary key to use that new index, and then drop the old index. The reason is because that allows a 0-downtime index rebuild on a primary key's index without losing it's primary key status. I think all you need to do what you want is something like: ALTER TABLE foo DROP CONSTRAINT foo_pkey KEEP INDEX; Because then you could drop the primary key status on a column without affecting the column or the index, then use my suggested syntax to switch the primary key status to a different index like so: ALTER TABLE foo SET PRIMARY KEY INDEX foo_othercolumn_index; Regards, Jeff Davis ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] The relative stability of different procedural languages
I'm facing a particular task for which I need any procedural language but PL/PgSQL. I can't use PL/PgSQL because it doesn't allow me to use local variables such as new and old from a dynamic command. However, I've been unable to find any useful information on the stability of each particular procedural language. I've seen suggestions for creating a PL comparison matrix, but, short of such a matrix, I couldn't even find essential information regarding the vitality and stability of the various PLs. I'm comfortable with both Ruby and Perl, but wouldn't mind brushing up a bit on my Python. I'm also comfortable with PHP but have to say that the idea of running it within Postgres doesn't appeal to me very much. So, how do the different procedural languages stack up against each other in terms of stability? After all, adding a language that could cause random segfaults or huge memory leaks is simply not an option for a production DB. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] concatenation operator || with null array
Ok, solved the problem with COALESCE. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] concatenation operator || with null array
Hello! When I try to fill an array with the concatenation operator, like UPDATE test SET myint = myint || ARRAY[123] WHERE id = 1 that before that statement was null, then it is also null after that statement. But if there is already something in that array and I execute that statement, then everything works fine and one can find the 123 there, too. Is this the normal behavior? Is there a way to concatenate to null arrays as well, or do I have to test this inside my script, and if it is null fill it normal for the first time? Best regards, Kai ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] concatenation operator || with null array
stroncococcus wrote: Ok, solved the problem with COALESCE. Hm, I didn't really solve it ... just solved it for text columns, but not for integer arrays. I can use this for text COALESCE(textcol, '') || 'str ' but how do I use this for integers? Is there a way to create an empty integer array and do such a thing ... COALESCE(intarraycol, *empty int array*) || 5 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
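For integer arrays the same COALESCE trick works; it just needs a typed empty-array literal. A minimal sketch, assuming the hypothetical table and column names from the earlier message in this thread:

```sql
-- Hypothetical schema from the thread:
-- CREATE TABLE test (id int, myint integer[]);

-- COALESCE substitutes a typed empty array when the column is NULL,
-- so the concatenation no longer yields NULL:
UPDATE test
SET myint = COALESCE(myint, '{}'::integer[]) || ARRAY[123]
WHERE id = 1;
```

The cast on the empty literal matters: a bare `'{}'` would be of unknown type, while `'{}'::integer[]` lets the `||` operator resolve to array concatenation.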
[GENERAL] can this be done in one SQL select statement?!
I have two tables: -Table1: one column of type TEXT containing labels for nodes in a graph -Table 2: two columns of type TEXT. first column contains node labels in a graph. second a list of node labels that the node label in column one is connected to. Example: Table1: NODE1 NODE2 Table 2: NODE1  NODE2 NODE3 NODE2  NODE4 NODE3 Goal: split column2 in table2 to individual node names, find a unique list of all node names obtained after splitting column2 of table2 and insert the ones not already in table1 in table1. Thanks in advance. S ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
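One way to do this in a single statement, assuming the labels in column2 are space-separated, is to split with string_to_array and flatten with unnest. Note that unnest is core only in PostgreSQL 8.4 and later; on the 8.1/8.2 of this era a small set-returning helper function would be needed in its place. Table and column names below are hypothetical:

```sql
-- Hypothetical schema:
--   table1(label text)                 -- known node labels
--   table2(node text, neighbors text)  -- neighbors: space-separated labels
INSERT INTO table1 (label)
SELECT DISTINCT n.label
FROM (
    -- split each neighbor list into one row per label
    SELECT unnest(string_to_array(neighbors, ' ')) AS label
    FROM table2
) n
WHERE n.label <> ''                                -- skip empty fragments
  AND NOT EXISTS (SELECT 1 FROM table1 t           -- only labels not yet known
                  WHERE t.label = n.label);
```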
[GENERAL] SET statement_timeout
We run VACUUM ANALYZE as a cron job on PostgreSQL v8.1.3 on an Opteron box running FreeBSD 6.0-RELEASE #10. We set statement_timeout to 720 in postgresql.conf. Since this task often takes longer than two hours we encounter the following messages in our log files: 2006-11-30 00:03:31 CST ERROR: canceling statement due to statement timeout 2006-11-30 00:03:31 CST STATEMENT: VACUUM ANALYZE VERBOSE; No big deal. We simply modified the cron job to: set statement_timeout=0; VACUUM ANALYZE VERBOSE; Should work, right? Now we get the following entries in our log files: 2006-11-30 00:03:31 CST ERROR: canceling statement due to statement timeout 2006-11-30 00:03:31 CST STATEMENT: set statement_timeout=0; VACUUM ANALYZE VERBOSE; I imagine that I have a silly little mistake going on but I just can't see it. Any ideas? Thanks ... -- Andrew Rost National Operational Hydrologic Remote Sensing Center (NOHRSC) National Weather Service, NOAA 1735 Lake Dr. West, Chanhassen, MN 55317-8582 Voice: (952)361-6610 x 234 Fax: (952)361-6634 [EMAIL PROTECTED] http://www.nohrsc.noaa.gov ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Restore database from files (not dump files)?
In article [EMAIL PROTECTED], [EMAIL PROTECTED] says... On Wed, Dec 06, 2006 at 12:16:35PM -0800, wheel wrote: re Bruce Momjian Wow, what an unfriendly dude! Well, he's one of the very guys who make all this (PostgreSQL, that is) happen for us. Karsten Yes I know he's a mainstay here, and figured the crew wouldn't want to ruffle his feathers with an admonishment. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] The relative stability of different procedural languages
On 7 Dec 2006 14:02:53 -0800, BigSmoke [EMAIL PROTECTED] wrote: I'm facing a particular task for which I need any procedural language but PL/PgSQL. I can't use PL/PgSQL because it doesn't allow me to use local variables such as new and old from a dynamic command. could you clarify what you are trying to do and why pl/pgsql cant do it? merlin ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] The relative stability of different procedural
On Thu, 2006-12-07 at 14:02 -0800, BigSmoke wrote: I'm facing a particular task for which I need any procedural language but PL/PgSQL. I can't use PL/PgSQL because it doesn't allow me to use local variables such as new and old from a dynamic command. PLPerl. And I say this as someone who doesn't even like Perl. Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] How to use outer join in update
In my current DBMS I can use create table t1 ( f1 int, f2 int ); create table t2 ( f3 int, f4 int ); update t1 set f1=t2.f3 from t1 left join t2 on t1.f2=t2.f4 This does not work in Postgres. How to convert this statement to Postgres 8.1 ? Andrus. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
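In PostgreSQL the joined table goes in a FROM clause on the UPDATE itself, which performs an inner join: t1 rows with no match in t2 are simply left untouched rather than set to NULL, which is the main semantic difference from the LEFT JOIN form above. A sketch of both options:

```sql
-- Inner-join form: unmatched t1 rows keep their old f1 value.
UPDATE t1
SET f1 = t2.f3
FROM t2
WHERE t1.f2 = t2.f4;

-- If unmatched rows really must end up NULL (true LEFT JOIN semantics),
-- a correlated subquery does it, assuming f4 values are unique:
UPDATE t1
SET f1 = (SELECT t2.f3 FROM t2 WHERE t2.f4 = t1.f2);
```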
[GENERAL] Help with Update Rule on View - 2nd Attempt
I tried to post this the other day, but didn't get any responses and never saw it show up in the digest. Here it is again if anyone can offer any insight: I'm trying to create a schema in which there will be a simple view for each table that will have the same columns and can be acted on in the same way as the underlying table. An example of one table and its view would be: CREATE TABLE test (id int, text_field varchar(100)); CREATE VIEW _test AS SELECT * FROM test; I'd like to be able to create both the views and the insert, update, delete rules for the views in an automated fashion via a script that uses the information schema to get all of the table names. All is fine and good with the insert and delete rules and no problem to automatically generate this: CREATE RULE _test_oi_rule AS ON INSERT TO _test DO INSTEAD INSERT INTO test VALUES (NEW.*); CREATE RULE _test_od_rule AS ON DELETE TO _test DO INSTEAD DELETE FROM test WHERE id = OLD.id; However, I'm not sure how to create the update rule without having to go through the gory task of specifying each column by name. Yes, I could also use the information schema to automate this as well, but it just seems ugly. Is there any way to create an update rule that's something like this: CREATE RULE _test_ou_rule AS ON UPDATE TO _test SET test.* = NEW.*; -- or even better a command that will only update changed columns (i.e., WHERE NEW.* <> OLD.*) I imagine I could instead delete the old record and insert the new one, but that doesn't seem right either and seems like it could be perilous. Maybe I'm overlooking something obvious, but any help to find a nice clean solution would be appreciated. Thanks, JL ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
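There is no NEW.*-style shorthand for the SET list of an UPDATE, so the columns do have to be spelled out; for the two-column test table above, the generated rule would look something like this sketch:

```sql
-- Update rule for the _test view: each column must be listed
-- explicitly in the SET clause (a script can generate this list
-- from information_schema.columns).
CREATE RULE _test_ou_rule AS ON UPDATE TO _test
DO INSTEAD
UPDATE test
SET id         = NEW.id,
    text_field = NEW.text_field
WHERE id = OLD.id;
```

Since the rule is generated per table anyway, emitting the column list from the same information-schema query that finds the table names adds little extra work.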
[GENERAL] loading data, creating indexes, clustering, vacuum...
Hi everyone, Looking for a small bit of advice... I have a script that updates several tables with large amounts of data. Before running the updates, it drops all indexes for optimal performance. When the updates have finished, I run the following procedure: recreate the indexes cluster the tables vacuum full analyze on the tables I was hoping an expert could comment on the optimal way to order these three commands. For instance I have a hunch that creating the indexes first (as I do now) could slow down the clustering - perhaps the row locations in the indexes all have to be updated as the cluster command shifts their locations? And perhaps vacuuming should be done before clustering so that dead tuples aren't in the way? Of course I could just test every combination until I get it right, but I'd like to have a good understanding as well. Any insight would be much appreciated. Thank you, Mark ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] tsearch2: pg8.1 to pg8.2
Rick, did you load tsearch2 itself into your database ? Oleg On Thu, 7 Dec 2006, Rick Schumeyer wrote: I am trying to copy a database that uses tsearch2 from pg8.1 to pg8.2. All I did to install tsearch2 was cd to the contrib/tsearch2 directory, then make, make install. I then dumped the database from pg8.1 and used psql -f filename db to load in into pg8.2. Attempting a query gives an error: lib2=# select * from item where idxTitle @@ to_tsquery('default', 'money'); ERROR: No dictionary with name 'en_stem' Is there a document that describes the necessary steps to convert to the upgraded tsearch2? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] The relative stability of different procedural languages
On Dec 7, 11:07 pm, [EMAIL PROTECTED] (Merlin Moncure) wrote: On 7 Dec 2006 14:02:53 -0800, BigSmoke [EMAIL PROTECTED] wrote: I'm facing a particular task for which I need any procedural language but PL/PgSQL. I can't use PL/PgSQL because it doesn't allow me to use local variables such as new and old from a dynamic command. could you clarify what you are trying to do and why pl/pgsql cant do it? I'm dealing with a trigger function which needs to check the nullness of a column in 'new' and 'old'. The catch is that the trigger function needs to take the name of that column as an argument. (I've tried a kludge which stores 'new' and 'old' in a temporary table, but this kludge seems too unreliable to trust.) - Rowan ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
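For what it's worth, in PL/Perl the row images are plain hash references in $_TD, so a column name passed as a trigger argument can be looked up directly, no dynamic SQL needed. A hedged sketch (function, table, and column names hypothetical):

```sql
CREATE OR REPLACE FUNCTION check_col_not_null() RETURNS trigger AS $$
    # The column to check is passed as the first trigger argument.
    my $col = $_TD->{args}[0];

    # $_TD->{new} (and $_TD->{old}, for UPDATE/DELETE) are hash refs
    # of the row images; a NULL column shows up as an undefined value.
    if (!defined $_TD->{new}{$col}) {
        elog(ERROR, "column $col must not be NULL");
    }
    return;   # undef: let the operation proceed unchanged
$$ LANGUAGE plperl;

-- Attach it with the column name as the trigger argument:
-- CREATE TRIGGER check_col BEFORE INSERT OR UPDATE ON some_table
--     FOR EACH ROW EXECUTE PROCEDURE check_col_not_null('some_column');
```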
Re: [GENERAL] The relative stability of different procedural
On Dec 7, 11:11 pm, [EMAIL PROTECTED] (Joshua D. Drake) wrote: On Thu, 2006-12-07 at 14:02 -0800, BigSmoke wrote: I'm facing a particular task for which I need any procedural language but PL/PgSQL. I can't use PL/PgSQL because it doesn't allow me to use local variables such as new and old from a dynamic command.PLPerl. And I say this as someone who doesn't even like Perl. Joshua D. Drake Thanks heaps! That's the answer I was looking for. - Rowan ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] The relative stability of different procedural languages
BigSmoke wrote: On Dec 7, 11:07 pm, [EMAIL PROTECTED] (Merlin Moncure) wrote: On 7 Dec 2006 14:02:53 -0800, BigSmoke [EMAIL PROTECTED] wrote: I'm facing a particular task for which I need any procedural language but PL/PgSQL. I can't use PL/PgSQL because it doesn't allow me to use local variables such as new and old from a dynamic command. could you clarify what you are trying to do and why pl/pgsql cant do it? I'm dealing with a trigger function which needs to check the nullness of a column in 'new' and 'old'. The catch is that the trigger function needs to take the name of that column as an argument. (I've tried a kludge which stores 'new' and 'old' in a temporary table, but this kludge seems too unreliable to trust.) Why can't you just use something like this: IF new.yourcolumnname IS NULL THEN END IF; I test for null in PLpgsql all the time. Am I missing something? Later, Tony Caduto AM Software Design http://www.amsoftwaredesign.com Home of PG Lightning Admin for Postgresql Your best bet for Postgresql Administration ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] loading data, creating indexes, clustering, vacuum...
On Thursday 07 December 2006 08:38, Angva [EMAIL PROTECTED] wrote: three commands. For instance I have a hunch that creating the indexes first (as I do now) could slow down the clustering - perhaps the row locations in the indexes all have to be updated as the cluster command shifts their locations? And perhaps vacuuming should be done before clustering so that dead tuples aren't in the way? clustering also removes the dead tuples. I would just: - create one index, the one to be clustered - cluster the table - create the remaining indexes -- Eat right. Exercise regularly. Die anyway. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
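Following that ordering, and using the CLUSTER syntax of the 8.1/8.2 era (CLUSTER indexname ON tablename), the load script might end like this sketch (table, column, and index names hypothetical):

```sql
-- 1. Create only the index the table will be clustered on:
CREATE INDEX big_table_key_idx ON big_table (key_col);

-- 2. Cluster on it; this rewrites the table and discards dead tuples,
--    so no VACUUM FULL is needed afterwards (8.1/8.2 syntax shown):
CLUSTER big_table_key_idx ON big_table;

-- 3. Build the remaining indexes against the compacted table:
CREATE INDEX big_table_other_idx ON big_table (other_col);

-- 4. Refresh planner statistics:
ANALYZE big_table;
```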
Re: [GENERAL] VACUUM and transactions in different databases
Cornelia Boenigk wrote: Hi Ragnar could not duplicate this. I also cannot reproduce the hanging VACUUM FULL. The problem remains that the dead tuples cannot be removed. [snip] I am interested in this. As one database cannot talk to another database in a transactional way, a long running transaction in one database should not affect the vacuuming of another database. From my limited understanding VACUUM takes the lowest open transaction number and only cleans up transactions with TID's lower than that. The reason I believe that it has to be cluster-wide is that the shared catalogs might be affected. Do shared catalogs follow MVCC or ACID strictly? I don't know, but I assume they don't follow both given my reading of the list. So if shared catalogs are the problem, what happens if you just vacuum the relevant table public.dummy1 and not the whole database, does the vacuum remove all the tuples that are dead? Is it possible to add logic for lazy vacuum that takes the lowest TID in our database when not vacuuming shared catalogs? This may already be the case, I don't know. Just putting forward suggestions. Russell Smith ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Restore database from files (not dump files)?
On Thu, 2006-12-07 at 02:41, wheel wrote: In article [EMAIL PROTECTED], [EMAIL PROTECTED] says... On Wed, Dec 06, 2006 at 12:16:35PM -0800, wheel wrote: re Bruce Momjian Wow, what an unfriendly dude! Well, he's one of the very guys who make all this (PostgreSQL, that is) happen for us. Karsten Yes I know he's a mainstay here, and figured the crew wouldn't want to ruffle his feathers with an admonishment.

Let's add up the score:

Bruce:
- works hard on the pgsql project
- gives selflessly of his time to help newbies
- is a genuinely pleasant guy to hang out with
- shows respect for people
- is often the voice of moderation in discussions on the list
- has a working email address so I can respond to him if needs be

Some person named wheel:
- has committed zero time or effort to this project
- has helped no one on the list, including himself
- has been singularly unpleasant to deal with
- shows no respect for those trying to help him
- is a divisive voice on an otherwise pleasant mailing list
- has no working email address, so I cannot be sure he will ever get this

I notice you didn't reply to my answer to your question. Not a howdy, a thanks or a request for clarification. Please, go use a different database, seriously.
Re: [GENERAL] Indexes and Inheritance
Erik Jones [EMAIL PROTECTED] writes: No. In addition, child tables do not inherit primary keys. Think of it like this: if you did a \d to describe a table that you were going to use as a parent table in an inheritance chain, the child table would get everything in the table listing the columns but nothing beneath the table. Not quite. CHECK constraints (and NOT NULL ones too) will be inherited. The main reason we don't yet inherit indexes/unique constraints is that the uniqueness would only be per-table, which is not what you'd expect. Eventually someone will think of a way to fix that ... regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
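The distinction is easy to see with a small example: the CHECK and NOT NULL constraints come along to the child, while the primary key's unique index does not (table names hypothetical):

```sql
CREATE TABLE parent (
    id  integer PRIMARY KEY,               -- unique index is NOT inherited
    val integer NOT NULL CHECK (val > 0)   -- these constraints ARE inherited
);

CREATE TABLE child () INHERITS (parent);

-- The child needs its own index (and, if wanted, its own per-table
-- unique constraint), since uniqueness is only enforced per table:
CREATE INDEX child_id_idx ON child (id);
```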
Re: [GENERAL] Online index builds
Jeff Davis [EMAIL PROTECTED] writes: I think all you need to do what you want is something like: ALTER TABLE foo DROP CONSTRAINT foo_pkey KEEP INDEX; Because then you could drop the primary key status on a column without affecting the column or the index, then use my suggested syntax to switch the primary key status to a different index like so: ALTER TABLE foo SET PRIMARY KEY INDEX foo_othercolumn_index; That seems like an awful lot of uglification simply to let the index be marked as primary key rather than just unique. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] tsearch2: pg8.1 to pg8.2
It was my understanding that running pg_dump creates a file that contains all the necessary commands to use tsearch2. That approach has worked for me to transfer my database from one pg8.1 server to another. I now see that it does *not* work from pg8.1 to pg8.2. At your suggestion I loaded tsearch2.sql before loading the pg_dump output. I get some errors in the second part, I believe because it attempts to load tsearch2 stuff from the pg8.1 database that conflicts with the pg8.2 stuff from tsearch2.sql. But, the queries seem to work. So perhaps the answer is, load tsearch2.sql, then load the result of running pg_dump on the 8.1 database, and ignore the errors? Oleg Bartunov wrote: Rick, did you load tsearch2 itself into your database ? Oleg On Thu, 7 Dec 2006, Rick Schumeyer wrote: I am trying to copy a database that uses tsearch2 from pg8.1 to pg8.2. All I did to install tsearch2 was cd to the contrib/tsearch2 directory, then make, make install. I then dumped the database from pg8.1 and used psql -f filename db to load it into pg8.2. Attempting a query gives an error: lib2=# select * from item where idxTitle @@ to_tsquery('default', 'money'); ERROR: No dictionary with name 'en_stem' Is there a document that describes the necessary steps to convert to the upgraded tsearch2? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] SET statement_timeout
andy rost [EMAIL PROTECTED] writes: No big deal. We simply modified the cron job to: set statement_timeout=0; VACUUM ANALYZE VERBOSE; Should work, right? Now we get the following entries in our log files: 2006-11-30 00:03:31 CST ERROR: canceling statement due to statement timeout 2006-11-30 00:03:31 CST STATEMENT: set statement_timeout=0; VACUUM ANALYZE VERBOSE; Hm, are you doing it like this: psql -c "set statement_timeout=0; VACUUM ANALYZE VERBOSE;" ... ? I am not totally certain without looking at the code, but I think in that scenario the SET would only take effect at the next command string (which of course there won't be in a -c case). postgres.c defines a statement as whatever is sent in a single Query message, and psql -c just crams its entire argument into a single Query --- which is unlike psql's behavior otherwise. You could instead do echo "set statement_timeout=0; VACUUM ANALYZE VERBOSE;" | psql ... in which case psql will break its input apart at semicolons, and you'll get the behavior you expect. BTW, you might instead consider doing ALTER USER postgres SET statement_timeout=0 (or whatever userid you run the VACUUM as). This would make all superuser activities immune to the timeout, which is probably a good idea. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Online index builds
On Thu, 2006-12-07 at 18:11 -0500, Tom Lane wrote: Jeff Davis [EMAIL PROTECTED] writes: I think all you need to do what you want is something like: ALTER TABLE foo DROP CONSTRAINT foo_pkey KEEP INDEX; Because then you could drop the primary key status on a column without affecting the column or the index, then use my suggested syntax to switch the primary key status to a different index like so: ALTER TABLE foo SET PRIMARY KEY INDEX foo_othercolumn_index; That seems like an awful lot of uglification simply to let the index be marked as primary key rather than just unique. Agreed. It's just a thought. The reason it came to my mind is because some applications, like Slony, use the primary key by default. After reading through the archives, it looks like Gregory Stark suggested a REINDEX CONCURRENTLY, which would certainly solve the awkwardness of maintenance on a primary key. I didn't see much objection, maybe it's worth consideration for 8.3? Regards, Jeff Davis ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] porting time calcs to PG
I'm trying to port an MS statement that's a bit involved with timestamps, and I don't see anything in the docs to lead me forward. It's basically a select statement, looking for records with a timestamp within a certain range, where that range is calculated with one of the fields. The WHERE clause that I use in SQL Server is: getdate() + ((2100 + 5 + (9*Points)) / 86400) = DueTime Where the numbers are actually parameters passed in to the function. Other than changing getdate() to now(), I'm not sure how to change the + interval to be effective. All the docs I see use something like interval '1 hour' - not sure how to put a calculated value in the quotes. Is this possible? ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Online index builds
Jeff Davis [EMAIL PROTECTED] writes: After reading through the archives, it looks like Gregory Stark suggested a REINDEX CONCURRENTLY, which would certainly solve the awkwardness of maintenance on a primary key. I didn't see much objection, maybe it's worth consideration for 8.3? That idea was bounced on the grounds that it requires a DROP INDEX to occur somewhere, and that can't be concurrent, and you'd surely not like to go through all the work of a CONCURRENTLY rebuild only to get a deadlock failure at the very end. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Online index builds
On fim, 2006-12-07 at 13:57 -0800, Jeff Davis wrote: On Thu, 2006-12-07 at 20:07 +, Ragnar wrote: On fim, 2006-12-07 at 09:27 -0800, Jeff Davis wrote: On Thu, 2006-12-07 at 12:26 +, Ragnar wrote: On mið, 2006-12-06 at 18:22 -0800, Jeff Davis wrote: Interesting, I was just thinking about this today as well. I am thinking it would be nice if we could: ALTER TABLE SET PRIMARY KEY INDEX foo_pkey; You've got a valid use-case, but it's completely different from the one I suggested. I wanted to be able to build an index concurrently (with the new functionality in 8.2) and then switch the primary key to use that new index, and then drop the old index. The reason is because that allows a 0-downtime index rebuild on a primary key's index without losing its primary key status. my point was just that 'primary key' is really just a property of a set of attributes, and it is just incidental that postgres enforces this property with an index. so if an ALTER TABLE SET PRIMARY KEY is implemented, it should involve a set of attributes, but not an index. in your use case, the ALTER should not really be needed. let's say you have PRIMARY KEY (a,b) on some table. you decide you want to rebuild the primary key concurrently. just build a new index on (a,b). if you then drop the old index, the primary key constraint can still be enforced by the new index, so the DROP should be allowed to proceed, without affecting the constraint. on the other hand, the PRIMARY KEY property is really only there because the standards say so, but does not have a great value in my opinion, so the ability to alter it would not be high on my priority list. gnari ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] porting time calcs to PG
[EMAIL PROTECTED] wrote: I'm trying to port an MS statement that's a bit involved with timestamps, and I don't see anything in the docs to lead me forward. It's basically a select statement, looking for records with a timestamp within a certain range, where that range is calculated with one of the fields. The WHERE clause that I use in SQL Server is: getdate() + ((2100 + 5 + (9*Points)) / 86400) = DueTime Where the numbers are actually parameters passed in to the function. Other than changing getdate() to now(), I'm not sure how to change the + interval to be effective. All the docs I see use something like interval '1 hour' - not sure how to put a calculated value in the quotes. Is this possible? For the interval part, you can try interval '1 minute' * some number ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
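To make that concrete, here is one sketch of the whole clause translated to PostgreSQL. The table name is invented, Points and DueTime are assumed to be an integer and a timestamp column as in the original post, and the comparison operator is written as >= since the bare = in the quoted clause looks like it may have been mangled in transit:

```sql
-- Hypothetical translation of the SQL Server predicate. SQL Server's
-- getdate() arithmetic works in fractional days (hence the /86400);
-- in PostgreSQL we instead multiply the raw second count by an interval.
SELECT *
  FROM jobs  -- invented table name
 WHERE now() + (2100 + 5 + 9 * Points) * interval '1 second' >= DueTime;
```

The key trick is exactly the one suggested above: interval '1 second' (or '1 minute') is a constant, and multiplying it by any computed number yields a computed interval, so nothing needs to go inside the quotes.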
Re: [GENERAL] Asynchronous replication of a PostgreSQL DB to a
On Thu, 2006-12-07 at 09:56 +0100, Markus Wollny wrote: My first problem is that the PostgreSQLs schema is not stable, so if I simply write a couple of jobs to transport the data, I need to alter these jobs and the MySQL schema whenever there are changes to the PG schema. The second problem lies in PostgreSQL-specifics such as tsearch2 - I actually do not need nor want to replicate such metadata. Custom datatypes and functions should also be exempt from this kind of replication. My hopes aren't all too high that there's an easy way to accomplish what I wish to do, so any advice would be very much welcome - even a "can't be done that way" by somebody who has tried to travel that path before Wow, tough problem. You're trying to do two quite difficult things at once: replicate schema changes and convert from PostgreSQL to MySQL. I think your best bet might be to hack PgPool to do what you need. First, do some analysis to figure out what kinds of schema changes are pushed onto PostgreSQL. Since the schema changes are automated, you should see a few types of queries that change the schema. Maybe all the commands begin with ALTER or CREATE. Then, in PgPool, hack it to recognize any DDL statements, and put them in a log in PostgreSQL (the query in raw text), in addition to passing it to PostgreSQL like normal. You then need to make a process that reads this log of DDL changes, and can recognize the types of DDL statements they are, and convert them to the MySQL equivalent, and put them in MySQL. In order for this to work, the statements must match one of a few patterns so that you can pre-define the translations necessary. This won't work for people issuing arbitrary schema changes. Then, for the data replication, do something clever with triggers and a function that can use the MySQL protocol. Make sure that the DDL stuff makes it to the MySQL database before your trigger tries to send the data.
I have no idea whether that will work for you, but there isn't going to be a simple solution. You should strongly reconsider your options; you're asking for a lot of trouble. Regards, Jeff Davis ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
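As a concrete starting point for the DDL-log part of that scheme, something along these lines could live in PostgreSQL and be appended to by the hacked PgPool; the table name and columns are invented, and a separate translator process would poll it and replay rewritten statements on MySQL:

```sql
-- Hypothetical DDL log for the PgPool-based scheme sketched above.
CREATE TABLE ddl_log (
    id         serial PRIMARY KEY,
    logged_at  timestamptz NOT NULL DEFAULT now(),
    statement  text NOT NULL,                 -- raw DDL text as seen by PgPool
    replayed   boolean NOT NULL DEFAULT false -- set by the MySQL-side replayer
);
```

Keeping the log inside PostgreSQL means the DDL and the row changes it affects can be ordered against each other, which is what makes "DDL arrives at MySQL before the data" enforceable at all.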
Re: [GENERAL] Online index builds
On Thu, 2006-12-07 at 18:51 -0500, Tom Lane wrote: Jeff Davis [EMAIL PROTECTED] writes: After reading through the archives, it looks like Gregory Stark suggested a REINDEX CONCURRENTLY, which would certainly solve the awkwardness of maintenance on a primary key. I didn't see much objection, maybe it's worth consideration for 8.3? That idea was bounced on the grounds that it requires a DROP INDEX to occur somewhere, and that can't be concurrent, and you'd surely not like to go through all the work of a CONCURRENTLY rebuild only to get a deadlock failure at the very end. I don't understand. CREATE INDEX CONCURRENTLY can't be run in a transaction block anyway, so a REINDEX CONCURRENTLY wouldn't either. So how (or when) would you deadlock? I see it as the following logical operations: (1) CREATE INDEX CONCURRENTLY tmp; (2) swap the relfilenode of the old index and new index (3) DROP INDEX tmp; If this was all already hashed out on -hackers, you can point me to the discussion if it's easier. Regards, Jeff Davis ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Online index builds
Jeff Davis [EMAIL PROTECTED] writes: I don't understand. CREATE INDEX CONCURRENTLY can't be run in a transaction block anyway, so a REINDEX CONCURRENTLY wouldn't either. So how (or when) would you deadlock? The problem is you need to upgrade from a nonexclusive table lock to an exclusive one before you could drop the old index. If someone else is waiting to get a conflicting lock, boom ... regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Online index builds
On Thu, 2006-12-07 at 19:44 -0500, Tom Lane wrote: Jeff Davis [EMAIL PROTECTED] writes: I don't understand. CREATE INDEX CONCURRENTLY can't be run in a transaction block anyway, so a REINDEX CONCURRENTLY wouldn't either. So how (or when) would you deadlock? The problem is you need to upgrade from a nonexclusive table lock to an exclusive one before you could drop the old index. If someone else is waiting to get a conflicting lock, boom ... I think what I'm confused about is how these non-transactional commands work (like VACUUM, etc). Are they still transactions, and just can't be run in a block? My original thinking was that the shared lock could be unlocked before the exclusive lock is taken to switch the relfilenodes and to drop the index. However, if it is a real transaction, clearly you can't unlock in the middle. Is it safe to manually run the sequence I previously suggested? If so it seems like there could be a command to do it properly. I tried it and it appeared to work. Regards, Jeff Davis ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
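For a plain, non-constraint index, the effect of the sequence Jeff describes can be approximated today with ordinary SQL (index and column names here are invented). The catch is exactly the one Tom raises: the DROP needs a brief exclusive lock, and the rebuilt index does not carry over primary key status:

```sql
-- Manual "reindex concurrently" sketch for an ordinary index.
-- Not suitable for the index backing a PRIMARY KEY or UNIQUE constraint,
-- which is the gap discussed in this thread.
CREATE INDEX CONCURRENTLY foo_a_b_new ON foo (a, b);
DROP INDEX foo_a_b;          -- takes a short exclusive lock; can deadlock
ALTER INDEX foo_a_b_new RENAME TO foo_a_b;
```

This is presumably why Jeff's experiment "appeared to work": each statement is an independent transaction, so there is no long-held lock to upgrade, only the short window around the DROP.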
Re: [GENERAL] Restore database from files (not dump files)?
Uh, do you have the entire /data directory tree? If so, just restore the directory start it up a binary. Yes I have the entire dir/file set. But what does If so, just restore the directory start it up a binary mean? Restore the dir, you mean copy it to its location under \base? What does start it up a(s?) binary mean? I'm new to postgres. I have copied the folders back to the base dir (like C:\PostgreSQL\data \base\16404) if that's step one but what after that? OK, one more try: - Stop postgres. - Restore the ***ENTIRE*** ***data*** directory tree. Not base, not something else to its location under \base. The entire data directory tree, just like the helpful message said. - Start postgres. Now, if you did that, and the raw files were from the same OS, architecture, and postgres build settings, everything is fine. Otherwise, you're out of luck. I see...I'm using pgAdmin III with 8.1.5, and the dbs don't reappear in the pgAdmin list of databases automatically. I doubt that. -- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 722-0567 voice ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Cast boolean to text
You can create a cast like this: create or replace function bool_to_text (boolean) returns char strict language sql as ' select case when $1 then \'t\' else \'f\' end; '; create cast (boolean as char(1)) with function bool_to_text(boolean) as implicit; Thank you, Shoaib On 12/7/06, Tom Lane [EMAIL PROTECTED] wrote: Willy-Bas Loos [EMAIL PROTECTED] writes: I've noticed that postgresql 8.1.0 can cast a boolean to text, but version 8.1.5 CAN'T. Better check again --- there has never been a standard cast from bool to text. Sure you didn't install a custom one in your 8.1.0 database? regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
[GENERAL] pg_log missing
Hello all, I have built version 8.2 from source code on Red Hat Linux 4.0, but I am not able to figure out why the directory pg_log is missing... and as it is missing, where are the log files? Can anybody help? Thanks Regards Ashish Karalkar ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] pg_log missing
Ashish Karalkar [EMAIL PROTECTED] writes: I have build 8.2 version from source code on redhat linux 4.0 but i am not able to figure out why directory pg_log is missing... It sounds like you are expecting the source distribution to have the same default logging setup that the RPMs use. T'aint so ... check the contents of postgresql.conf. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
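For reference, the effect the RPM packages achieve can be reproduced on a source build by editing postgresql.conf with the 8.2-era setting names (the values below are the common RPM-style choices, shown as a sketch rather than the exact packaged defaults):

```
# postgresql.conf -- 8.2 names (logging_collector only arrived later)
redirect_stderr = on            # capture server stderr into log files
log_directory = 'pg_log'        # relative to the data directory
log_filename = 'postgresql-%a.log'
```

With redirect_stderr off (the source-build default), log output simply goes wherever the postmaster's stderr points, which is why no pg_log directory ever appears.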