[SQL] shared memory leak in 7.0.2?
All, I've been running 7.0.2 for the last month or so, and I've had to reboot my Red Hat Linux box twice to clear up a shared memory leak issue. Essentially, with the DB running for about two weeks with large amounts of usage, the OS eventually runs out of shared memory and the DB crashes and fails to restart. The only way to get the DB back online is to reboot. Has anyone seen this? Or does anyone have any suggestions on how to fix it? BTW, here is my startup line for the postgres daemon:

su -l postgres -c 'exec /usr/local/pgsql/bin/postmaster -B 384 -D/usr/local/pgsql/data -i -S -o -F'

Pierre
[SQL] Table design issue....
Hi all, I've got a situation where I need to be able to query for the same sort of data across multiple tables. Let me give some example tables, then explain.

create table t1 ( t_attr1 text[], t_attr2 text[] );
create table a1 ( a_attr1 text[], a_attr2 text[] );
create table c1 ( c_attr1 text[], c_attr2 text[], c_attr3 text[] );

In each of the above tables, *_attr*[1] contains a flag that determines what type of attribute it is:

t1.t_attr1[1] == a1.a_attr2[1] == c1.c_attr3[1] == FLAG

In other words, which attribute carries the specific flag in question is not known at runtime, unless I keep a table with the column names and table names set up. Also, new *1 tables could be created dynamically with new attr*'s, and the number of columns within the tables isn't going to be the same. What I need to be able to do is say something like: "For ALL *1 tables with *_attr*[1] == FLAG return rows with VALUE". Ideas? Comments? Suggestions? Am I being crazy? Pierre
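[A sketch of the registry approach mentioned above, in modern PL/pgSQL; all names (attr_registry, tables_with_value) are hypothetical, and it assumes the searched value lives in the second array slot:

create table attr_registry (
    tbl  text not null,   -- e.g. 't1'
    col  text not null,   -- e.g. 't_attr1'
    flag text not null    -- the value found in col[1]
);

create or replace function tables_with_value(p_flag text, p_value text)
returns setof text as $$
declare
    r   record;
    hit record;
begin
    for r in select tbl, col from attr_registry where flag = p_flag loop
        -- dynamically probe each registered table/column pair
        for hit in execute 'select 1 from ' || quote_ident(r.tbl)
                || ' where ' || quote_ident(r.col) || '[2] = '
                || quote_literal(p_value) || ' limit 1' loop
            return next r.tbl;
        end loop;
    end loop;
    return;
end;
$$ language plpgsql;

Since the tables have different column sets, the function returns the matching table names rather than the rows themselves; a second, per-table query can then fetch the rows.]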
[SQL] timestamp conversion to unsigned long?
All, Perhaps I'm not using the correct datatype, but I'd like to be able to convert a timestamp over to an unsigned long to be used within C code and compared to the output of time(). I can't seem to see any easy way of doing this using the built-in stuff for PostgreSQL. Ideas? Perhaps I'm using the wrong type? Pierre
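[One way, sketched here with a made-up events table and ts column, is to extract the Unix epoch on the server side, which is the same scale time() uses in C:

select date_part('epoch', now());                 -- seconds since 1970-01-01, as a float
select date_part('epoch', ts)::int8 from events;  -- as a whole number of seconds

Newer releases also spell this extract(epoch from ts). Note the result is signed; C's time_t is normally signed as well, so comparisons against time() work directly.]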
Re: [SQL] shared memory leak in 7.0.2?
Tom, Thanks for that command. I never knew that existed. The only reason I blame postgres at this point is that the only things that have changed on this machine in the past month are upgrading postgres to 7.0.2 and upgrading perl, and of the two, perl is used not nearly as much as postgres. Here is the current output of that ipc command:

[root@kahuna pierre]# ipcs -m -a

------ Shared Memory Segments --------
key         shmid  owner     perms  bytes    nattch  status
0x0052e2ca  0      postgres  700    144      2
0x0052e2c1  1      postgres  600    3769344  2
0x0052e2c7  2      postgres  600    66060    2
0x          3      llee      600    46084    6       dest
0x          4      www       600    46084    11      dest

------ Semaphore Arrays --------
key         semid  owner     perms  nsems  status
0x0052e2ce  0      postgres  600    16
0x0052e2cf  1      postgres  600    16

------ Message Queues --------
key         msqid  owner  perms  used-bytes  messages
0x          0      root   700    0           0

If postgres were to crash for some reason, would the shared memory be left in never-never land? If that were the case, and I'm using most if not all of the available shared memory on startup of postgres, then this would bring about the problems I'm seeing. Does this make sense? Pierre

Tom Lane wrote:
> [EMAIL PROTECTED] writes:
> > i've been running 7.0.2 for the last month or so, and I've had to
> > reboot my redhat linux box twice to clear up a shared memory leak
> > issue. Essentially with the DB running for about 2 weeks with large
> > amounts of usage, eventually the OS runs out of shared memory and the
> > db crashes and fails to restart. The only way to get the db back
> > online is to reboot.
>
> I haven't seen this reported before. Are you sure Postgres deserves
> the blame, rather than some other package? Postgres' use of shared
> memory is fixed for the life of a postmaster, so unless you're
> constantly restarting the postmaster I don't see how we could be leaking
> shmem.
>
> However, rather than speculate, let's get some hard facts. Try using
> "ipcs -m -a" to keep track of shared mem allocations, and see what usage
> is creeping up over time.
>
> regards, tom lane
[SQL] Subqueries in from clause
It looks like subqueries in the FROM clause are not supported by PostgreSQL. Am I right? If yes, are there any plans to provide this feature soon? I would like to use Postgres for teaching SQL as a replacement for a well known commercial product which I find too heavy, too not open source, too everything! For this kind of usage I need most SQL92 features, at least for query expressions. Thanks for any information or advice about this topic. Pierre

--
Pierre HABRAKEN - mailto:[EMAIL PROTECTED]
Tél: 04 76 82 72 48 - Fax: 04 76 82 72 87
IMAG-LSR BP72 38402 SAINT MARTIN D'HERES Cedex
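[For reference, the SQL92 feature in question is the "derived table": a parenthesized subquery used as a table in FROM. A minimal example with invented table and column names:

select d.dept, d.avg_sal
from (select dept, avg(salary) as avg_sal
      from emp
      group by dept) as d
where d.avg_sal > 1000;

Subselects in FROM were added in PostgreSQL 7.1, so they are indeed missing from 7.0.x.]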
Re: [SQL] Make a column case insensitive
When I use UTF-8 encoding for my database, the upper() and lower() functions break (they no longer process accented chars correctly). This is with the correct encoding, [EMAIL PROTECTED] I think, for LC_CTYPE et al.
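[A quick way to check whether your locale handles non-ASCII case mapping; the accented strings are just test data:

select upper('été'), lower('ÉTÉ');
-- with a matching UTF-8 locale, expect:  ÉTÉ | été
-- with a C/POSIX locale, only the plain ASCII letters change case

The locale is fixed at initdb time in these versions, so a mismatch generally means re-initializing the cluster with a UTF-8 locale.]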
Re: [SQL] degradation in performance
6000 inserts, each in its own transaction, will take very long. Group your inserts into one transaction and it'll be faster (maybe 1-2 minutes). Have your program generate a tab-delimited text file and load it with COPY, and you should be down to a few seconds.

On Tue, 21 Sep 2004 13:27:43 +0200, Alain Reymond <[EMAIL PROTECTED]> wrote:

Good afternoon, I created a database with Postgres 7.3.4 under Linux RedHat 7.3 on a Dell PowerEdge server. One of the tables is resultats(numbil, numpara, mesure, deviation) with an index on numbil. Each select on numbil returns up to 60 rows (that means 60 rows for one numbil with 60 different numpara), for example:

(20,1,500,3.5)
(20,2,852,4.2)
(20,12,325,2.8)
(21,1,750,1.5)
(21,2,325,-1.5)
(21,8,328,1.2)
etc.

This table now contains more than 6,500,000 rows and grows by 6000 rows a day. I have approximately 1,250,000 rows a year, so I have 5 years of data online. Now, an insertion of 6000 rows takes very long, up to one hour... I tried to insert 100,000 yesterday evening and it was not done in 8 hours. Do you have any idea how I can improve speed - apart from splitting the table every 2 or 3 years, which defeats the aim of a database! I thank you for your suggestions. Regards.

Alain Reymond
CEIA
Bd Saint-Michel 119
1040 Bruxelles
Tel: +32 2 736 04 58 - Fax: +32 2 736 58 02
[EMAIL PROTECTED]
PGP key sur http://pgpkeys.mit.edu:11371
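[To make that concrete, a sketch using the resultats table from the question; the file path is made up:

-- one transaction around the whole batch instead of 6000 implicit ones:
begin;
insert into resultats values (20, 1, 500, 3.5);
insert into resultats values (20, 2, 852, 4.2);
-- ... the remaining inserts ...
commit;

-- or, much faster, bulk-load a tab-delimited file in a single statement:
copy resultats (numbil, numpara, mesure, deviation) from '/tmp/resultats.txt';

COPY parses and loads all rows in one command, so per-statement and per-transaction overhead disappears entirely.]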
Re: [SQL] Howto turn an integer into an interval?
Try:

resend_interval * '1 second'::interval

This will convert your seconds into an interval.
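[For instance, if resend_interval is an integer column holding a number of seconds (the mails table and sent_at column here are hypothetical):

select * from mails
where sent_at + resend_interval * '1 second'::interval < now();

Multiplying an integer by a one-second interval scales it, yielding an interval of that many seconds.]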
Re: [SQL] need query advice
Argh. You could use contrib/intarray with a GiST index... Instead of N columns, use an integer[] and GiST-index it, then write the equivalent of:

where (intersection of the search array with the data array) has at least 5 elements
(or 4 elements)
(or at least 4 elements, order by the number of elements desc)

In your table defs, you can express your uniqueness constraint on the array with:

CHECK( uniq(yourarray) = yourarray AND icount(yourarray) = 6 )
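[Spelled out with intarray operators (the draws table and numbers column are invented; requires contrib/intarray installed):

create table draws (
    id      serial primary key,
    numbers integer[] check ( uniq(sort(numbers)) = sort(numbers)
                              and icount(numbers) = 6 )
);
create index draws_numbers_idx on draws using gist (numbers gist__int_ops);

-- rows sharing at least 5 elements with the searched set:
select *
from draws
where numbers && '{1,7,13,22,31,40}'::integer[]
  and icount(numbers & '{1,7,13,22,31,40}'::integer[]) >= 5;

The && overlap test lets the GiST index narrow the candidates; the icount() of the & intersection does the exact filtering.]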
Re: [SQL] Comparing two (largish) tables on different servers
Idea: write a program which connects to the two databases, creates a cursor on each to return the rows in order, then compares them as they come (row 1 from cursor 1 == row 1 from cursor 2, etc.). Fetch in batches. If there's a difference, you then know which row. I hope you have an index to sort on, to save you a huge disk sort.

On Tue, 9 Nov 2004 14:41:00 -0800, Gregory S. Williamson <[EMAIL PROTECTED]> wrote:

This is probably a silly question. Our runtime deployment of database servers (7.4) involves some redundant/duplicate databases. In order to compare tables (about 5 gigs each) on different servers I unload the things (takes a while etc.), sort them with a UNIX sort and then do a cksum on them. Is there any way to do this from inside postgres that anyone knows of? I looked through the manual and the contrib stuff and didn't see much...

Thanks,
Greg Williamson
DBA
GlobeXplorer LLC
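[The per-server half of that idea in plain SQL; mytable and its key column id stand in for the real names:

begin;
declare c cursor for select * from mytable order by id;
fetch 1000 from c;   -- repeat until it returns no rows
close c;
commit;

The driving program runs this on both servers, holds the two current batches in memory, and diffs them; the first mismatching row pinpoints the discrepancy without ever materializing 5 GB anywhere.]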
Re: [SQL] tree structure photo gallery date quiery
> I'm looking at the possibility of implementing a photo gallery for my
> web site with a tree structure, something like:

You don't really want a tree structure, because one day you'll want to put the same photo in two galleries. Suppose you take a very interesting photo of celery during your trip to China: you might want to create a 'Trip to China' folder, and also a 'Celery' folder for your other celery photos... Well, if you don't like vegetables, it also works with people, moods, geographic regions, themes, etc.

You could define this structure: tables describing themes and/or keywords, links from photos to these themes and keywords, and a folder defined as either a specific collection of photos, or a collection of one or several themes. From a tree, it becomes a bit more like a graph. Themes can also be organized and related together. This opens the path to easy searching and cataloguing; it is not that much more difficult to do, and in the end you'll have a much better system. (A sketch of such a schema follows below.)

> How would I go about creating a view to show a) the number of photos in
> a gallery and b) the timestamp of the most recent addition for a
> gallery, so that it interrogates all sub-galleries?

If you're concerned about performance, you should do this in a materialized view updated with triggers. If you can afford a seq scan every time, a few stored procs should do the trick.
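[A minimal sketch of the many-to-many structure and the stats view; every name here is invented:

create table photos    (photo_id serial primary key, taken timestamp, title text);
create table galleries (gallery_id serial primary key, name text);
create table photo_gallery (
    photo_id   integer references photos,
    gallery_id integer references galleries,
    primary key (photo_id, gallery_id)
);

-- per-gallery photo count and most recent addition (questions a and b)
create view gallery_stats as
select g.gallery_id, g.name,
       count(pg.photo_id) as nb_photos,
       max(p.taken)       as last_added
from galleries g
left join photo_gallery pg using (gallery_id)
left join photos p using (photo_id)
group by g.gallery_id, g.name;

Because a photo can appear in any number of galleries, "sub-galleries" stop being a special case: a gallery is just another collection the photo belongs to.]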
Re: [SQL] finding gaps in dates
> I have a logging application that should produce an entry in the
> database every minute or so, give or take a few seconds. I'm interested
> in finding out a: what minutes don't have a record and b: periods where
> the gap exceeded a certain amount of time.

Is this not the same question?

Answer to a: if your script is set to run at every minute + 00 seconds, and it ever runs one second early, truncating the timestamp to the minute will yield the previous minute and you're screwed. A simple workaround is to have your script run at every minute + 30 seconds.

Answer to b: you can do the following: examine the records in chronological order, each time computing the delay between record N and record N-1; if this delay is not one minute +/- a few seconds, you have detected an anomaly. Problem: you need to scan the whole table for anomalies every time. Solution: put an ON INSERT trigger on your log table which:

- checks the current time for sanity (i.e. is it +/- a few seconds from the expected time?); this solves part of a);
- looks at the timestamp of the latest row, computes the difference with the inserted one, and if it exceeds 1 minute + a few seconds, inserts a row in an anomaly logging table; this solves the rest of a) and b).

It's just an additional SELECT x FROM table ORDER BY timestamp DESC LIMIT 1, which has a negligible performance impact compared to your insert.
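[A sketch of that trigger; the log table, its ts column, and the 70-second tolerance are all assumptions:

create table log_anomalies (noticed timestamp, gap interval);

create or replace function check_log_gap() returns trigger as $$
declare
    prev timestamp;
begin
    -- fetch the most recent existing entry
    select ts into prev from log order by ts desc limit 1;
    if found and new.ts - prev > '70 seconds'::interval then
        insert into log_anomalies values (new.ts, new.ts - prev);
    end if;
    return new;
end;
$$ language plpgsql;

create trigger log_gap_check before insert on log
    for each row execute procedure check_log_gap();

With an index on log(ts), the ORDER BY ... LIMIT 1 lookup is a single index probe per insert.]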
Re: [SQL] Recursive SETOF function
> SELECT INTO cid count(*) FROM providers WHERE uid = child_provider;

Hey, hey. Better:

SELECT blablah FROM providers WHERE uid = child_provider LIMIT 1;
IF NOT FOUND THEN
    exit with error
ELSE
    do your stuff

Why scan more than 1 row when you just need existence? Or:

SELECT INTO cid parent_id FROM providers WHERE uid = cid;
WHILE FOUND LOOP
    RETURN NEXT cid;
    SELECT INTO cid parent_id FROM providers WHERE uid = cid;
END LOOP;

Not sure about the WHILE syntax but you get the idea.
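[The loop fleshed out as a complete set-returning function; it assumes a providers(uid integer, parent_id integer) schema where parent_id is NULL at the root:

create or replace function provider_parents(child_provider integer)
returns setof integer as $$
declare
    cid integer;
begin
    perform 1 from providers where uid = child_provider limit 1;
    if not found then
        raise exception 'unknown provider %', child_provider;
    end if;
    select into cid parent_id from providers where uid = child_provider;
    while cid is not null loop
        return next cid;
        select into cid parent_id from providers where uid = cid;
    end loop;
    return;
end;
$$ language plpgsql;

-- usage: SELECT * FROM provider_parents(42);

Looping on "cid is not null" rather than FOUND terminates cleanly at the root row, whose parent_id is NULL.]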
Re: [SQL] Way to stop recursion?
> You have to do this with a trigger. The problem is that the rule is
> expanded inline like a macro, so you can't prevent the behaviour you're
> seeing.

True, but you can get out of the hole in another way:

- Change the name of your table to "hidden_table".
- Create a view which is a duplicate of your table:
  CREATE VIEW visible_table AS SELECT * FROM hidden_table;

Your application now accesses its data without realizing it goes through a view. Now create a rule on this view to make it update the real hidden_table. As the rule does not apply to hidden_table, it won't recurse. (A sketch follows after the quoted message below.)

Other solution (similar to what Tom Lane proposed, I think): create a field common_id in your table, with:

- an insert trigger which puts a SERIAL default value if there is no parent, or copies the parent's value if there is one;
- an update trigger to copy the new parent's common_id whenever a child changes parent (if this ever occurs in your design).

Now create another table linking common_id to the 'common' value. Create a view which joins the two, which emulates your current behaviour. Create an ON UPDATE rule on the view which just changes one row in the link table.

If you do a lot of selects, solution #1 will be faster; if you do a lot of updates, #2 will win... Just out of curiosity, what is this for?

On Fri, 26 Nov 2004 16:34:48 -0500, Andrew Sullivan <[EMAIL PROTECTED]> wrote:

On Fri, Nov 26, 2004 at 01:03:38PM -0800, Jonathan Knopp wrote:

> UPDATE rules work perfectly for what I need to do except I need them to
> only run once, not try and recurse (which of course isn't allowed by
> postgresql anyway). Triggers seem a less efficient way to do the same
> thing, though I understand they would run recursively too. Here's the
> table structure in question:

You have to do this with a trigger. The problem is that the rule is expanded inline like a macro, so you can't prevent the behaviour you're seeing.

A
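[Solution #1 spelled out; mytable and the id/common columns are placeholders for the real schema:

-- rename the real table and put a look-alike view in its place
ALTER TABLE mytable RENAME TO hidden_table;
CREATE VIEW mytable AS SELECT * FROM hidden_table;

-- the rule fires on updates to the view and hits the hidden table once;
-- since no rule is defined on hidden_table itself, nothing recurses
CREATE RULE mytable_upd AS ON UPDATE TO mytable DO INSTEAD
    UPDATE hidden_table SET common = NEW.common WHERE id = OLD.id;
]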
Re: [SQL] Making dirty reads possible?
You can always use contrib/dblink, or a plperlu/plpythonu function which writes to a file...

> So, summarising:
> - Nested transactions are not (yet) supported
> - READ UNCOMMITTED isolation level is not (yet) supported
> - EXECUTE does not circumvent the transaction
>
> Is there a way around this? Regards, Ellert.
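[The dblink route works because the second connection commits independently of the calling transaction, so other sessions see the written rows immediately. A sketch, where the connection string and the progress table are made up and dblink_exec is the statement-executing function from contrib/dblink:

select dblink_exec('dbname=mydb',
                   'insert into progress values (now(), ''step 42 done'')');

The remote session commits the insert even if the surrounding transaction later rolls back, which is exactly the autonomous-transaction-like behaviour being asked for.]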
Re: [SQL] Formatting an Interval
Note that there will be a loss of precision, as an interval of 1 month, for instance, does not mean any specific number of days:

1 february + 1 month = 1 march   (1 month = 28 or 29 days)
1 december + 1 month = 1 january (1 month = 31 days)

Same for years etc.
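[To see it directly; any dates would do:

select '2004-02-01'::date + '1 month'::interval;  -- 2004-03-01 00:00:00 (29 days: leap year)
select '2004-12-01'::date + '1 month'::interval;  -- 2005-01-01 00:00:00 (31 days)
]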
Re: [SQL] [PERFORM] SQL Query Performance - what gives?
> The bitmask allows the setting of multiple permissions but the table
> definition doesn't have to change (well, so long as the bits fit into a
> word!) Finally, this is a message forum - the actual code itself is
> template-driven and the bitmask permission structure is ALL OVER the
> templates; getting that out of there would be a really nasty rewrite,
> not to mention breaking the user (non-developer, but owner)
> extensibility of the current structure. Is there a way to TELL the
> planner how to deal with this, even if it makes the SQL non-portable,
> or is a hack on the source mandatory?

You could use an integer array instead of a bit mask, make a GiST index on it, and instead of doing "mask & xxx" do "array contains xxx", which is indexable with GiST. The idea is that it can get much better row estimation. Instead of 1,2,3, you can use 1,2,4,8, etc. if you like. You'd probably need a function to convert a bitmask into ints and another to do the conversion back, so the rest of your app gets the expected bitmasks. Or add a bitmask type to postgres with proper statistics...
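[A sketch of the mask-to-array conversion plus the indexable query; the function name, forum_posts table, and perm_bits column are invented, and the function assumes a non-negative mask:

create or replace function mask_to_ints(mask integer) returns integer[] as $$
declare
    result integer[] := '{}';
    b integer := 0;
begin
    -- emit one power of two per set bit, so the array mirrors the mask
    while (mask >> b) > 0 loop
        if ((mask >> b) & 1) = 1 then
            result := result || (1 << b);
        end if;
        b := b + 1;
    end loop;
    return result;
end;
$$ language plpgsql immutable;

-- with contrib/intarray installed:
-- create index perms_idx on forum_posts using gist (perm_bits gist__int_ops);
-- select ... from forum_posts where perm_bits @> array[4];

Unlike "mask & 4 <> 0", the @> containment test is indexable and gives the planner array statistics to estimate row counts from.]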
[SQL] matching a timestamp field
Hello, Why is my SQL below accepted in 8.1.19 and refused in 8.4.9??? Is there something I have missed in the doc?

Welcome to psql 8.1.19, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

ansroc=# select * from s12hwdb where record ~ '2012-09-20' limit 5;
   host   | exchange |   rit   |  board   | var  | lceid | pceid | mnem  | eq | rtyp | rv |  cetype  |       record        | type | zone
----------+----------+---------+----------+------+-------+-------+-------+----+------+----+----------+---------------------+------+------
 and5032t | and5032t | 01a0301 | 21122994 | ebjb |       | 000c  | con3a | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0307 | 21406298 | aaca |       | 000c  | mmca  | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0309 | 21406298 | aaca |       | 000c  | mmca  | s  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0311 | 21407930 |      |       | 000c  | mmcb  | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0313 | 21407932 | abca |       | 000c  | mcud  | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
(5 rows)

ansroc=# \q

psql (8.4.9)
Type "help" for help.

ansroc=# select * from s12hwdb where record ~ '2012-09-20' limit 5;
ERROR:  operator does not exist: timestamp without time zone ~ unknown
LINE 1: select * from s12hwdb where record ~ '2012-09-20' limit 5;
                                           ^
HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.
ansroc=#

Pierre.
+32 471 68 12 23
Re: [SQL] matching a timestamp field
Hello, The solution I just found on the Net (thanks to Samuel Gendler) is to cast the timestamp to text:

ansroc=# select * from s12hwdb where record::text ~ '2012-09-20 11:50:02' limit 5;
   host   | exchange |   rit   |  board   | var  | lceid | pceid | mnem  | eq | rtyp | rv |  cetype  |       record        | type | zone
----------+----------+---------+----------+------+-------+-------+-------+----+------+----+----------+---------------------+------+------
 and5032t | and5032t | 01a0301 | 21122994 | ebjb |       | 000c  | con3a | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0307 | 21406298 | aaca |       | 000c  | mmca  | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0309 | 21406298 | aaca |       | 000c  | mmca  | s  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0311 | 21407930 |      |       | 000c  | mmcb  | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
 and5032t | and5032t | 01a0313 | 21407932 | abca |       | 000c  | mcud  | e  | ef03 | b1 | plce#xfx | 2012-09-20 11:50:02 | H    | a1
(5 rows)

But I still cannot find this in the doc.

Pierre.
+32 471 68 12 23
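[For what it's worth: the behaviour change is that PostgreSQL 8.3 removed most implicit casts to text, which is why "timestamp ~ unknown" worked on 8.1 and fails on 8.4. The record::text cast works, but it regex-matches every row as a string. If the goal is "rows at a given moment", an ordinary range comparison on the same table does the job, needs no cast, and can use an index on record:

select * from s12hwdb
where record >= '2012-09-20 11:50:02'
  and record <  '2012-09-20 11:50:03'
limit 5;
]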
Re: [SQL] Export tab delimited from mysql to postgres.
A tested example in Python: basically it counts the \t's and accumulates the lines until it has enough, and then prints the merged line. Note: as an exercise you could add a test so that there are exactly (columns-1) delimiters and not >= (columns-1).

def grouplines( in_stream, columns, delimiter ):
    num_delimiters = columns - 1
    accum = ''
    for line in in_stream:
        accum += line
        if accum.count( delimiter ) >= num_delimiters:
            print accum.replace( "\n", "\\n" )
            accum = ''
    if accum:
        print "Last line unterminated."

grouplines( open( 'data.in' ), 3, "\t" )

Input data (I added a column over your example):

1    What a day!    A
2    What a week it has
been!    B
3    What the!    C

Output:

1    What a day!    A\n
2    What a week it has\nbeen!    B\n
3    What the!    C

Have fun with your COPY!

On Tue, 12 Oct 2004 15:33:46 +1000, Theo Galanakis <[EMAIL PROTECTED]> wrote:

Thanks for all your comments. I have been trying the insert within a transaction block, however it does not seem to reduce the time it takes to process each record. Mind you, there are 80 columns and the insert statement explicitly defines the columns to insert into. I need any tip I can get to help me transform the text file into a format postgres COPY will successfully read. Here is a sample of the current format of a mysql tab-delimited dump:

columnA    columnB
1    What a day!
2    What a week it has
been!
3    What the!

As you can see, row 2 has a value that holds a CR, which ends up wrapping around onto the third line. Postgres' COPY command does not like this, and mysql is unable to replace the value with another type of delimiter, like a \r. So I gather I have to somehow manually replace the carriage return with something postgres understands, \r...

columnA    columnB
1    What a day!
2    What a week it has \r been!
3    What the!

How do I do this without getting a text file that looks like this?

1    What a day! \r\n2    What a week it has \r been!\r\n3    What the!\r\n

Any help would be appreciated.
Theo

-----Original Message-----
From: Christopher Browne [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 12 October 2004 10:46 AM
To: [EMAIL PROTECTED]
Subject: Re: [SQL] Export tab delimited from mysql to postgres.

Quoth [EMAIL PROTECTED] (Theo Galanakis):

> Could you provide an example of how to do this? I actually ended up
> exporting the data as Insert statements, which strips out cr/lf within
> varchars. However it takes an eternity to import 200,000 records... 24
> hours in fact. Is this normal?

I expect that this results from each INSERT being a separate transaction. If you put a BEGIN at the start and a COMMIT at the end, you'd doubtless see an ENORMOUS improvement. That's not even the _big_ improvement, either. The _big_ improvement would involve reformatting the data so that you could use the COPY statement, which is _way_ faster than a bunch of INSERTs. Take a look at the documentation to see the formatting that is needed:

http://techdocs.postgresql.org/techdocs/usingcopy.php
http://www.faqs.org/docs/ppbook/x5504.htm
http://www.postgresql.org/docs/7.4/static/sql-copy.html

--
output = ("cbbrowne" "@" "ntlug.org") http://www3.sympatico.ca/cbbrowne/lsf.html
Question: How many surrealists does it take to change a light bulb?
Answer: Two, one to hold the giraffe, and the other to fill the bathtub with brightly colored machine tools.
Re: [SQL] Storing properties in a logical way.
> But after looking closely at the list of possible properties, I found
> out that some of them depend on others. For example, if an item is a
> PDF document, it can have an index. But a document can also have an
> index with links. Logically, a property like 'index with links' doesn't
> belong in the verification table - it looks like a kind of composite
> field - 'index with links' is not a stand-alone property, but it also
> implies that the item has an 'index' property. On the other hand, it is
> impossible to decouple 'index' from 'with links', because the second
> part won't have any meaning without the first part.

You mean your properties would be better organized as a tree? Or is it even more complicated than that? If it's a tree, look at the ways of storing a tree in a table.
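[The simplest of those ways, for reference, is an adjacency list; the properties table and its columns are made up for illustration:

create table properties (
    prop_id   serial primary key,
    parent_id integer references properties (prop_id),  -- NULL for root properties
    name      text not null
);

-- 'index with links' stored as a child of 'index', so having the child
-- implies the parent by construction:
insert into properties (parent_id, name) values (null, 'index');
insert into properties (parent_id, name)
    select prop_id, 'index with links' from properties where name = 'index';

Walking up the parent_id chain then yields every property an item implicitly has.]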