Re: [GENERAL] Stored Procedure Assistance
On Tuesday 08 Jul 2003 5:55 pm, Bradley J. Bartram wrote:
> The first query is simple:
>   SELECT a FROM table_a WHERE column_a = b
> This will return a single row. The next query takes that derived value and does a similar select that returns multiple rows.
>   SELECT c FROM table_b WHERE column_b = a
> The next query has some logic in php that constructs it. Basically if c > 0, then the results of query 2 are set up as ORs in the WHERE clause.
>   SELECT d FROM table_c WHERE column_c = c1 OR column_c = c2, etc.
> The first two queries are not a problem, but how can I take the array of results from query 2 and put them into query 3?

Well, we can combine (1) and (2):

  SELECT c FROM table_b, table_a
  WHERE table_b.column_b = table_a.a AND table_a.column_a = b;

The third needs to be something like:

  SELECT d FROM table_c WHERE column_c IN (...the combined select above...)

So long as you don't have too many results this should work fine - no need for a stored procedure at all. If I've understood your problem, that should work.
--
Richard Huxton

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
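The combined form can be sketched end-to-end (an illustrative Python/sqlite3 mock-up; the table contents and the literal 42 standing in for the parameter b are invented, since the real schema isn't shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical stand-ins for table_a, table_b, table_c from the question.
cur.executescript("""
    CREATE TABLE table_a (a INTEGER, column_a INTEGER);
    CREATE TABLE table_b (c INTEGER, column_b INTEGER);
    CREATE TABLE table_c (d TEXT,    column_c INTEGER);
    INSERT INTO table_a VALUES (1, 42);
    INSERT INTO table_b VALUES (10, 1), (11, 1), (12, 99);
    INSERT INTO table_c VALUES ('x', 10), ('y', 11), ('z', 12);
""")

# All three steps collapsed into one statement: the inner select combines
# queries (1) and (2), and IN replaces the chain of ORs from query (3).
cur.execute("""
    SELECT d FROM table_c
    WHERE column_c IN (
        SELECT c FROM table_b, table_a
        WHERE table_b.column_b = table_a.a
          AND table_a.column_a = 42
    )
""")
print(sorted(row[0] for row in cur.fetchall()))
```

The IN subquery does exactly what the PHP-built OR chain did, but in a single round trip.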
[GENERAL] Is Postgres broken in Red Hat 9?
Hi, there is a small problem here I have, I would appreciate any suggestions. We were using Postgres 7.2.x and later 7.3.x under Red Hat 8, and everything worked fine. Clients are working under Windows and communicate with Postgres via ODBC. This weekend we tried to upgrade to Red Hat 9 (and PG 7.3.3 shipped with it), but we just couldn't connect to the server through ODBC. Postgres starts fine, I can access it through psql, but we simply can't access it from client machines. It is configured properly (it should accept tcp/ip connections, and appropriate rights are given). I simply can't find what is going wrong. I even tried putting in older (Postgres-made) rpms, but the problem persists. Are there any similar experiences, is this a Red Hat bug, has it closed something somehow, or where should I start looking? Tnx. Dragan
Re: [GENERAL] Is Postgres broken in Red Hat 9?
On Tue, 8 Jul 2003, Dragan Matic wrote:
> This weekend we tried to upgrade to Red Hat 9 (and PG 7.3.3 shipped with it), but we just couldn't connect to the server through ODBC. Postgres starts fine, I can access it through psql, but we simply can't access it from client machines.

Can you connect to it locally by using your machine's IP address? I.e. if your machine's eth0 sits on 10.0.0.2, does psql -h 10.0.0.2 work? You may have a firewall set up to block all ports by default. firewall-config or something like it was the name of the firewall config util in 7.2.
Re: [GENERAL] Is Postgres broken in Red Hat 9?
On Tuesday 08 Jul 2003 6:38 pm, Dragan Matic wrote:
> This weekend we tried to upgrade to Red Hat 9 (and PG 7.3.3 shipped with it), but we just couldn't connect to the server through ODBC. Postgres starts fine, I can access it through psql, but we simply can't access it from client machines.

1. As root, try the following to see if PG is listening:

  lsof -i | grep postgres

You should see a postgres process with a TCP connection in (LISTEN) mode.

2. Try the following from another one of your PCs to make sure there's not a firewall in the way (open a DOS prompt):

  telnet myPGmachine 5432

Hit return twice and you should see an error message like:

  FATAL: invalid length of startup packet

3. Turn on logging in the ODBC driver settings and see if there's anything useful there.

If nothing leaps out at you, try back here. HTH
--
Richard Huxton
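Step 2 can also be scripted. Here is a small sketch of the same reachability check the telnet test performs, using Python's socket module (the hostname in the example is hypothetical, as above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds,
    i.e. something is listening and no firewall is in the way."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("myPGmachine", 5432)  (hypothetical hostname)
```

A refused or timed-out connection here, while psql works locally, points at a firewall or at the postmaster not listening on TCP/IP rather than at ODBC itself.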
Re: [GENERAL] Is Postgres broken in Red Hat 9?
On Tuesday 08 Jul 2003 6:38 pm, Dragan Matic wrote:
> We were using Postgres 7.2.x and later 7.3.x under Red Hat 8, and everything worked fine. Clients are working under Windows and communicate with Postgres via ODBC.

Oh - one thing you need to know - there was an extra column introduced to the pg_hba.conf file; I think that might have been between 7.2 and 7.3.
--
Richard Huxton
Re: [GENERAL] Is Postgres broken in Red Hat 9?
On Tue, 8 Jul 2003 11:40:15 -0600 (MDT) scott.marlowe [EMAIL PROTECTED] wrote:
> You may have a firewall set up to block all ports by default. firewall-config or something like it was the name of the firewall config util in 7.2.

in recent redhats, this is likely it.

  sudo /sbin/ipchains -L

to see the current ipchains firewall rules,

  sudo /sbin/iptables -L

for iptables. i've not run RH 9 yet, so i don't know what the defaults are here, but i've had this problem on slightly older releases, so there are no surprises if this is the problem. you could learn enough about iptables/ipchains to punch a hole, but to make sure that this will solve the problem, try shutting the whole thing down first:

  sudo /etc/rc.d/init.d/ipchains stop
  sudo /etc/rc.d/init.d/iptables stop

after that, see if the network clients can connect.

richard
--
Richard Welty [EMAIL PROTECTED] Averill Park Networking 518-573-7592
Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security
Re: [GENERAL] SQL Functions and plan time
Richard Huxton [EMAIL PROTECTED] writes:
> Can I ask why, since the plan is constructed at query-time, the parameters aren't substituted *before* planning?

Because then the plan couldn't be re-used. A SQL function may be executed many times in a query, so the plan has to be reusable. (Or, if you prefer, we have query-level caching of SQL function plans.)

regards, tom lane
Re: [GENERAL] Transaction Blocks
On 2003.07.07 11:56 Mat wrote:
> I believe that it's more efficient to group INSERTs together and COMMIT them in groups. However, I am automatically entering a lot of data and some of it is not unique. How can I reap the benefits of using BEGIN and COMMIT without the whole block of transactions failing if one is duplicated?

Best I could guess would be that you'll need to pre-process the data to ensure that the data is good.

Take care, Jay
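One way to do the pre-processing Jay suggests is to drop duplicate keys before the batch runs. A sketch with Python and sqlite3 on a made-up table (the idea carries over directly to batched INSERTs inside one BEGIN/COMMIT in PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")

# Hypothetical incoming feed with duplicate keys (1 and 2 repeat).
incoming = [(1, "a"), (2, "b"), (2, "b"), (3, "c"), (1, "dup")]

# De-duplicate on the unique key first, so no single bad row can
# abort the whole transaction block.
seen = set()
clean = []
for key, val in incoming:
    if key not in seen:
        seen.add(key)
        clean.append((key, val))

with conn:  # one transaction: BEGIN ... COMMIT around the whole batch
    conn.executemany("INSERT INTO items VALUES (?, ?)", clean)

print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])
```

For duplicates already present in the target table, the same idea applies: query the existing keys first and filter them out of the batch before the INSERTs begin.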
Re: [GENERAL] Website (DNS) is hosed!!!
On Tue, 8 Jul 2003, Andrew Sullivan wrote:
> On Mon, Jul 07, 2003 at 08:39:13PM -0500, Daniel Armbrust wrote:
> > Do you realize your entire website is hosed? NONE of the download sites will resolve, and NONE of the mirror sites work (or will resolve). What is up?
> Maybe you have a bad network connection or stale DNS records? It works fine for me.

we have a script that regenerates the postgresql.org DNS based on the mirrors database ... it runs fine from the command line, but I think it might have had PATH problems when we ran it from cron ... have corrected for that by hard-coding everything, so hopefully all's well tonight when it runs
Re: [GENERAL] PG crash on simple query, story continues
Hi, I have a very interesting suspicion. See my comments (marked with !) on convert_string_datum:

  /* ! this is my case */
  if (!lc_collate_is_c())
  {
      /* Guess that transformed string is not much bigger than original */
      xfrmsize = strlen(val) + 32;    /* arbitrary pad value here... */
      ! I would say very interesting approach - why not just
      !   xfrmsize = strxfrm(xfrmstr, NULL, 0);
      xfrmstr = (char *) palloc(xfrmsize);        /* ! fine */
      xfrmlen = strxfrm(xfrmstr, val, xfrmsize);  /* ! fine */
      ! if an error happened, xfrmlen will be (size_t)-1
      if (xfrmlen >= xfrmsize)
      {
          ! yep, didn't make it
          /* Oops, didn't make it */
          pfree(xfrmstr);
          ! what are we allocating here? 0 bytes
          xfrmstr = (char *) palloc(xfrmlen + 1);
          ! BOOM
          xfrmlen = strxfrm(xfrmstr, val, xfrmlen + 1);
      }
      pfree(val);
      val = xfrmstr;
  }

-Original Message-
From: Maksim Likharev
Sent: Tuesday, July 08, 2003 9:35 AM
To: 'Tom Lane'
Subject: RE: [GENERAL] PG crash on simple query, story continues

After upgrading to 7.3.3 we have the following: signal 11

#0 0x254f38 in pfree ()
#1 0x1fde44 in convert_to_scalar ()
#2 0x1faafc in scalarineqsel ()
#3 0x1fd574 in mergejoinscansel ()
#4 0x14fec8 in cost_mergejoin ()
#5 0x16b820 in create_mergejoin_path ()
#6 0x155048 in sort_inner_and_outer ()
#7 0x154dd0 in add_paths_to_joinrel ()
#8 0x1567cc in make_join_rel ()
#9 0x15669c in make_jointree_rel ()
#10 0x14dd28 in make_fromexpr_rel ()
#11 0x14d6d0 in make_one_rel ()
#12 0x15d328 in subplanner ()
#13 0x15d218 in query_planner ()
#14 0x15f29c in grouping_planner ()
#15 0x15d93c in subquery_planner ()
#16 0x15d5e4 in planner ()
#17 0x1a6a94 in pg_plan_query ()
#18 0x1a712c in pg_exec_query_string ()
#19 0x1a8fd8 in PostgresMain ()
#20 0x172698 in DoBackend ()
#21 0x171ac4 in BackendStartup ()
#22 0x16ff14 in ServerLoop ()
#23 0x16f780 in PostmasterMain ()
#24 0x128e60 in main ()

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 07, 2003 10:14 PM
To: Maksim Likharev
Subject: Re: [GENERAL] PG crash on 
simple query, story continues

Maksim Likharev [EMAIL PROTECTED] writes:
> SELECT p.docid FROM prod.t_documents AS p
>   INNER JOIN t_tempdocs AS t ON p.docid = t.docid
>   LEFT OUTER JOIN prod.t_refs AS ct ON ct.docid = p.docid;
> here is a stack trace:
> 00252174 AllocSetAlloc (3813b0, 15, 251fe0, 20, 0, ffbee2f8) + 194
> 002532e4 MemoryContextAlloc (3813b0, 15, 11, 7efefeff, 81010100, ff00) + 68
> 0020dc0c varcharin (ffbee378, ffbee378, 20dae4, 0, 0, ffbee3f0) + 128
> 00243570 FunctionCall3 (ffbee4a8, 3c1ce8, 0, 324, 0, ffbee5c4) + 11c
> 0023e6c4 get_attstatsslot (3d6410, 413, 324, 2, 0, ffbee5c4) + 2b0
> 001f8cb4 scalarineqsel (3bb978, 42a, 0, 3bffa8, 40f0e8, 413) + 288
> 001fb824 mergejoinscansel (3bb978, 3c0080, 3c0968, 3c0970, 0, 1) + 23c

Hmm, it would seem there's something flaky about your pg_statistic entries. Could we see the pg_stats rows for the columns mentioned in this query?

regards, tom lane
Re: [GENERAL] PG crash on simple query, story continues
Maksim Likharev [EMAIL PROTECTED] writes:
> ! I would say very interesting approach - why not just
> !   xfrmsize = strxfrm(xfrmstr, NULL, 0);

strxfrm doesn't work that way (and if it did, it would give back a malloc'd not a palloc'd string).

> ! if an error happened, xfrmlen will be (size_t)-1

No it won't; see the man page for strxfrm. This does raise an interesting thought though: what platform are you on? It seems to me that we've heard of buggy versions of strxfrm that write more bytes than they're allowed to, thereby clobbering palloc's data structures.

regards, tom lane
Re: [GENERAL] PG crash on simple query, story continues
> > ! if an error happened, xfrmlen will be (size_t)-1
> No it won't; see the man page for strxfrm.

From the man page here:

  RETURN VALUES: Upon successful completion, strxfrm() returns the length of the transformed string (not including the terminating null byte). If the value returned is n or more, the contents of the array pointed to by s1 are indeterminate. On failure, strxfrm() returns (size_t)-1.

but you are right, it is strxfrm() that writes more than allowed, most likely in the following condition: strxfrm(xfrmstr, val, 0) writes a null terminator extra. I am on SunOS 5.8; BTW on Linux it works.
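For reference, the guess-then-retry sizing pattern under discussion can be exercised from Python through ctypes (a sketch, not PostgreSQL's actual code path; it assumes a Unix-like libc and the default C collation, under which strxfrm simply copies the string):

```python
import ctypes

libc = ctypes.CDLL(None)  # resolve strxfrm from the already-loaded libc
libc.strxfrm.restype = ctypes.c_size_t
libc.strxfrm.argtypes = [ctypes.POINTER(ctypes.c_char),
                         ctypes.c_char_p, ctypes.c_size_t]

def transform(val: bytes) -> bytes:
    # Guess that the transformed string is not much bigger than the original.
    xfrmsize = len(val) + 32
    buf = ctypes.create_string_buffer(xfrmsize)
    xfrmlen = libc.strxfrm(buf, val, xfrmsize)
    if xfrmlen >= xfrmsize:
        # Guess was too small; strxfrm returned the space actually needed,
        # so retry with an exactly-sized buffer. This retry is the path
        # where a strxfrm that writes one byte past its limit would
        # clobber the allocator's bookkeeping.
        buf = ctypes.create_string_buffer(xfrmlen + 1)
        xfrmlen = libc.strxfrm(buf, val, xfrmlen + 1)
    return buf.value

print(transform(b"hello"))
```

In a non-C locale the first call can come back larger than the guess, which is exactly when the exact-size retry (and any off-by-one bug in libc) comes into play.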
[GENERAL] C Triggers Cancelling Transactions?
My initial impression is that AFTER triggers written in C cannot abort the wrapping transaction if an error occurs. Is this correct?

---
Clay
Cisco Systems, Inc. [EMAIL PROTECTED] (972) 813-5004
Re: [GENERAL] C Triggers Cancelling Transactions?
On Tue, 8 Jul 2003, Clay Luther wrote:
> My initial impression is that AFTER triggers written in C cannot abort the wrapping transaction if an error occurs. Is this correct?

After triggers can still raise an exception condition to abort the transaction with elog. They can't quietly change or ignore the action like before triggers do, however.
[GENERAL] Statistics on a table
Hi, I just found a very interesting situation: the statistics (n_distinct in particular) for one of my columns are greatly under-calculated. The record says 49726, but in reality it is 33409816. So the planner never chooses the index, using a table scan instead, and the query never returns. Is there any way I can improve that? I can turn seqscan off, but is it safe?
[GENERAL] where is the list of companies that provide commercial support?
The link at the end of 1.6 in the faq does not work.
Re: [pgsql-advocacy] [HACKERS] [GENERAL] Postgresql AMD x86-64
The s_lock.h change will be in 7.4.

--- Tom Lane wrote:
> Martin D. Weinberg [EMAIL PROTECTED] writes:
> > I didn't change the source tree at all. I used:
> >   env CFLAGS='-O3 -m64' LD='/usr/bin/ld -melf_x86_64' ./configure --with-CXX --without-zlib
> BTW, see Jeff Baker's nearby report in pgsql-general that s_lock.h needs to be tweaked to use spinlocks on this platform. If you're using semaphores instead then you're taking a big performance hit.
> regards, tom lane

--
Bruce Momjian | http://candle.pha.pa.us
[EMAIL PROTECTED] | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073
Re: [pgsql-advocacy] [HACKERS] [GENERAL] Postgresql AMD x86-64
Interesting. The compiler doesn't do x86_64 by default --- you have to enable it in the compile. Any idea how to handle this in our builds? It doesn't seem like a property of the OS as much as a property of the compiler --- we already do 64-bit on some platforms without flags. What is the full ac_cv_host value in config.log? ac_cv_host=i386-pc-bsdi4.3.1 I assume there is something special in the first field before the dash that could trigger these compile/link flags automatically. As I already mentioned, the s_lock.h changes will appear in 7.4 and are in CVS now.

--- Martin D. Weinberg wrote:

Bruce, I didn't change the source tree at all. I used:

  env CFLAGS='-O3 -m64' LD='/usr/bin/ld -melf_x86_64' ./configure --with-CXX --without-zlib

with the experimental gcc and binutils from www.x86-64.org. I needed --without-zlib because I don't have a 64-bit compile yet for zlib. Then:

  make
  make install-all-headers
  make check CC='gcc -m64'

The last one is needed to make sure that the shared library gets made in the amd64 architecture. That's it!

On Tue, 10 Jun 2003 14:14:22 -0400 (EDT) Bruce Momjian [EMAIL PROTECTED] wrote: Can you send us a patch?

--- Martin D. Weinberg wrote:

Hi folks, We recently built a dual K8D-based Opteron box running Linux in 64-bit mode (Debian 'testing' distribution with newly compiled binutils, gcc, and various support libraries for the amd64 architecture). The Postgres 7.3.3 port was simply a matter of setting the appropriate flags to take care of the biarchitecture nature of the Linux port (that is, -m64 to generate 64-bit code and either gcc -m64 or ld -melf_x86_64 for linking). There were no other issues in the compile. In the install, I had to re-init due to the incompatibility of pg_control. All the regression tests went smoothly (the one failure was in geometry and is due to round-off in the least sig figs of the doubles in the Point structure or machine zero differences).
I compared a simple query on local data in both 32bit mode and 64bit mode; the execute time difference was not significant but this was not a compute intensive verification (summing up column values in a table). We have some other 32-bit amd machines here; I would be happy to try a few other tests. Good job, developers!!! On Mon, 07 Apr 2003 18:34:05 +0800 Justin Clift [EMAIL PROTECTED] wrote: Hi guys, Does anyone want remote access to the upcoming AMD 64 bit architecture, to make sure PostgreSQL runs well on it? It's only via remote access at present, but the AMD guys are willing to help us out here. Regards and best wishes, Justin Clift Original Message Subject: RE: Postgresql AMD x86-64 Date: Fri, 4 Apr 2003 10:29:24 -0800 From: [EMAIL PROTECTED] To: [EMAIL PROTECTED] CC: [EMAIL PROTECTED], [EMAIL PROTECTED] Justin, I apologize for the delayed response. Unfortunately, at the moment I don't have a system available to send you. If I could get you access to a machine remotely, would that be useful to you? I will need to check machine availability before I can promise you anything, but I'm willing to be your sponsor in the AMD Developer Center and approve a request for access. -Original Message- From: Justin Clift [mailto:[EMAIL PROTECTED] Sent: Monday, March 10, 2003 7:31 PM To: Andreas Jaeger Subject: Re: AMD x86-64 Hi Andreas, Have you heard anything back from the AMD guys in relation to this? We've not heard a single thing from them. :-( Regards and best wishes, Justin Clift Andreas Jaeger wrote: Justin Clift [EMAIL PROTECTED] writes: snip Yep, the aim is to allow PostgreSQL developers access to a system running x86-64 hardware as needed. Trying to get ahead of the ballgame these days. :) If you have hammer Hardware, I can provide you with a prerelease of our software, That would be cool Andreas, thanks. Now, just need to secure the hardware somehow. Personally, I feel that an email forwarded from you to the right people at AMD may help that significantly. 
At least, people from AMD should get in contact with us to see if something beneficial can be arranged. Ok, I forwarded your note and let's see whether they're interested (there're already a few commercial databases like IBM DB2 ported). From past experience, it might be difficult to get hardware directly but let's wait for their answer. If you don't hear anything this week, feel free to ask me
Re: [GENERAL] PG crash on simple query, story continues
Maksim Likharev [EMAIL PROTECTED] writes:
> On failure, strxfrm() returns (size_t)-1.

Not according to the Single Unix Specification, Linux, or HP-UX; I don't have any others to check. But anyway, that is not causing your problem, since palloc(0) would complain, not dump core.

> I am on SunOS 5.8,

Solaris, eh? IIRC, it was Solaris that we last heard about broken strxfrm on. Better check to see if Sun has a fix for this.

regards, tom lane
Re: [GENERAL] PG crash on simple query, story continues
I would prefer to dump that gar.xxg and put PG on Linux, but this is not up to me. Thanks for the help.
[GENERAL] DISTINCT vs EXISTS performance
I have a query where I want to select the usertable records that have a matching entry in an event table. There are two ways to do this:

1) SELECT COUNT(DISTINCT u.uid) FROM usertable u, eventlog e
   WHERE u.uid = e.uid AND e.type = XX;

2) SELECT COUNT(u.uid) FROM usertable u
   WHERE EXISTS (SELECT 1 FROM eventlog e WHERE u.uid = e.uid AND e.type = XX);

In a real-life query, query 1 took 46284.58 msec and 45856.66 msec for the first and second runs, and query 2 took 38736.77 msec and 32833.08 msec. Here are the explain analyse outputs:

QUERY PLAN (query 1)
Aggregate (cost=180116.76..180116.76 rows=1 width=20) (actual time=46267.93..46267.93 rows=1 loops=1)
  -> Nested Loop (cost=195.13..180116.75 rows=1 width=20) (actual time=46094.32..46265.81 rows=8 loops=1)
    -> Nested Loop (cost=195.13..180112.96 rows=1 width=16) (actual time=46064.86..46189.25 rows=8 loops=1)
      -> Hash Join (cost=195.13..179856.40 rows=43 width=12) (actual time=46054.33..46153.15 rows=8 loops=1)
        Hash Cond: ("outer".typeid = "inner".id)
        -> Seq Scan on eventlog (cost=0.00..174675.16 rows=664742 width=8) (actual time=357.04..45349.36 rows=580655 loops=1)
          Filter: (type = 4)
        -> Hash (cost=194.97..194.97 rows=65 width=4) (actual time=21.83..21.83 rows=0 loops=1)
          -> Index Scan using clickthru_jid_and_id_key on clickthru (cost=0.00..194.97 rows=65 width=4) (actual time=21.47..21.71 rows=63 loops=1)
            Index Cond: (jobid = 7899)
      -> Index Scan using usertable_pkey on usertable u (cost=0.00..5.91 rows=1 width=4) (actual time=4.50..4.50 rows=1 loops=8)
        Index Cond: (("outer".uid = u.userkey) AND (u.podkey = 259))
    -> Index Scan using d_pkey on d (cost=0.00..3.78 rows=1 width=4) (actual time=9.56..9.56 rows=1 loops=8)
      Index Cond: ("outer".uid = d.ukey)
Total runtime: 46284.58 msec
(15 rows)

QUERY PLAN (query 2)
Aggregate (cost=4325054.14..4325054.14 rows=1 width=8) (actual time=38736.62..38736.62 rows=1 loops=1)
  -> Nested Loop (cost=0.00..4325052.01 rows=852 width=8) (actual time=12451.09..38736.58 rows=6 loops=1)
    -> Index Scan using usertable_podkey_key on usertable u (cost=0.00..4321822.44 rows=852 width=4) (actual time=12450.95..38735.85 rows=6 loops=1)
      Index Cond: (pkey = 259)
      Filter: (subplan)
      SubPlan
        -> Nested Loop (cost=0.00..2532.25 rows=1 width=8) (actual time=49.59..49.59 rows=0 loops=752)
          -> Index Scan using eventlog_uid_and_jid_and_type_key on eventlog (cost=0.00..2343.37 rows=62 width=4) (actual time=29.64..48.91 rows=4 loops=752)
            Index Cond: (uid = $0)
            Filter: (type = 4)
          -> Index Scan using clickthru_pkey on clickthru (cost=0.00..3.02 rows=1 width=4) (actual time=0.18..0.18 rows=0 loops=2725)
            Index Cond: ("outer".typeid = clickthru.id)
            Filter: (jobid = 7899)
    -> Index Scan using directory_pkey on d (cost=0.00..3.78 rows=1 width=4) (actual time=0.11..0.11 rows=1 loops=6)
      Index Cond: (d.ukey = "outer".userkey)
Total runtime: 38736.77 msec
(16 rows)

... so the questions are: Why are the plans so different? Why does the planner think query 2 will take so long when it ends up being faster than query 1? Is there anything I can do to speed up the query? Version is 7.3.3. BTW does this belong on the performance list or is that list about tuning the database config parameters only?
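The two formulations can be checked for equivalence on a toy dataset (a sketch with Python's sqlite3 and invented rows; sqlite's planner of course differs from PostgreSQL's, so this only demonstrates that both queries compute the same answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE usertable (uid INTEGER PRIMARY KEY);
    CREATE TABLE eventlog  (uid INTEGER, type INTEGER);
    INSERT INTO usertable VALUES (1), (2), (3), (4);
    -- user 1 has two matching events, user 3 has one, user 4 has none
    INSERT INTO eventlog VALUES (1, 4), (1, 4), (3, 4), (4, 7);
""")

q1 = """SELECT COUNT(DISTINCT u.uid) FROM usertable u, eventlog e
        WHERE u.uid = e.uid AND e.type = 4"""
q2 = """SELECT COUNT(u.uid) FROM usertable u
        WHERE EXISTS (SELECT 1 FROM eventlog e
                      WHERE u.uid = e.uid AND e.type = 4)"""

c1 = conn.execute(q1).fetchone()[0]
c2 = conn.execute(q2).fetchone()[0]
print(c1, c2)  # both count users 1 and 3
```

Since DISTINCT must deduplicate the join output while EXISTS can stop at the first matching event per user, the EXISTS form gives the planner more freedom, which matches the observed timings.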
Re: [GENERAL] Restoring a postgres database
--- Timothy Brier [EMAIL PROTECTED] wrote:
> Hi, I've run into this problem with a database we are working on. When I restore a database schema, I need to restore the schema 5 times to ensure that the schema is complete. Also some of the sequences are not restored in a usable form. E.g. if my next sequence value should be 1000, my sequence is set to 1 and I need to run a query to reset my sequences. I have restored other simpler databases in PostgreSQL without a problem. It is my view that this issue is caused by a dependency issue, because the items that don't get restored the first or second time complain that a dependency on a function doesn't exist, but all is fine after the 5th attempt. I do two pg_dumps:
>
>   pg_dump -Cs databasename | gzip -cv > databasenameschemammdd.gz
>   pg_dump -Ca databasename | gzip -cv > databasenamedatammdd.gz
>
> The database contains 64 tables, 34 views, 244 user functions, 34 rules, 87 triggers, 202 indexes and 70 sequences. We are also using inheritance in the database. The schemas are standard schemas created by PostgreSQL. I have also used the ability of pg_dump to create a schema and data in a tar format, but cannot get it to restore the schema from the tar. It always complains about the functions for plpgsql already existing and stops. No problem restoring the data from the tar. I am using PostgreSQL 7.3.3 on RedHat 7.1 - 9.0. The database itself is great, we've converted a few projects from MSSQL to PostgreSQL, but I am concerned about the integrity of restoring the data. Does anyone know if this will be improved in 7.4? Is there a better way to do a backup? To the developers, support team and the community: keep up the good work. Timothy Brier.

I ran into a situation similar to yours regarding tables with foreign references and escalation rules. I've noticed that tables seem to get dumped/restored in the order in which they were created. To fix my problem, I rearranged the table order in my schema files. Since the tables were then created in the correct order, subsequent dumps/restores have gone smoothly. (I hope it wasn't just dumb luck.)

I dump the schema separately from the data. I have a python script that separates the table creation statements into one schema file and the index and constraint creation statements into a second schema file. This allows me to recreate the tables, restore the data, and then recreate indexes and constraints. I figure if the data does not comply with the constraints, the dump was bad anyway. (This has yet to occur.)

I can't help with the sequence field problem; but I hope you're not having to restore too often.

Best of luck, Andrew Gould
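A minimal sketch of the kind of splitter described (pure Python with a very naive statement classifier; Andrew's actual script is not shown, and a real one would need to handle function bodies, quoting, and comments):

```python
def split_schema(sql: str) -> tuple[list[str], list[str]]:
    """Split dumped DDL into (table statements, index/constraint statements)
    so tables can be created first, data loaded, then indexes rebuilt."""
    tables, constraints = [], []
    # Naive split on ';' - good enough for plain CREATE statements only.
    for stmt in (s.strip() for s in sql.split(";")):
        if not stmt:
            continue
        head = stmt.upper()
        if head.startswith("CREATE INDEX") or head.startswith("ALTER TABLE"):
            constraints.append(stmt + ";")
        else:
            tables.append(stmt + ";")
    return tables, constraints

# Illustrative DDL, standing in for a pg_dump schema file.
ddl = """
CREATE TABLE t1 (id integer primary key);
CREATE TABLE t2 (id integer, t1_id integer);
CREATE INDEX t2_t1_idx ON t2 (t1_id);
ALTER TABLE t2 ADD FOREIGN KEY (t1_id) REFERENCES t1 (id);
"""
tables, constraints = split_schema(ddl)
print(len(tables), len(constraints))
```

Restoring the two halves around the data load sidesteps both the dependency-ordering problem and the cost of maintaining indexes during the bulk load.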
Re: [GENERAL] Restoring a postgres database
Andrew Gould wrote:
> I ran into a situation similar to yours regarding tables with foreign references and escalation rules. I've noticed that tables seem to get dumped/restored in the order in which they were created. To fix my problem, I rearranged the table order in my schema files. [...]

Thanks for the reply. I don't do a lot of restores. But I would like to see the issue addressed so it would be easier for other people who use PostgreSQL and need to do a restore without jumping through these hoops. At the same time I realize and appreciate the hard work that has gone into this DB and that there are other priorities.

Tim.
[GENERAL] Native dataprovider on Windows
Hi, are there any Windows-based native data providers for PostgreSQL that can be used from .Net applications, other than Mono? regards - adivi
[GENERAL] PostgreSQL Advocacy Fund and New Banner Ads
POSTGRESQL ADVOCACY FUND

Robert Treat has been selected by the Core Team as Treasurer for our new PostgreSQL Advocacy Fund. Robert will soon be setting up an account in the U.S. for receiving donations for the promotion of PostgreSQL. This fund will be used primarily to print promotional materials and provide travel and trade show funding, and will be managed by the PostgreSQL Advocacy team, with oversight by the core group. Once the fund is ready, we will make an announcement explaining how to donate.

NEW BANNER AD POLICY FOR POSTGRESQL.ORG

You will have noticed the two banner advertisements on each PostgreSQL.org web site. The PostgreSQL Core Team has decided on a new policy whereby these ads will benefit the project and our community.

One advertisement is a Sponsorship Banner Ad. It will be advertised to commercial companies who wish to reach the rather specialized target market of PostgreSQL users and developers. The revenue from this ad will be split: 50% will go into the PostgreSQL Advocacy Fund, and the other 50% will go to Hub.org to compensate hosting costs for the PostgreSQL.org domain. If you work for a computer hardware, software, or services company which might be interested in supporting PostgreSQL and reaching an audience of programmers and database administrators, please contact Hub.org about posting an ad. Details are at: http://www.postgresql.org/sponsor.html

The second advertisement is a Free Open Source Banner Ad, which is available to other Open Source projects which relate to PostgreSQL in some way. For those of you who participate in other OSS projects, you are encouraged to design and submit ads for this spot. For details, see: http://www.postgresql.org/project.html