Re: [GENERAL] target list evaluation wrt sequences
> elein <[EMAIL PROTECTED]> wrote:
>> Is the order of the target list guaranteed?

AFAIR, all current and past Postgres versions evaluate target lists left-to-right. This is not guaranteed to remain true forever, since neither the SQL spec nor our own docs promise it anywhere... but offhand I can't think of a reason to break it.

Bruno Wolff III <[EMAIL PROTECTED]> writes:
> No. You can effectively do this by joining a select nextval to whatever
> your main select is. Something like:
> insert into ...
> select a.n as a, a.n as b,
> from (select nextval('n') as n) as a, lalala

Urgh ... I'd not want to promise that nextval() will always be evaluated just once in the above example ... this really seems *much* more fragile than assuming left-to-right targetlist evaluation :-(

regards, tom lane

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
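For readers who want to see Bruno's subselect approach spelled out, a minimal sketch follows (the sequence `n` and table `t(a, b)` are hypothetical names for illustration). Note Tom's caveat above: single evaluation of the nextval() in the subselect is not actually promised either.

```sql
-- Hypothetical sequence "n" and target table "t(a, b)".
-- The subselect is intended to call nextval() once and reuse the value
-- for both target columns -- but, as discussed above, that behavior is
-- not guaranteed.
INSERT INTO t (a, b)
SELECT s.n, s.n
FROM (SELECT nextval('n') AS n) AS s;
```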
Re: [GENERAL] TCP only listening on localhost???
Richard Gass <[EMAIL PROTECTED]> writes: > This was all working fine until recently. It has been a long time since I > have restarted the server ( 5:05pm up 377 days, 18:36, 7 users, load > average: 0.00, 0.00, 0.00) and it has been a long time since I restarted the > postgres DB. I wonder if some weirdness is happening which isn't allowing > postgres to bind to tcp port 5432. This seems really, really odd. Have you touched the machine's firewall configuration (iptables or whatever) lately? regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Unable to use index?
Edmund Dengler <[EMAIL PROTECTED]> writes: > Hmm, interesting as I have that table clustered starting with the > rep_component, so 'ps_probe' will definitely appear later in a sequential > scan. So why does it force the use of the index? It does not "force" anything, it simply alters the cost estimates. The seqscan-based plan requires an extra sort step to meet the ORDER BY, while the indexscan plan does not. In this particular scenario the indexscan plan is estimated to beat seqscan+sort, but in other cases the opposite decision might be made. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] postgresql idle
Alvaro Herrera <[EMAIL PROTECTED]> writes: > Several things. I think I wrote them along with my previous patch. The > visibility rules and the pg_clog protocol are what comes to mind > immediately. This is the difficult part. Difficult part? I think those are easy --- they are narrow and already solved-in-principle problems. What I do not understand is how you are going to handle error recovery and undo in general. Every single backend module that has any at-abort or at-commit cleanup is going to need work to extend its data structures to handle subtransactions. That seems like a major mess :-( regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] postgresql idle
Alvaro Herrera <[EMAIL PROTECTED]> writes: >> Yeah. We agreed in principle awhile back to "fix" this on the backend >> side by postponing the actual transaction start until the first command >> after BEGIN. > Actually, my patch is waiting for you to review it ;-) On the other > hand, since I'm already touching that code, maybe I can include it in my > patch. Or would you prefer to keep those things separate? I'd opt for keeping it separate I think ... regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Plpgsql problem passing ROWTYPE to function
"Karl O. Pinc" <[EMAIL PROTECTED]> writes: > What a drag. The documentation says it supports this. No it doesn't ... > "Parameters to a function can be composite types (complete table > rows). In that case, the corresponding identifier $n will be a row > variable, and fields can be selected from it, for example $1.user_id." That says that a parameter passed *into* a plpgsql function can be a rowtype, not that plpgsql supports doing something with the whole-row variable in function calls it makes. I agree it's a drag :-( regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
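To illustrate the distinction Tom is drawing, here is a hedged sketch of what the quoted documentation *does* promise (table and column names are made up for illustration): a composite-type parameter passed *into* a plpgsql function can have its fields selected, even though plpgsql cannot pass a whole-row variable onward in the function calls it makes.

```sql
-- Illustrative only: "users" is a hypothetical table whose rowtype is
-- used as the parameter type.
CREATE FUNCTION get_id(users) RETURNS integer AS '
BEGIN
    -- Selecting a field from the rowtype parameter is supported ...
    RETURN $1.user_id;
    -- ... but passing $1 as a whole to another function is what fails.
END;
' LANGUAGE 'plpgsql';
```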
Re: [GENERAL] postgresql idle
On Thu, 29 Apr 2004, Andrew Rawnsley wrote: > > I find that some clients (DBVisualizer for one) do exactly that - > execute the COMMIT;BEGIN sequence, and leaves idle > transactions on a consistent basis. > The 7.5 JDBC driver has been fixed to avoid this problem. Kris Jurka ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] postgresql idle
On Thu, Apr 29, 2004 at 10:31:07PM -0400, Bruce Momjian wrote: > Alvaro Herrera wrote: > > > > Yeah. We agreed in principle awhile back to "fix" this on the backend > > > > side by postponing the actual transaction start until the first command > > > > after BEGIN. I looked at this just before 7.4 feature freeze, but > > > > decided it wasn't quite trivial and I hadn't time to make it happen. > > > > No one's gone back to work on it during the 7.5 cycle either. > > > > > > > > Right now I'm not wanting to touch that code since both Alvaro and the > > > > 2PC guy have open patches against it... > > > > Actually, my patch is waiting for you to review it ;-) On the other > > hand, since I'm already touching that code, maybe I can include it in my > > patch. Or would you prefer to keep those things separate? > > Alvaro, can I ask what is left? Several things. I think I wrote them along with my previous patch. The visibility rules and the pg_clog protocol are what comes to mind immediately. This is the difficult part. > I know you have pg_subtrans, but what plans do you have to abort > subtransactions and bring the system back to the state before the > subtransaction started? Some of those things are already in place. For example cursors are closed/dropped, file deletions (DROP TABLE) no longer take place, file creation is reverted, and the server is in a known state. Some things are missing: how to deal with deferred triggers, prepared statements, locks, on-commit actions. -- Alvaro Herrera () "Vivir y dejar de vivir son soluciones imaginarias. La existencia está en otra parte" (Andre Breton) ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] postgresql idle
Alvaro Herrera wrote: > > > Yeah. We agreed in principle awhile back to "fix" this on the backend > > > side by postponing the actual transaction start until the first command > > > after BEGIN. I looked at this just before 7.4 feature freeze, but > > > decided it wasn't quite trivial and I hadn't time to make it happen. > > > No one's gone back to work on it during the 7.5 cycle either. > > > > > > Right now I'm not wanting to touch that code since both Alvaro and the > > > 2PC guy have open patches against it... > > Actually, my patch is waiting for you to review it ;-) On the other > hand, since I'm already touching that code, maybe I can include it in my > patch. Or would you prefer to keep those things separate? Alvaro, can I ask what is left? I know you have pg_subtrans, but what plans do you have to abort subtransactions and bring the system back to the state before the subtransaction started? -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] postgresql idle
On Thu, Apr 29, 2004 at 09:54:08PM -0400, Bruce Momjian wrote: > Tom Lane wrote: > > Andrew Sullivan <[EMAIL PROTECTED]> writes: > > > On Thu, Apr 29, 2004 at 03:57:59PM -0400, Andrew Rawnsley wrote: > > >> I find that some clients (DBVisualizer for one) do exactly that - > > >> execute the COMMIT;BEGIN sequence, and leaves idle > > >> transactions on a consistent basis. > > > > > Almost all the things I've see that set the autocommit behaviour will > > > also do this. I suspect it's a pretty common approach. > > > > Yeah. We agreed in principle awhile back to "fix" this on the backend > > side by postponing the actual transaction start until the first command > > after BEGIN. I looked at this just before 7.4 feature freeze, but > > decided it wasn't quite trivial and I hadn't time to make it happen. > > No one's gone back to work on it during the 7.5 cycle either. > > > > Right now I'm not wanting to touch that code since both Alvaro and the > > 2PC guy have open patches against it... Actually, my patch is waiting for you to review it ;-) On the other hand, since I'm already touching that code, maybe I can include it in my patch. Or would you prefer to keep those things separate? -- Alvaro Herrera () You liked Linux a lot when he was just the gawky kid from down the block mowing your lawn or shoveling the snow. But now that he wants to date your daughter, you're not so sure he measures up. (Larry Greenemeier) ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] "Idle in Transaction" and hung connections
On Thu, 29 Apr 2004, Gregory S. Williamson wrote: > Tom -- > > Thanks for the suggestion, and the rapid response on something which may > not be truly a postgres issue (perhaps more a JDBC thing)! > This behavior is fixed in the 7.5 cvs version of the JDBC driver if you'd like to try it out. Kris Jurka ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] postgresql idle
Is this a TODO? --- Tom Lane wrote: > Andrew Sullivan <[EMAIL PROTECTED]> writes: > > On Thu, Apr 29, 2004 at 03:57:59PM -0400, Andrew Rawnsley wrote: > >> I find that some clients (DBVisualizer for one) do exactly that - > >> execute the COMMIT;BEGIN sequence, and leaves idle > >> transactions on a consistent basis. > > > Almost all the things I've see that set the autocommit behaviour will > > also do this. I suspect it's a pretty common approach. > > Yeah. We agreed in principle awhile back to "fix" this on the backend > side by postponing the actual transaction start until the first command > after BEGIN. I looked at this just before 7.4 feature freeze, but > decided it wasn't quite trivial and I hadn't time to make it happen. > No one's gone back to work on it during the 7.5 cycle either. > > Right now I'm not wanting to touch that code since both Alvaro and the > 2PC guy have open patches against it... > > regards, tom lane > > ---(end of broadcast)--- > TIP 7: don't forget to increase your free space map settings > -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Postgre and Web Request
> On Thu, 29 Apr 2004, Tatsuo Ishii wrote: > > > > Depending on your web development environment (java, php, .NET) etc, > > > > you should be able to use some mechanism that will provide a pool of > > > > connections to the database. Each request does not open a new > > > > connection (and then release it), but instead gets a connection from > > > > the pool to use, and returns it back to the pool when done. > > > > > > Where can I find some examples for connection pooling with php? Or must I > > > just use persistent connections? > > > > Use pgpool > > (ftp://ftp.sra.co.jp/pub/cmd/postgres/pgpool/pgpool-1.1.tar.gz). > > Tatsuo, I just tried pgpool (not replication yet) and noticed warnings appear > in pgsql.log: > (I just changed port number in my perl script to ) > > Apr 29 19:19:59 mira postgres[363]: [4-1] WARNING: there is no transaction in > progress It's harmless. pgpool issues "ABORT" and "RESET ALL" to the backend each time a client connects to pgpool with a pooled connection. This is necessary since the previous client might have disconnected from pgpool without closing his/her transaction. -- Tatsuo Ishii ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Timestamp problems...wrong weeks.
Is there any way with SQL to get what I'm trying to get? Where (in this month, April):

April 1 - 3 (Week 1)
April 4 - 10 (Week 2)
April 11 - 17 (Week 3)
April 18 - 24 (Week 4)
April 25 - 30 (Week 5)

Thank you!

~ Troy Campano ~

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andrew Sullivan
Sent: Thursday, April 29, 2004 3:42 PM
To: Pgsql-General
Subject: Re: [GENERAL] Timestamp problems...wrong weeks.

On Thu, Apr 29, 2004 at 02:42:49PM -0400, Campano, Troy wrote:
> This is causing my reports to print out incorrect data.
> Do you know why this would happen? Am I doing something wrong?

I think you may be misunderstanding what "W" means: week of month (1-5) (The first week starts on the first day of the month.) In April 2004, 1 April is Thurs, so

1-7   -> W1
8-14  -> W2
15-21 -> W3
22-28 -> W4
29-30 -> W5 == W1 of May

This is also why 8 May is in week 2 of May, but 7 May is in week 1.

A
--
Andrew Sullivan | [EMAIL PROTECTED]

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED]) ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
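One possible way to get weeks that break on Sundays, rather than to_char's fixed 7-day blocks, is to compute the week number from the day of month and the weekday of the 1st. This is a sketch only (the table name is Troy's, the formula is an assumption, and it has not been tested against every month):

```sql
-- Week 1 runs from the 1st through the first Saturday; each later week
-- starts on a Sunday.  extract(dow ...) is 0 for Sunday, so for April
-- 2004 (the 1st is a Thursday, dow = 4): day 3 -> week 1, day 4 -> week 2.
SELECT completion_date,
       floor((extract(day from completion_date)
            + extract(dow from date_trunc('month', completion_date))
            - 1) / 7) + 1 AS week_of_month
FROM anna_onestop_database_t;
```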
Re: [GENERAL] trying to restore after a server failure...need some help
"David A. Ulevitch" <[EMAIL PROTECTED]> writes: > I did the normal debian binary install of postgresql-server and copied > everything from the old server's /var/lib/postgres to the new > /var/lib/postgres. I then copied everything from /etc/postgresql from the > old server to the new server too. (/etc was also fine in the crash) After > chown'ing all the files to `chown -R postgres.postgres /var/lib/postgres` > I tried to login. Uh, was the new machine's postmaster running while you did all this? You would certainly want the postmaster down while you overwrite /var/lib/postgres ... regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Timestamp problems...wrong weeks.
On Thu, Apr 29, 2004 at 02:42:49PM -0400, Campano, Troy wrote:
> This is causing my reports to print out incorrect data.
> Do you know why this would happen? Am I doing something wrong?

I think you may be misunderstanding what "W" means: week of month (1-5) (The first week starts on the first day of the month.) In April 2004, 1 April is Thurs, so

1-7   -> W1
8-14  -> W2
15-21 -> W3
22-28 -> W4
29-30 -> W5 == W1 of May

This is also why 8 May is in week 2 of May, but 7 May is in week 1.

A
--
Andrew Sullivan | [EMAIL PROTECTED]

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] postgresql idle
I find that some clients (DBVisualizer for one) do exactly that - execute the COMMIT;BEGIN sequence, and leave idle transactions on a consistent basis. On Apr 29, 2004, at 3:19 PM, Andrew Sullivan wrote: On Thu, Apr 29, 2004 at 02:04:47PM -0400, Jon Pastore wrote: pgsql thought there was a transaction in progress and was waiting for it to complete when in fact the commit had already taken place. Come again? That doesn't sound possible. What is more likely is that somebody issues COMMIT;BEGIN; all at once. You get one transaction through, but you still end up idle in transaction. A -- Andrew Sullivan | [EMAIL PROTECTED] ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED]) Andrew Rawnsley President The Ravensfield Digital Resource Group, Ltd. (740) 587-0114 www.ravensfield.com ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] "Idle in Transaction" and hung connections
Dear peoples,

Periodically we are getting runaway postgres processes on our Linux (2.4.21-0.13 on Dell servers), using 7.4 and PostGIS (0.8, USE_GEOS=1 USE_PROJ=1 USE_STATS=1). All of the queries come in from remote servers using JDBC/proxool; once every 4 hours we have a process on the client side that cleans out old connections. All the processes are doing is single queries -- no inserts or updates.

Very occasionally we will see a thread go wild, taking up a huge amount of processor time (the load will climb by "1" for each process -- usual load is around .2; when these hit, the load rises to 1.x, all the way up to a load of about 40 once). pg_stat_activity shows these connections as being old -- much older than any live thread. All such connections are in a state of "IDLE IN TRANSACTION", which seems odd as these are all queries and presumably each query is a complete transaction.

My tentative theory is that something is killing the client while the server side still thinks it has data to send, or some such variant. The client machines don't have a corresponding connection to the one on the postgres server. Killing the runaways with a -15 seems to bring the load back down and all is well, until it happens again.

Does anyone have any ideas what might be triggering this? It is mostly an annoyance but on a couple of occasions seems to have brought down a server, or at least rendered it non-functional.

Thanks for any advice!

Greg Williamson
DBA
GlobeXplorer LLC

---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
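As a diagnostic aid for the situation Greg describes, the stuck sessions can usually be spotted from pg_stat_activity, provided command-string statistics are enabled. A sketch (column names as in the 7.4-era view; assume stats_command_string is on, otherwise current_query is not populated):

```sql
-- Connections sitting idle inside an open transaction report a
-- current_query of '<IDLE> in transaction'; sorting by query_start
-- surfaces the oldest offenders first.
SELECT procpid, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;
```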
Re: [GENERAL] postgresql idle
On Thu, Apr 29, 2004 at 02:04:47PM -0400, Jon Pastore wrote: > pgsql thought there was a transaction in progress and was waiting > for it to complete when in fact the commit had already taken place. Come again? That doesn't sound possible. What is more likely is that somebody issues COMMIT;BEGIN; all at once. You get one transaction through, but you still end up idle in transaction. A -- Andrew Sullivan | [EMAIL PROTECTED] ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Arbitrary precision modulo operation
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 28, 2004 9:14 PM
> To: Dann Corbit
> Cc: Bruno Wolff III; Paul Tillotson; [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Arbitrary precision modulo operation
>
> "Dann Corbit" <[EMAIL PROTECTED]> writes:
> > I would suggest computation in sufficient digits of accuracy to get a
> > correct answer.
>
> Fine. How many is that, exactly?

Here is what I would suggest: using the outline I proposed before (starting with a floating-point divide of DBL_DIG digits of precision), keep doubling the working precision until it is 5 digits larger than either operand. If the last doubling makes the precision larger than needed (quite likely), simply reduce it to the smaller margin. Something like this (pseudocode; note that each Newton-Raphson step roughly doubles the number of correct digits, and the final result is x * y1, i.e. x times the refined reciprocal of y):

numeric divide(numeric x, numeric y)
{
    /* double-precision starting estimate of the reciprocal 1/y */
    numeric y1 = 1.0 / y::double;
    numeric two = 2::numeric;

    /* collect the maximal precision of either operand */
    total_precision = get_max_precision(x, y);

    y1 *= (two - y * y1);  /* use numeric 30 math, or total precision + 5, whichever is less */
    if (total_precision <= 25)
        return x * y1;     /* correctly rounded --> banker's rounding? */

    y1 *= (two - y * y1);  /* use numeric 60 math, or total precision + 5, whichever is less */
    if (total_precision <= 50)
        return x * y1;     /* correctly rounded --> banker's rounding? */

    y1 *= (two - y * y1);  /* use numeric 120 math, or total precision + 5, whichever is less */
    if (total_precision <= 110)
        return x * y1;     /* correctly rounded --> banker's rounding? */

    y1 *= (two - y * y1);  /* use numeric 240 math, or total precision + 5, whichever is less */
    if (total_precision <= 230)
        return x * y1;     /* correctly rounded --> banker's rounding? */

    y1 *= (two - y * y1);  /* use numeric 480 math, or total precision + 5, whichever is less */
    if (total_precision <= 470)
        return x * y1;     /* correctly rounded --> banker's rounding? */

    y1 *= (two - y * y1);  /* use numeric 960 math, or total precision + 5, whichever is less */
    if (total_precision <= 950)
        return x * y1;     /* correctly rounded --> banker's rounding? */

    y1 *= (two - y * y1);  /* use maximum precision math, or total precision + 5, whichever is less */
    return x * y1;
}

---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Timestamp problems...wrong weeks.
Hello,

I'm having a problem with timestamps in postgresql. I run the following query to pull dates in this format: WEEK/MONTH/YEAR. However the data says that April 28th 2004 was in week 4 of April and that April 29th 2004 was in week 5 of April. This is incorrect, and it is causing my reports to print out incorrect data.

Do you know why this would happen? Am I doing something wrong?

--SQL CODE
SELECT current_timestamp,
       completion_date,
       to_char(current_timestamp,'W/MM/YYYY'),
       to_char(completion_date,'W/MM/YYYY')
FROM anna_onestop_database_t
WHERE to_char(current_timestamp,'MM/YYYY') = to_char(completion_date,'MM/YYYY')
  AND upper(solution_provider) = 'N0050961'
  AND status LIKE 'Closed - Completed'

--RESULTSET
Timestamptz                   |completion_date    |to_char  |to_char
2004-04-29 14:29:47.289369-04|2004-04-28 11:40:35|5/04/2004|4/04/2004
2004-04-29 14:29:47.289369-04|2004-04-29 13:26:34|5/04/2004|5/04/2004

thank you!

Troy Campano
Re: [GENERAL] postgresql idle
Thank you for your response ... and you are correct, that is something to be concerned about, but after much frustration we found that pgsql thought there was a transaction in progress and was waiting for it to complete when in fact the commit had already taken place.

What we ended up doing was, after a commit or rollback, we would

undef $main::conn;

forcing it to reestablish a new connection for the next iteration of the loop. This seemed to be an adequate solution, but strangely: we ran our test and it failed as expected, implemented this change and the test worked, but then when we commented out the change and ran the test again it succeeded ... so I have no idea.

Jon Pastore, RHCE, President
IDE Tech, Inc.
(954) 360-0393 Office
(954) 428-0442 Fax
Public Key: http://www.idetech.net/keys/jpastore.asc

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Michael Chaney
Sent: Thursday, April 22, 2004 10:26 AM
To: [EMAIL PROTECTED]
Subject: Re: [GENERAL] postgresql idle

On Wed, Apr 21, 2004 at 09:12:27PM -0400, Jon Pastore wrote:
> This perl script is designed to handle payment posting for an
> application we developed. It runs fine on our development server
> which is running apache 1.3.27 on ES 2.1
>
> on the production server the script hangs and we see the above msg in
> "ps aux"
>
> we wrote a wrapper for the program that will exec it from the command
> line and everything runs a-o-k
>
> has anyone else encountered something like this? should I downgrade
> the installation of apache?

It's a Perl script, so why aren't you concerned about the Perl version? If you can run it through the Perl debugger, you should be able to see where it's hanging. That's probably your best course of action. Barring that, start putting some printf's in there to see where it's hanging.

Michael
--
Michael Darrin Chaney
[EMAIL PROTECTED]
http://www.michaelchaney.com/

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Follow-up to original post, more information:
I tried again with some more debugging enabled and I get this dump in my logs.

-davidu

--- what I think is "bad" output ---
2004-04-29 11:41:32 [9307] DEBUG: open of /var/lib/postgres/data/pg_xlog/ (log file 0, segment 0) failed: No such file or directory
2004-04-29 11:41:32 [9307] DEBUG: invalid primary checkpoint record
2004-04-29 11:41:32 [9307] DEBUG: open of /var/lib/postgres/data/pg_xlog/ (log file 0, segment 0) failed: No such file or directory
2004-04-29 11:41:32 [9307] DEBUG: invalid secondary checkpoint record
2004-04-29 11:41:32 [9307] FATAL 2: unable to locate a valid checkpoint record

--- complete output ---
/usr/lib/postgresql/bin/postmaster: PostmasterMain: initial environ dump:
-
PWD=/var/lib/postgres
HZ=100
USER=postgres
MAIL=/var/mail/postgres
LOGNAME=postgres
SHLVL=3
SHELL=/bin/sh
PGLIB=/usr/lib/postgresql
TERM=xterm
HOME=/var/lib/postgres
PGDATA=/var/lib/postgres/data
PATH=/bin:/usr/bin:/usr/lib/postgresql/bin
PGDATESTYLE=ISO,European
_=/usr/lib/postgresql/bin/postmaster
-
FindExec: found "/usr/lib/postgresql/bin/postgres" using argv[0]
invoking IpcMemoryCreate(size=127549440)
FindExec: found "/usr/lib/postgresql/bin/postmaster" using argv[0]
2004-04-29 11:41:32 [9307] DEBUG: database system shutdown was interrupted at 2004-04-29 11:41:26 UTC
2004-04-29 11:41:32 [9307] DEBUG: open of /var/lib/postgres/data/pg_xlog/ (log file 0, segment 0) failed: No such file or directory
2004-04-29 11:41:32 [9307] DEBUG: invalid primary checkpoint record
2004-04-29 11:41:32 [9307] DEBUG: open of /var/lib/postgres/data/pg_xlog/ (log file 0, segment 0) failed: No such file or directory
2004-04-29 11:41:32 [9307] DEBUG: invalid secondary checkpoint record
2004-04-29 11:41:32 [9307] FATAL 2: unable to locate a valid checkpoint record
2004-04-29 11:41:32 [9307] DEBUG: proc_exit(2)
2004-04-29 11:41:32 [9307] DEBUG: shmem_exit(2)
2004-04-29 11:41:32 [9307] DEBUG: exit(2)
2004-04-29 11:41:32 [12241] DEBUG: reaping dead processes
2004-04-29 11:41:32 [12241] DEBUG: startup process (pid 9307) exited with exit code 2
2004-04-29 11:41:32 [12241] DEBUG: aborting startup due to startup process failure
2004-04-29 11:41:32 [12241] DEBUG: proc_exit(1)
2004-04-29 11:41:32 [12241] DEBUG: shmem_exit(1)
2004-04-29 11:41:32 [12241] DEBUG: exit(1)
[EMAIL PROTECTED]:/var/lib/postgres#
--- end complete output ---

Where to go from here?

David A. Ulevitch - Founder, EveryDNS.Net
Washington University in St. Louis
http://david.ulevitch.com -- http://everydns.net

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] trying to restore after a server failure...need
I've no suggestions, except that I hope you still have a copy of the original /var/lib/postgres/data files and didn't run postgresql on your only copy... At 12:19 PM 4/29/2004 -0500, David A. Ulevitch wrote: Hi fellow pgsql users, I am helping my university's student union get back up and running after some major server issues they had. They had serious disk issues on a server, but not on the /var partition where all of the /var/lib/postgres/data files were. I was able to recover all of it, at the file-system level. The old machine and the new machine were both running Debian Linux 3.0-stable and postgresql 7.2.1-2woody4. I did the normal debian binary install of postgresql-server and copies everything from the old server's /var/lib/postgres to the new /var/lib/postgres. I then copied everything from /etc/postgresql from the old server to the new server too. (/etc was also fine in the crash) After chown'ing all the files to `chown -R postgres.postgres /var/lib/postgres` I tried to login. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] trying to restore after a server failure...need some help
Hi fellow pgsql users,

I am helping my university's student union get back up and running after some major server issues they had. They had serious disk issues on a server, but not on the /var partition where all of the /var/lib/postgres/data files were. I was able to recover all of it, at the file-system level. The old machine and the new machine were both running Debian Linux 3.0-stable and postgresql 7.2.1-2woody4.

I did the normal debian binary install of postgresql-server and copied everything from the old server's /var/lib/postgres to the new /var/lib/postgres. I then copied everything from /etc/postgresql from the old server to the new server too. (/etc was also fine in the crash) After chown'ing all the files with `chown -R postgres.postgres /var/lib/postgres` I tried to login.

Here's the weird part:

[EMAIL PROTECTED]:/var/lib/postgres# psql -U spark spark_db
Password:
Welcome to psql, the PostgreSQL interactive terminal.

Type: \copyright for distribution terms
      \h for help with SQL commands
      \? for help on internal slash commands
      \g or terminate with semicolon to execute query
      \q to quit

spark_db=> \l
       List of databases
   Name    | Owner | Encoding
-----------+-------+-----------
 template0 |       | SQL_ASCII
 template1 |       | SQL_ASCII
(2 rows)

spark_db=> create database spark_db;
ERROR: CREATE DATABASE: database "spark_db" already exists
spark_db=>

It seems like the data is sort of there but not really. The login account works and it thinks spark_db exists, but I can't query the tables or anything... Sadly, there are no other backups of this data other than what I recovered from the old server.

What can we do from here? Are there any experts we can consult?

Thanks,
David A. Ulevitch

David A. Ulevitch - Founder, EveryDNS.Net
Washington University in St. Louis
http://david.ulevitch.com -- http://everydns.net

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Syntax error at or near "$1"
I'm working with different functions and following an example of iterating through rows from a returned query. It's not a complicated function, but I can't get past this parse error. I've tried numerous things and searched for some help. I can't find any reason for the error below: ERROR: syntax error at or near "$1" at character 30 CONTEXT: PL/pgSQL function "list_devices" line 6 at for over select rows Any help would be greatly appreciated! Thanks, Ryan CREATE FUNCTION list_devices(macaddr) RETURNS text AS ' DECLARE macAddress ALIAS FOR $1; rowval record; devices text; BEGIN devices := '' ''; FOR rowval IN SELECT device_type_id FROM devices WHERE mac_address = macAddress LOOP devices := devices || ''\r''; END LOOP; RETURN devices; END; ' LANGUAGE 'plpgsql'; Ryan J. Booz Research Programmer Penn State University Information Technology Services TLT/CLC [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
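A likely culprit (an educated guess, not a confirmed answer from this thread): the declared variable `devices` has the same name as the `devices` table, so plpgsql substitutes the variable into the SELECT as a parameter, and the resulting query no longer parses -- hence "syntax error at or near $1". Renaming the variable so it no longer shadows the table name avoids the collision; a sketch of the revised function:

```sql
CREATE FUNCTION list_devices(macaddr) RETURNS text AS '
DECLARE
    macAddress ALIAS FOR $1;
    rowval record;
    device_list text;   -- renamed so it does not shadow the "devices" table
BEGIN
    device_list := '' '';
    FOR rowval IN SELECT device_type_id FROM devices
                  WHERE mac_address = macAddress LOOP
        device_list := device_list || ''\r'';
    END LOOP;
    RETURN device_list;
END;
' LANGUAGE 'plpgsql';
```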
Re: [GENERAL] Unable to use index?
On Thu, 29 Apr 2004 09:48:10 -0400 (EDT), Edmund Dengler <[EMAIL PROTECTED]> wrote: >=> explain analyze select * from replicated where rep_component = 'ps_probe' limit 1; >--- > Limit (cost=0.00..0.23 rows=1 width=101) (actual time=34401.857..34401.859 rows=1 > loops=1) > -> Seq Scan on replicated (cost=0.00..936557.70 rows=4114363 width=101) (actual > time=34401.849..34401.849 rows=1 loops=1) > Filter: ((rep_component)::text = 'ps_probe'::text) The planner thinks that the seq scan has a startup cost of 0.00, i.e. that it can return the first tuple immediately, which is obviously not true in the presence of a filter condition. Unfortunately there's no easy way to fix this, because the statistics information does not have information about the physical position of tuples with certain values. >=> explain analyze select * from replicated where rep_component = 'ps_probe' order by >rep_component limit 1; This is a good workaround. It makes the plan for a seq scan look like | Limit (cost=2345679.00..2345679.20 rows=1 width=101) | -> Sort (2345678.90..250.00 rows=4114363 width=101) | -> Seq Scan on replicated (cost=0.00..936557.70 rows=4114363 width=101) | Filter: ((rep_component)::text = 'ps_probe'::text) which is a loser against the index scan: > Limit (cost=0.00..1.66 rows=1 width=101) (actual time=51.163..51.165 rows=1 loops=1) >Maybe I need to up the number of rows sampled for statistics? Won't help, IMHO. Servus Manfred ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Postgre and Web Request
On Thu, 29 Apr 2004, Tatsuo Ishii wrote:

> > > Depending on your web development environment (java, php, .NET) etc, you should be able to use some mechanism that will provide a pool of connections to the database. Each request does not open a new connection (and then release it), but instead gets a connection from the pool to use, and returns it back to the pool when done.
> >
> > Where can I find some examples for connection pooling with php? Or must I just use persistent connections?
>
> Use pgpool
> (ftp://ftp.sra.co.jp/pub/cmd/postgres/pgpool/pgpool-1.1.tar.gz).

Tatsuo, I just tried pgpool (not replication yet) and noticed warnings appear in pgsql.log:
(I just changed port number in my perl script to )

Apr 29 19:19:59 mira postgres[363]: [4-1] WARNING: there is no transaction in progress

Oleg

> --
> Tatsuo Ishii
>
> ---(end of broadcast)---
> TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Anyone using pgreplicator?
On Tue, 20 Apr 2004 23:47:26 +0800 Alan Graham <[EMAIL PROTECTED]> wrote:

> I fixed my problem with pgReplicator, so the test system is merrily replicating between Perth and Brisbane as I type. Bandwidth usage is low at the moment. The problem was embarrassingly simple once I checked the actual SQL being received on the remote server. (Thanks Stef of .za for the suggestion.)

any chance of your publishing notes on how you got pgreplicator going? i've taken two shots at it, and gotten stuck on getting tcl-dp working properly both times. from a functionality point of view, it's closer to what i want than any of the others, but it's been frustrating as hell to try and get going.

richard
--
Richard Welty [EMAIL PROTECTED]
Averill Park Networking 518-573-7592
Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Performance problem with correlated sub-query
Howard, Steven (US - Tulsa) wrote:
> I have created a web app that stores and displays all the messages from my database maintenance jobs that run each night. The web app uses Java servlets and has PostgreSQL 7.0 as the back end.
>
> When the user requests the first page, he gets a list of all the servers with maintenance records in the database, and a drop down list of all the dates of maintenance records. If the user chooses a date first, then the app uses a prepared statement with the date contained in a parameter, and this executes very quickly – no problems.
>
> However, if the web page user does not choose a date, then the app uses a correlated sub-query to grab only the current (latest) day's maintenance records. The query that is executed is:
>
> select servername, databasename, message from messages o where o.date_of_msg =
>     (select max(date_of_msg) from messages i where i.servername = o.servername);
>
> And this is a dog. It takes 15 – 20 minutes to execute the query (there are about 200,000 rows in the table). I have an index on (servername, date_of_msg), but it doesn't seem to be used in this query.

A few basic checks:

- What does explain analyze say for the slow query?
- Have you vacuumed and analyzed recently?
- Have you done basic optimisations from the default state? Check http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html and http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html

And 7.0 is way too old. If you can afford to upgrade, upgrade to 7.4.2.

HTH

Shridhar

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Performance problem with correlated sub-query
I apologize for my mistake in the version. I have 7.3, not 7.0. However, I'll upgrade to 7.4 and work with some of the other suggestions made also.

-Original Message-
From: Paul Thomas [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 29, 2004 9:20 AM
To: Howard, Steven (US - Tulsa)
Cc: pgsql-general @ postgresql . org
Subject: Re: [GENERAL] Performance problem with correlated sub-query

On 29/04/2004 14:34 "Howard, Steven (US - Tulsa)" wrote:
> I have created a web app that stores and displays all the messages from my database maintenance jobs that run each night. The web app uses Java servlets and has PostgreSQL 7.0 as the back end.

7.0? That's positively ancient!

> When the user requests the first page, he gets a list of all the servers with maintenance records in the database, and a drop down list of all the dates of maintenance records. If the user chooses a date first, then the app uses a prepared statement with the date contained in a parameter, and this executes very quickly - no problems.
>
> However, if the web page user does not choose a date, then the app uses a correlated sub-query to grab only the current (latest) day's maintenance records. The query that is executed is:
>
> select servername, databasename, message from messages o where o.date_of_msg =
>     (select max(date_of_msg) from messages i where i.servername = o.servername);
>
> And this is a dog. It takes 15 - 20 minutes to execute the query (there are about 200,000 rows in the table). I have an index on (servername, date_of_msg), but it doesn't seem to be used in this query.

PG doesn't use indexes for things like count(), max(), min()... You can avoid using max() by something like

  select my_date from my_table order by my_date desc limit 1;

which will use the index.

> Is there a way to improve the performance on this query?

In addition to the above, I'd strongly recommend upgrading to 7.4 to take advantage of the last ~4 years of continuous improvements.
--
Paul Thomas
Thomas Micro Systems Limited | Software Solutions for Business
Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk

This message (including any attachments) contains confidential information intended for a specific individual and purpose, and is protected by law. If you are not the intended recipient, you should delete this message. Any disclosure, copying, or distribution of this message, or the taking of any action based on it, is strictly prohibited.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Performance problem with correlated sub-query
On 29/04/2004 14:34 "Howard, Steven (US - Tulsa)" wrote:
> I have created a web app that stores and displays all the messages from my database maintenance jobs that run each night. The web app uses Java servlets and has PostgreSQL 7.0 as the back end.

7.0? That's positively ancient!

> When the user requests the first page, he gets a list of all the servers with maintenance records in the database, and a drop down list of all the dates of maintenance records. If the user chooses a date first, then the app uses a prepared statement with the date contained in a parameter, and this executes very quickly - no problems.
>
> However, if the web page user does not choose a date, then the app uses a correlated sub-query to grab only the current (latest) day's maintenance records. The query that is executed is:
>
> select servername, databasename, message from messages o where o.date_of_msg =
>     (select max(date_of_msg) from messages i where i.servername = o.servername);
>
> And this is a dog. It takes 15 - 20 minutes to execute the query (there are about 200,000 rows in the table). I have an index on (servername, date_of_msg), but it doesn't seem to be used in this query.

PG doesn't use indexes for things like count(), max(), min()... You can avoid using max() by something like

  select my_date from my_table order by my_date desc limit 1;

which will use the index.

> Is there a way to improve the performance on this query?

In addition to the above, I'd strongly recommend upgrading to 7.4 to take advantage of the last ~4 years of continuous improvements.

--
Paul Thomas
Thomas Micro Systems Limited | Software Solutions for Business
Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Performance problem with correlated sub-query
On Thu, 29 Apr 2004, Howard, Steven (US - Tulsa) wrote:

> I have created a web app that stores and displays all the messages from my database maintenance jobs that run each night. The web app uses Java servlets and has PostgreSQL 7.0 as the back end.

Step 1 is upgrade. ;)

> However, if the web page user does not choose a date, then the app uses a correlated sub-query to grab only the current (latest) day's maintenance records. The query that is executed is:
>
> select servername, databasename, message from messages o where o.date_of_msg =
>     (select max(date_of_msg) from messages i where i.servername = o.servername);

This is likely to be running the subquery once for each row in messages, and probably not going to use an index in the inner query either. The former might be optimized by recent versions. Changing the inner query to something like:

  (select date_of_msg from messages i where i.servername = o.servername
   order by date_of_msg desc limit 1)

or changing it to use a subselect in FROM (something like):

  from messages o,
       (select servername, max(date_of_msg) as date_of_msg
        from messages group by servername) i
  where o.servername = i.servername and o.date_of_msg = i.date_of_msg

might both help, but I'm not sure either will work on 7.0.

> And this is a dog. It takes 15 - 20 minutes to execute the query (there are about 200,000 rows in the table). I have an index on (servername, date_of_msg), but it doesn't seem to be used in this query.

You might wish to play around with changing the indexes and the order of the columns in the multicolumn index as well.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Postgre and Web Request
Bill Moran wrote:
> Stefan Sturm wrote:
>> Hello,
>>> Depending on your web development environment (java, php, .NET) etc, you should be able to use some mechanism that will provide a pool of connections to the database. Each request does not open a new connection (and then release it), but instead gets a connection from the pool to use, and returns it back to the pool when done.
>> Where can I find some examples for connection pooling with php? Or must I just use persistent connections?
> php handles connection pooling more or less automatically ... as long as you use pg_pconnect() instead of pg_connect(). You have to be careful using connection pooling. For example, if you create a temporary table and forget to delete it when the script completes, the next time the script runs, it's likely that the connection will be reused and the script will cause an error because the temp table already exists.

Gavin Sherry added the ON COMMIT DROP clause to CREATE TEMPORARY TABLE in 7.4 which, depending upon the expected life-cycle of the temporary table, may prove useful in the above scenario.

FWIW,

Mike Mascari

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
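For reference, the 7.4 clause Mike mentions looks like this (table and column names here are illustrative, not from the thread):

```sql
BEGIN;

-- The temp table is dropped automatically at transaction end, so a
-- pooled connection goes back to the pool without leftover state.
CREATE TEMPORARY TABLE scratch (
    id      integer,
    payload text
) ON COMMIT DROP;

-- ... use scratch within this transaction ...

COMMIT;
```

Note that ON COMMIT DROP only makes sense inside a transaction block, which fits the usual pattern of a pooled web request doing its work in one transaction.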
Re: [GENERAL] Performance problem with correlated sub-query
Howard, Steven (US - Tulsa) wrote:
> select servername, databasename, message from messages o where o.date_of_msg =
>     (select max(date_of_msg) from messages i where i.servername = o.servername);
>
> And this is a dog. It takes 15 – 20 minutes to execute the query (there are about 200,000 rows in the table). I have an index on (servername, date_of_msg), but it doesn't seem to be used in this query.

Just off the top of my head:

SELECT servername, databasename, message
FROM messages o
WHERE o.date_of_msg = (
    SELECT date_of_msg
    FROM messages i
    WHERE i.servername = o.servername
    ORDER BY date_of_msg DESC
    LIMIT 1
);

HTH,

Mike Mascari

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[GENERAL] Unable to use index?
Hi folks!

A query I am running does not seem to use indexes that are available (running version 7.4.2). I have the following table:

=> \d replicated
                          Table "public.replicated"
     Column      |           Type           | Modifiers
-----------------+--------------------------+-----------------------------------------------------
 rep_id          | bigint                   | not null default nextval('replicated_id_seq'::text)
 rep_component   | character varying(100)   |
 rep_key1        | integer                  |
 rep_key2        | bigint                   |
 rep_key3        | smallint                 |
 rep_replicated  | timestamp with time zone |
 rep_remotekey1  | integer                  |
 rep_remotekey2  | bigint                   |
 rep_remotekey3  | smallint                 |
 rep_key2b       | bigint                   |
 rep_remotekey2b | bigint                   |
 rep_key4        | text                     |
Indexes:
    "replicated_pkey" primary key, btree (rep_id)
    "replicate_key1_idx" btree (rep_key1, rep_key2, rep_key3)
    "replicated_item2_idx" btree (rep_component, rep_key2, rep_key3)
    "replicated_item_idx" btree (rep_component, rep_key1, rep_key2, rep_key3)
    "replicated_key2_idx" btree (rep_key2, rep_key3)
    "replicated_key4_idx" btree (rep_key4)

=> analyze verbose replicated;
INFO: analyzing "public.replicated"
INFO: "replicated": 362140 pages, 3 rows sampled, 45953418 estimated total rows
ANALYZE

The following does not use an index, even though two are available for the specific selection of rep_component.

=> explain analyze select * from replicated where rep_component = 'ps_probe' limit 1;

                                   QUERY PLAN
---------------------------------------------------------------------------
 Limit (cost=0.00..0.23 rows=1 width=101) (actual time=34401.857..34401.859 rows=1 loops=1)
   -> Seq Scan on replicated (cost=0.00..936557.70 rows=4114363 width=101) (actual time=34401.849..34401.849 rows=1 loops=1)
        Filter: ((rep_component)::text = 'ps_probe'::text)
 Total runtime: 34401.925 ms
(4 rows)

Yet, if I do the following, an index will be used, and it runs much faster (even when I swapped the order of the execution).
=> explain analyze select * from replicated where rep_component = 'ps_probe' order by rep_component limit 1;

                                   QUERY PLAN
---------------------------------------------------------------------------
 Limit (cost=0.00..1.66 rows=1 width=101) (actual time=51.163..51.165 rows=1 loops=1)
   -> Index Scan using replicated_item2_idx on replicated (cost=0.00..6838123.76 rows=4114363 width=101) (actual time=51.157..51.157 rows=1 loops=1)
        Index Cond: ((rep_component)::text = 'ps_probe'::text)
 Total runtime: 51.265 ms
(4 rows)

Any reason why the index is not chosen? Maybe I need to up the number of rows sampled for statistics?

Regards!
Ed

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Partial index question
Anton Nikiforov said:
> Dear All,
> I have a question about using partial indexes.
> Lets say i have a table containing data types (table_datatype) and the table containing data entries (table_data).
> While inserting into data entries i have to number the entries according to their type, so i always have to do
>   select max(id) from table_data where data_type=X;
> and then insert a new value into the table data with this type and index.
> Looks like there is no way to use sequences in this case without using different tables, which will make the application not so clear.
> But "my" way is not so clear either, because i could get a collision during concurrent inserts, so i have to control insertion from the application and always check that it is unique.
> So i'm planning to use partial indexes and hope they will help in improving performance (the table data will contain millions of records of each type, so without indexing the performance will not be good, and it is not clear to me that it will be faster using a complex index).
> I know that i can do
>   create index ... where type=X;
> But is there any way to create all types of indexes at the time of database creation without using triggers and creating indexes from them? The matter is that data types are being added by the user, so i do not know the indexes that i should create now.
> And what will be faster?
>   CREATE UNIQUE INDEX type_index ON table_data (type, id);
> or
>   CREATE UNIQUE INDEX type_1_index ON table_data (id) WHERE type=1;
>   CREATE UNIQUE INDEX type_X_index ON table_data (id) WHERE type=X;

Not really answering your question directly, but some alternative strategies are:

If the set of types is small, then using sequences would be manageable. Create a function that returns the next value for the passed-in data_type. If the sequence does not yet exist for that id, you create the sequence, and then return the sequence value.
If the set is large, then consider using an extra table which maintains a mapping between the data_type and the last allocated value. Again, create a function to increment and return the next id for that data type.

My feeling (and I may be wrong) is that SELECT MAX(id) FROM table_data WHERE type=x is always going to be less efficient than one of the above methods regardless of the types of index you use, especially as the table gets larger.

John Sidney-Woollett

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
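The counter-table approach John describes could be sketched like this (all names are illustrative, not from the thread; the UPDATE-before-SELECT ordering matters because the row lock the UPDATE takes serializes concurrent callers for the same type until commit):

```sql
CREATE TABLE type_counters (
    data_type integer PRIMARY KEY,
    last_id   integer NOT NULL DEFAULT 0
);

CREATE FUNCTION next_id_for_type(integer) RETURNS integer AS '
DECLARE
    t ALIAS FOR $1;
    v integer;
BEGIN
    -- The UPDATE locks the counter row, so two concurrent inserters
    -- of the same type cannot be handed the same value.
    UPDATE type_counters SET last_id = last_id + 1 WHERE data_type = t;
    SELECT INTO v last_id FROM type_counters WHERE data_type = t;
    RETURN v;
END;
' LANGUAGE 'plpgsql';
```

Usage would be something like INSERT INTO table_data (type, id, ...) VALUES (X, next_id_for_type(X), ...). Unlike sequences, this gives gap-free per-type numbering, at the cost of serializing writers of the same type.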
[GENERAL] Performance problem with correlated sub-query
I have created a web app that stores and displays all the messages from my database maintenance jobs that run each night. The web app uses Java servlets and has PostgreSQL 7.0 as the back end.

When the user requests the first page, he gets a list of all the servers with maintenance records in the database, and a drop down list of all the dates of maintenance records. If the user chooses a date first, then the app uses a prepared statement with the date contained in a parameter, and this executes very quickly – no problems.

However, if the web page user does not choose a date, then the app uses a correlated sub-query to grab only the current (latest) day's maintenance records. The query that is executed is:

select servername, databasename, message from messages o where o.date_of_msg =
    (select max(date_of_msg) from messages i where i.servername = o.servername);

And this is a dog. It takes 15 – 20 minutes to execute the query (there are about 200,000 rows in the table). I have an index on (servername, date_of_msg), but it doesn't seem to be used in this query.

Is there a way to improve the performance on this query?

Thanks,
Steve Howard
Re: [GENERAL] Postgre and Web Request
> > Depending on your web development environment (java, php, .NET) etc, you should be able to use some mechanism that will provide a pool of connections to the database. Each request does not open a new connection (and then release it), but instead gets a connection from the pool to use, and returns it back to the pool when done.
>
> Where can I find some examples for connection pooling with php? Or must I just use persistent connections?

Use pgpool (ftp://ftp.sra.co.jp/pub/cmd/postgres/pgpool/pgpool-1.1.tar.gz).
--
Tatsuo Ishii

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Postgre and Web Request
Stefan Sturm wrote:
> Hello,
>> Depending on your web development environment (java, php, .NET) etc, you should be able to use some mechanism that will provide a pool of connections to the database. Each request does not open a new connection (and then release it), but instead gets a connection from the pool to use, and returns it back to the pool when done.
> Where can I find some examples for connection pooling with php? Or must I just use persistent connections?

php handles connection pooling more or less automatically ... as long as you use pg_pconnect() instead of pg_connect(). You have to be careful using connection pooling. For example, if you create a temporary table and forget to delete it when the script completes, the next time the script runs, it's likely that the connection will be reused and the script will cause an error because the temp table already exists.

Here's some docs at the php web site that may help:
http://us4.php.net/manual/en/features.persistent-connections.php

--
Bill Moran
Potential Technologies
http://www.potentialtech.com

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Postgre and Web Request
On Wed, 2004-04-28 at 17:58, Chris Browne wrote:
> [EMAIL PROTECTED] ("Kilmer C. de Souza") writes:
> > Oww ... sorry man ... I made a mistake ... there are 10.000 users and 1.000 of the 10.000 try to access the database at the same time. Can you help me again with this condition?
>
> The issues don't really change. Opening 1000 concurrent connections means spawning 1K PostgreSQL processes, which will reserve a pile of memory, and cause a pretty severe performance problem.

I think you need some qualifiers on that statement, since opening the processes themselves should cause little to no problem at all given the right hardware. The main database I work on is currently set to handle up to 825 simultaneous connections during peak times, and that is with perl dbi style connection pooling. If it weren't for i/o issues, I'm pretty sure PostgreSQL would have no problems at all running that load, which really only means we need to get a faster disk system set up. (Currently the data and wal live on a single 10,000 rpm SCSI drive.)

While I agree with everyone else in this thread that the OP is not likely to ever need such a high connection count, there's no reason that PostgreSQL can't support it given you have enough RAM, fast enough disks, and you don't shoot yourself in the foot with FK/locking issues in the app.

Robert Treat
--
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Postgre and Web Request
Hello,

> Depending on your web development environment (java, php, .NET) etc, you should be able to use some mechanism that will provide a pool of connections to the database. Each request does not open a new connection (and then release it), but instead gets a connection from the pool to use, and returns it back to the pool when done.

Where can I find some examples for connection pooling with php? Or must I just use persistent connections?

Greetings,
Stefan Sturm

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Partial index question
Paul Thomas wrote:
> On 29/04/2004 09:53 Anton Nikiforov wrote:
>> looks like after a few tests that i've done i'm confused more than before. I did create a table and inserted 20 random records of two different types of data into it.
> Did you analyze the table afterwards?

Hello and thanks for the comment. Yes, I did analyze. Also thanks to Bruno Wolff III for his comments; I found it easier to use his suggestion. But while I was making my tests I found a way to use partial or complex indexes, yet they make my selects slower than having no indexes at all :( (I was selecting *, not max()). But I'll continue my tests.

--
Best regards,
Anton Nikiforov