[GENERAL] Urgent !!! Please Help Me
Hello everybody,

I have deployed PostgreSQL version 7.4.2 on Debian Linux 3.2 and it was running very well. Now it crashes when executing a single SELECT statement. Details are as follows:

- DB dump size before this problem arose: 95 MB
- DB encoding is "EUC_JP"

After some investigation the problem is narrowed down:

- A table named t60 has 411120 rows.
- There is no index on the t60 table.
- t60 has columns named c1, c2, c3.
- The statement "SELECT c1, c3 FROM t60" returns all rows of the specified columns, but "SELECT c2 FROM t60" fails with errors like:
  1. "invalid memory alloc request size 82127290"
  2. "ERROR: out of memory  DETAIL: Failed on request of size 875573295"

I then built a B-tree index on t60, and "REINDEX t60" fails with "ERROR: index row requires 336336 bytes, maximum size is 8191".

I also tried to make a full dump of the DB, which failed, but I successfully took schema-by-schema dumps except for the schema that holds the t60 table. I then copied the whole pgsql folder (where pgsql is installed) to another computer with the same configuration, started the server there successfully, and repeated all the operations above on that copy, but got the same errors again.

I have now restored the DB from a previous backup dump, but my client says they have entered a lot of data into t60 since then and there is no way to re-enter it.

I would be glad if anybody could let me know ASAP:

1. Why is this problem happening?
2. What is the solution to restore the full database?

Regards,
R.I. Russell
Re: [GENERAL] Dynamic function execution?
On Mon, Mar 13, 2006 at 10:45:47PM -0800, Nick Johnson wrote:
> Can anyone provide me with some direction on how to write a function
> I can load into postgres that will execute a function specified by
> OID (or regproc/regprocedure) at runtime, with type safety? I've been
> able to write such a function in C, but I was unable to figure out
> how to determine the parameters that the specified function expects,
> so I can prevent calling a function that doesn't match the expected
> signature (which segfaults postgres).

Does the calling function have to be written in C? In PL/pgSQL you could easily query pg_proc with the oid to get the called function's name, argument types, etc., then build an appropriate string to EXECUTE.

In C you could use SearchSysCache() and SysCacheGetAttr(); search through the source code for examples of calls to those functions with a first argument of PROCOID.

Why do you need to do this? What problem are you trying to solve?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
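The look-up-then-verify idea Michael describes (fetch the function's metadata, check the expected signature, only then dispatch) can be sketched outside the database. This is a hedged Python analogy, not PostgreSQL's API: the registry and the numeric ids standing in for OIDs are invented for illustration.

```python
import inspect

# Hypothetical stand-in for pg_proc: maps an "oid" to a callable.
REGISTRY = {
    101: lambda a, b: a + b,
    102: lambda s: s.upper(),
}

def call_checked(oid, *args):
    """Look up a function by id and call it only if the arguments
    match its signature -- dynamic dispatch with a safety check,
    done via introspection instead of a pg_proc lookup."""
    fn = REGISTRY.get(oid)
    if fn is None:
        raise LookupError("no function with oid %d" % oid)
    sig = inspect.signature(fn)
    try:
        sig.bind(*args)  # raises TypeError on an arity mismatch
    except TypeError as exc:
        raise TypeError("signature mismatch for oid %d: %s" % (oid, exc))
    return fn(*args)

print(call_checked(101, 2, 3))   # -> 5
print(call_checked(102, "hi"))   # -> HI
```

In the real C version the equivalent of `sig.bind` would be comparing the caller-supplied argument types against `pronargs`/`proargtypes` fetched from the syscache.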
Re: [GENERAL] [pgsql-advocacy] Wisconsin Circuit Court Access (WCCA) on
On Mon, 2006-03-13 at 13:27 -0600, Kevin Grittner wrote:
> Even more important is the fast response we have had when posting
> problems to the lists. We have normally had a fix within 24 hours.
> Frankly, the support has been amazing.

Kevin, well done. We've all watched your progress with interest.

The reason you've got excellent support is because of the detailed postings you've made, together with responses to all replies. Doing all your homework before posting is essential; unfortunately many people don't do this and then leave disappointed.

Best Regards,
Simon Riggs
Re: [GENERAL] Update value to "the first character is capital and
Have you tried the initcap function?

select initcap('abcd efgh');
  initcap
-----------
 Abcd Efgh

John

Emi Lu wrote:
> Hello all,
>
> Does anyone have available plpgsql code to update all capital letters in a
> column to "the first character is capital and the rest is small"?
>
> For example, in tableA(id, description):
> 001, 'ZHANG ZHE XIN' => 'Zhang Zhe Xin'
> 002, 'LIU, WEI-HUAI' => 'Liu, Wei-Huai'
> 003, 'GU & WEI. NAN (CE SHI) & TOMMORROW' => 'Gu & Wei. Nan (Ce Shi) & Tommorrow'
>
> Thanks a lot!
> Ying
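For readers doing the same transformation outside the database, initcap's behavior can be approximated in a few lines of Python. This is a sketch of the semantics (a new word starts after any non-alphanumeric character), not PostgreSQL's actual C implementation:

```python
def initcap(s: str) -> str:
    """Uppercase the first letter of each word, lowercase the rest.
    A word boundary is any non-alphanumeric character, which is why
    delimiters like space, comma, '-', '(' all restart capitalization."""
    out = []
    prev_alnum = False
    for ch in s:
        if ch.isalnum():
            out.append(ch.lower() if prev_alnum else ch.upper())
            prev_alnum = True
        else:
            out.append(ch)
            prev_alnum = False
    return "".join(out)

print(initcap('LIU, WEI-HUAI'))  # -> Liu, Wei-Huai
```

Note that `str.title()` is not a drop-in substitute: its handling of apostrophes differs from initcap's alphanumeric-boundary rule.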
[GENERAL] Utility of GRANT EXECUTE
It seems that any user has the right to execute a function, whether or not it has been granted the EXECUTE privilege on it. Even a REVOKE EXECUTE has no impact. A privilege error is raised only if the function tries to access an object (e.g. a table) for which the user doesn't have the appropriate privilege(s).

Is there any utility to GRANT EXECUTE then?

Thanks,
Paul
Re: [GENERAL] Utility of GRANT EXECUTE
On Tue, Mar 14, 2006 at 09:24:52AM +0100, Paul Mackay wrote:
> It seems that any user has the right to execute a function, whether or not
> it has been granted the EXECUTE privilege on it. Even a REVOKE EXECUTE has
> no impact. A privilege error will be raised only if the function tries to
> access an object (e.g. a table) for which the user doesn't have the
> appropriate privilege(s).

Revoking EXECUTE from an individual user has no effect if public still has privileges, which it does by default.

> Is there any utility to the GRANT EXECUTE then ?

If you revoke public's privileges then GRANT EXECUTE has an effect.

test=> create function foo() returns integer as 'select 1' language sql;
CREATE FUNCTION
test=> revoke all on function foo() from public;
REVOKE
test=> grant execute on function foo() to user1;
GRANT
test=> \c - user1
You are now connected as new user "user1".
test=> select foo();
 foo
-----
   1
(1 row)

test=> \c - user2
You are now connected as new user "user2".
test=> select foo();
ERROR:  permission denied for function foo

-- 
Michael Fuhr
Re: [GENERAL] Transaction eating up all RAM
>> I have a stored proc that retrieves a bunch of data, stores it in a temp
>> table, computes all sorts of totals/averages/whatnots from the temp
>> table, and inserts the results in another table. It works fine (except I
>> don't like wrapping all SQL statements in 'execute'), but multiple calls
>> to that proc from another procedure cause excessive memory usage
>> (upwards of 400M), and the server eventually runs out of swap space. I
>> believe this is because PG caches transactions in RAM, and this
>> particular one is a bit too big.

> Is that multiple simultaneous calls? Maybe you are over-committing your
> sort memory. If you can reproduce the out-of-memory behavior from a
> single backend, that argues for a memory leak.

Single backend.

> p.s. you can create one function temp_tables_init(), called after
> connecting to the backend (and not in a transaction), which creates all
> temp tables for the process. If you do that and remember to truncate the
> tables (not drop), you can use non-dynamic pl/pgsql calls.

That's an interesting option... it would make our PHP frontend a bit more complex, though. Obviously performance would be better in this case, as query plans will be pre-compiled.

Peter
Re: [GENERAL] Utility of GRANT EXECUTE
Is there a way to change the default privilege on functions, i.e. so that, like for tables, only the creator has privileges on it by default?

Thanks,
Paul

On 3/14/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 14, 2006 at 09:24:52AM +0100, Paul Mackay wrote:
> > It seems that any user has the right to execute a function, whether or not
> > it has been granted the EXECUTE privilege on it. Even a REVOKE EXECUTE has
> > no impact. A privilege error will be raised only if the function tries to
> > access an object (e.g. a table) for which the user doesn't have the
> > appropriate privilege(s).
>
> Revoking EXECUTE from an individual user has no effect if public
> still has privileges, which it does by default.
>
> > Is there any utility to the GRANT EXECUTE then ?
>
> If you revoke public's privileges then GRANT EXECUTE has an effect.
>
> test=> create function foo() returns integer as 'select 1' language sql;
> CREATE FUNCTION
> test=> revoke all on function foo() from public;
> REVOKE
> test=> grant execute on function foo() to user1;
> GRANT
> test=> \c - user1
> You are now connected as new user "user1".
> test=> select foo();
>  foo
> -----
>    1
> (1 row)
>
> test=> \c - user2
> You are now connected as new user "user2".
> test=> select foo();
> ERROR:  permission denied for function foo
>
> --
> Michael Fuhr
[GENERAL] stored procedure
I am wondering how I could create a stored procedure using PostgreSQL. I hope someone could give me directions for that.

Thanks,
Mary
Re: [GENERAL] stored procedure
Mary wondered:
> I am wondering how i could create stored procedure using postgresql

See chapters 35ff in the Server Programming manual @ http://www.postgresql.org/docs/8.1/interactive/server-programming.html

Best wishes

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
When I visit a mosque, I show my respect by taking off my shoes. I follow the customs, just as I do in a church, synagogue or other holy place. But if a believer demands that I, as a nonbeliever, observe his taboos in the public domain, he is not asking for my respect, but for my submission. And that is incompatible with a secular democracy.
Re: [GENERAL] Urgent !!! Please Help Me
r irussel wrote:
> Hello Every body: I have implemented psql version 7.4.2 on Debian linux
> version 3.2 and it was running very well. At this moment it is being
> crashed when executing a single select statement.

You really should be running something later than 7.4.2 - there are a lot of bug-fixes between that and 7.4.12.

> DB Dump size before this problem is arised: 95 MB
> DB ENCODING is "EUC_JP"
> After some investigation the problem is narrowed down:
> -- A table namly t60 has 411120 rows.
> -- There is no Index for the t60 table.
> -- t60 has column named c1, c2, c3.
> -- Statement "SELECT c1, c3 FROM t60" Returns All rows of specified
> column. But when execute Statement "SELECT c2 FROM t60" causes Error like
> 1. "Invalid memory alloc request size 82127290"
> 2. "ERROR: out of memory DETAIL: Failed on request of size 875573295"

It looks like column c2 on t60 has been corrupted on at least one row. Have you experienced any crashes on this database?

Anyway - the simplest thing to do is to copy the good rows into another table and skip only the damaged data.

1. CREATE TABLE new_t60 AS SELECT * FROM t60 LIMIT 0;
2. INSERT INTO new_t60 SELECT * FROM t60 WHERE c1 BETWEEN ??? AND ???
   Adjusting the ??? will let you work around the problem row(s). Column c1
   might not be the best choice - your primary key is what you want to use.
3. Dump the new table's data.
4. Drop table t60 (remove any foreign-key references and turn off triggers first).
5. Recreate table t60.
6. Restore new_t60's data into t60 and restore foreign-key references etc.

Then, you can dump the new table and restore it with just the damaged row(s) needing to be replaced/repaired.

Then, you need to upgrade to 7.4.12 and also figure out why this happened. Have you had crashes? Is your RAM good? Are your disks syncing when they say they are?

-- 
Richard Huxton
Archonet Ltd
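Adjusting the ??? bounds in step 2 is effectively a manual binary search for the damaged row(s). A minimal sketch of that narrowing logic, with a plain Python predicate standing in for actually running the bounded SELECT (the "bad key" 337 here is invented for the demo):

```python
def find_bad_key(lo, hi, range_ok):
    """Binary-search for a single corrupted key in [lo, hi].
    range_ok(a, b) plays the role of running
    SELECT * FROM t60 WHERE c1 BETWEEN a AND b and reporting
    whether it succeeded without an error."""
    while lo < hi:
        mid = (lo + hi) // 2
        if range_ok(lo, mid):
            lo = mid + 1   # lower half is clean; the bad row is above mid
        else:
            hi = mid       # failure reproduced in the lower half
    return lo

# Stand-in predicate: pretend key 337 is the one damaged row.
bad = 337
ok = lambda a, b: not (a <= bad <= b)
print(find_bad_key(1, 1000, ok))   # -> 337
```

Each probe halves the candidate range, so even a table of 411120 rows needs only about 19 bounded SELECTs to isolate one bad row; with several damaged rows you repeat the search on the sub-ranges that still fail.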
[GENERAL] Clustered PostgreSQL
Is it possible to cluster PostgreSQL? If yes, where can I find resource information on how to implement it?

-- 
http://jojopaderes.multiply.com
http://jojopaderes.wordpress.com
Re: [GENERAL] Clustered PostgreSQL
Hi,

On Tue, 14 Mar 2006, Jojo Paderes wrote:
> Is it possible to cluster PostgreSQL? If yes where can I find the
> resource information on how to implement it?

You can use pgcluster: http://pgcluster.projects.postgresql.org/ (I haven't tried it yet)

Of course, you can install any clustering software (like Red Hat Cluster Suite) if you want an active-passive cluster.

Regards,

-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] Clustered PostgreSQL
Jojo Paderes wrote:
> Is it possible to cluster PostgreSQL? If yes where can I find the
> resource information on how to implement it?

Slony-I is another cluster software for PostgreSQL. What OS are you running? I just deployed it on Solaris 9, between a Sun Ultra Enterprise E450 and a Sun Ultra 30.

It's really great. Currently, it only supports single-master-to-multiple-slaves replication. Meaning, the single master is the only node where database updates can occur; the changes are then propagated to the slave nodes.

http://gborg.postgresql.org/project/slony1/projdisplay.php
Re: [GENERAL] Clustered PostgreSQL
If you're using JDBC, C-JDBC (http://c-jdbc.objectweb.org/) might be a good option.

Regards,
Thomas Hallgren

Devrim GUNDUZ wrote:
> On Tue, 14 Mar 2006, Jojo Paderes wrote:
>> Is it possible to cluster PostgreSQL? If yes where can I find the
>> resource information on how to implement it?
>
> You can use pgcluster: http://pgcluster.projects.postgresql.org/
> (I haven't tried it yet)
>
> Of course, you can install any clustering software (like Red Hat
> Cluster Suite) if you want an active-passive cluster.
Re: [GENERAL] Clustered PostgreSQL
Hi,

On Tue, 14 Mar 2006, Thomas Hallgren wrote:
> If you're using JDBC, the C-JDBC (http://c-jdbc.objectweb.org/) might be a
> good option.

Definitely. I read some papers on C-JDBC 3 years ago and was really satisfied with its features.

Regards,
-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on PostgreSQL
One doesn't 'install' oracle. That implies you have control of the situation. One attempts to convince it to condescend to install itself onto your machine.

Of course, this is like convincing my 3 year old to go to bed on time. Such powers of persuasion are not common.

On 3/13/06 5:41 PM, "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
> On Mon, 2006-03-13 at 15:26, Scott Marlowe wrote:
>> On Mon, 2006-03-13 at 15:16, Tony Caduto wrote:
>>> Kevin Grittner wrote:
>>>> Overall, PostgreSQL has been faster than the commercial product
>>>> from which we converted.
>>>
>>> Kevin,
>>> Are you allowed to say what commercial product you converted from?
>>
>> And whether he can or not, this would make a GREAT case study for the
>> advocacy folks.
>
> Yeah, anyone who says installing postgresql is hard should have to
> install Oracle first. Or compile MySQL from source. :)
[GENERAL] Resetting privileges on a table
I have a database that has a few tables with privileges granted by a user who no longer works here. I am the owner of these tables and the owner of the database. If I do any granting/revoking on these tables, my actions do not seem to affect the privs set by this other user. The privileges I have set show up after the original user's privileges in the \z output.

How can I clean this up? Would dropping the user have any effect? This is on 7.4 if that makes a difference.

-- 
Bryan White, ArcaMax Publishing Inc.

The world ends when your dead. Until then you got more punishment in store. Stand it like a man... And give some back. -- Al Swearengen
Re: [GENERAL] Dynamic function execution?
On 14/03/2006, at 12:05 AM, Michael Fuhr wrote:
> On Mon, Mar 13, 2006 at 10:45:47PM -0800, Nick Johnson wrote:
>> Can anyone provide me with some direction on how to write a function
>> I can load into postgres that will execute a function specified by
>> OID (or regproc/regprocedure) at runtime, with type safety? I've been
>> able to write such a function in C, but I was unable to figure out
>> how to determine the parameters that the specified function expects,
>> so I can prevent calling a function that doesn't match the expected
>> signature (which segfaults postgres).
>
> Does the calling function have to be written in C? In PL/pgSQL you
> could easily query pg_proc with the oid to get the called function's
> name, argument types, etc., then build an appropriate string to
> EXECUTE.

I considered this, but I'd rather not do it by string manipulation and dynamic SQL - it seems a kludge.

> In C you could use SearchSysCache() and SysCacheGetAttr(); search
> through the source code for examples of calls to those functions
> with a first argument of PROCOID.

Thanks for the tips.

> Why do you need to do this? What problem are you trying to solve?

I want to associate Postgres functions with rows of a table (e.g., a table column of datatype regproc or regprocedure) and be able to execute the function associated with that row in a query.

-Nick Johnson
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on PostgreSQL
On Tue, 2006-14-03 at 07:45 -0500, Andrew Rawnsley wrote:
> One doesn't 'install' oracle. That implies you have control of the
> situation. One attempts to convince it to condescend to install itself onto
> your machine.
>
> Of course, this is like convincing my 3 year old to go to bed on time. Such
> powers of persuasion are not common.
>
> On 3/13/06 5:41 PM, "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
>> Yeah, anyone who says installing postgresql is hard should have to
>> install Oracle first. Or compile MySQL from source. :)

I'll agree with that; we had a tech who tried for a week to install Oracle, only to resort to buying third-party books to figure out how to configure it. Managing it is another scary task: when that guy left, I soon discovered the jumble of tools required to administer it. We stopped supporting Oracle when our last customer using it stopped using it.

I use PostgreSQL for most projects but have been supporting MySQL for customers who request it, and usually build any new libraries to support either transparently, just by changing the driver and user credentials in the config file. The hard part is usually getting MySQL to do what I expect, and what PostgreSQL does by default.
Re: [GENERAL] Utility of GRANT EXECUTE
On Tue, Mar 14, 2006 at 09:57:54AM +0100, Paul Mackay wrote:
> Is there a way to change the default privilege on functions, i.e. so that,
> like for tables, only the creator has privileges on it by default?

Not that I'm aware of. You could revoke USAGE on the functions' schema so attempts to call the functions would fail with "permission denied for schema", but that might be too sweeping a solution. The idea of having default privileges has come up before; the developers' TODO list has an item that mentions "GRANT SELECT ON NEW TABLES."

Is there a particular problem you're trying to solve?

-- 
Michael Fuhr
Re: [GENERAL] Clustered PostgreSQL
We're currently using Ubuntu Breezy Hedgehog for our existing PostgreSQL database server (8.0.x). A Java-based application uses the database server via JDBC.

I'm not sure if this question is within the context, but I would like to know if clustering can provide a hot-standby instance for a production PostgreSQL server. Say the main db server goes down: could a hot standby with mirrored data from the main server be used to replace the defective db server?

On 3/14/06, Louis Gonzales <[EMAIL PROTECTED]> wrote:
>
> Slony-I is another cluster software for postgreSQL. What OS are you
> running? I just deployed it on Solaris 9, between a Sun Ultra
> Enterprise E450 and a Sun Ultra 30.
>
> It's really great. Currently, it only supports Single
> Master-to-multipleSlaves. Meaning, the single master is the only node
> where database updates can occur, the changes are then propagated to the
> slave nodes.
>
> http://gborg.postgresql.org/project/slony1/projdisplay.php

-- 
http://jojopaderes.multiply.com
http://jojopaderes.wordpress.com
Re: [GENERAL] Resetting privileges on a table
On Mar 14, 2006, at 9:54 AM, Bryan White wrote:
> How can I clean this up. Would dropping the user have any effect? This is
> on 7.4 if that makes a difference.

Dropping the user will leave dangling permissions (i.e., Pg will report them as being granted to user "103" or whatever id that user happened to have).

Not sure why you're not able to revoke permissions; that seems curious to me.
[GENERAL] open file counts in 8.1.2?
We're trying to make sense of the number of open files on an HP-UX 11.23 system that's getting several new 8.1.2 clusters, and in particular why the numbers appear to be significantly larger than our 7.4 clusters on similar hardware. Would there be anything particular to 8.1.2 over 7.4 that would lead to a larger number of open files?

Ed
Re: [GENERAL] open file counts in 8.1.2?
"Ed L." <[EMAIL PROTECTED]> writes:
> We're trying to make sense of the number of open files on an
> HP-UX 11.23 system that's getting several new 8.1.2 clusters,
> and in particular why the numbers appear to be significantly
> larger than our 7.4 clusters on similar hardware. Would there
> be anything particular to 8.1.2 over 7.4 that would lead to a
> larger number of open files?

This is much too handwavy to provide an intelligent comment on. Get a copy of "lsof" and find out exactly which processes have how many files open, then we'll have some idea what's going on...

regards, tom lane
Re: [GENERAL] Resetting privileges on a table
Vivek Khera <[EMAIL PROTECTED]> writes:
> not sure why you're not able to revoke permissions. that seems
> curious to me.

You need to revoke them as that user, likely. REVOKE really means "revoke grants I made", not "revoke any grant anybody made".

regards, tom lane
Re: [GENERAL] open file counts in 8.1.2?
On Tuesday March 14 2006 10:25 am, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
>> We're trying to make sense of the number of open files on an
>> HP-UX 11.23 system that's getting several new 8.1.2
>> clusters, and in particular why the numbers appear to be
>> significantly larger than our 7.4 clusters on similar
>> hardware. Would there be anything particular to 8.1.2 over
>> 7.4 that would lead to a larger number of open files?
>
> This is much too handwavy to provide an intelligent comment
> on. Get a copy of "lsof" and find out exactly which processes
> have how many files open, then we'll have some idea what's
> going on...

We have 3 clusters with 24K, 34K, and 47K open files according to lsof. These same clusters have 164, 179, and 210 active connections, respectively. Their schemas, counting the number of user and system entries in pg_class as a generously rough measure of potential open files, contain roughly 2000 entries each. Those open files seem pretty plausible; they're just much higher than what we see on the older systems.

Ed
Re: [GENERAL] open file counts in 8.1.2?
On Tuesday March 14 2006 10:31 am, Ed L. wrote:
> On Tuesday March 14 2006 10:25 am, Tom Lane wrote:
>> This is much too handwavy to provide an intelligent comment
>> on. Get a copy of "lsof" and find out exactly which
>> processes have how many files open, then we'll have some
>> idea what's going on...
>
> We have 3 clusters with 24K, 34K, and 47K open files according
> to lsof. These same clusters have 164, 179, and 210 active
> connections, respectively. Their schemas, counting the number
> of user and system entries in pg_class as a generously rough
> measure of potential open files, contain roughly 2000 entries
> each. Those open files seem pretty plausible, they're just
> much higher than what we see on the older systems.

One lsof curiosity is that one cluster seems to have its partition directory listing open about 10K times, including many times by the same backend process:

COMMAND   PID   USER    FD    TYPE  DEVICE      SIZE/OFF  NODE  NAME
postgres  4023  db1dba  49u   REG   64,0x10001     16384  7435  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  62u   REG   64,0x10001      8192  7673  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  68u   REG   64,0x10001     16384  7601  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  78u   REG   64,0x10001     16384  7379  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  79u   REG   64,0x10001     16384  7380  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  135u  REG   64,0x10001    352256  7305  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  136u  REG   64,0x10001    262144  7640  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  137u  REG   64,0x10001    262144  7642  /db1 (/dev/vgdb1/lvol1)
postgres  4023  db1dba  138u  REG   64,0x10001    262144  7643  /db1 (/dev/vgdb1/lvol1)

Ed
Re: [GENERAL] open file counts in 8.1.2?
"Ed L." <[EMAIL PROTECTED]> writes:
> We have 3 clusters with 24K, 34K, and 47K open files according to
> lsof. These same clusters have 164, 179, and 210 active
> connections, respectively. Their schemas, counting the number
> of user and system entries in pg_class as a generously rough
> measure of potential open files, contain roughly 2000 entries
> each. Those open files seem pretty plausible, they're just much
> higher than what we see on the older systems.

Hm. AFAICT from the CVS logs, 7.4.2 and later should have about the same behavior as 8.1.* in this regard. What version is the older installation exactly?

You can always reduce max_files_per_process if you want more conservative behavior.

regards, tom lane
Re: [GENERAL] open file counts in 8.1.2?
"Ed L." <[EMAIL PROTECTED]> writes:
> One lsof curiosity is that one cluster seems to have its
> partition directory listing open about 10K times, including
> many times by the same backend process:

Nah, that's just an lsof aberration on HPUX --- it doesn't always tell the truth about files' names. Notice the NODEs are all different, so these really are different files. You could use ls -i if you want to determine what they actually are.

regards, tom lane
Re: [GENERAL] open file counts in 8.1.2?
On Tuesday March 14 2006 10:46 am, Tom Lane wrote:
> Hm. AFAICT from the CVS logs, 7.4.2 and later should have
> about the same behavior as 8.1.* in this regard. What version
> is the older installation exactly?

They are machines each with a mix of 7.3.4, 7.4.6, and 7.4.8. I'm working on an lsof comparison to find specific diffs. It would seem the factors driving the number of open files are current connections, # of relations, indices, etc. Am I correct about that?

> You can always reduce max_files_per_process if you want more
> conservative behavior.

Ah, thanks. I'm not particularly worried about this since the numbers on the new system somewhat make sense to me, but others here are concerned, so I'm trying to explain/justify/understand better.

If we want to handle 16 clusters on this one box, each with 300 max_connections and 2000 relations, would it be ball-park reasonable to say that worst case we might have 300 backends with ~2000 open file descriptors each (300 * 2000 = 600K open files per cluster, 600K * 16 clusters = 10M open files)? Increasing the kernel parameter 'nfiles' (max total open files on system) to something like 10M seems to make some of the ITRC HP gurus gasp. (I suspect we'll hit I/O limits long before open files become an issue.)

Ed
Re: [GENERAL] Resetting privileges on a table
Tom Lane wrote:
> You need to revoke them as that user, likely. REVOKE really means
> "revoke grants I made", not "revoke any grant anybody made".

OK, I tried logging in as that user. Oddly, after the revoke the only grant that disappeared was one I created. Maybe it has something to do with the 'grant option' permissions which seem to have been created here.

Transcript (pconner is the obsolete user, bryan is my account):

ec=# \z bulkuploadcfg
                 Access privileges for database "ec"
 Schema |     Table     |                        Access privileges
--------+---------------+------------------------------------------------------------------
 public | bulkuploadcfg | {pconner=a*r*w*d*R*x*t*/pconner,=arwdRxt/pconner,=arwdRxt/bryan}
(1 row)

ec=# select current_user;
 current_user
--------------
 pconner
(1 row)

ec=# revoke all on bulkuploadcfg from public;
REVOKE
ec=# \z bulkuploadcfg
          Access privileges for database "ec"
 Schema |     Table     |                Access privileges
--------+---------------+---------------------------------------------------
 public | bulkuploadcfg | {pconner=a*r*w*d*R*x*t*/pconner,=arwdRxt/pconner}
(1 row)

ec=# revoke all on bulkuploadcfg from pconner;
REVOKE
ec=# \z bulkuploadcfg
          Access privileges for database "ec"
 Schema |     Table     |                Access privileges
--------+---------------+---------------------------------------------------
 public | bulkuploadcfg | {pconner=a*r*w*d*R*x*t*/pconner,=arwdRxt/pconner}
(1 row)

ec=# revoke grant option for all on bulkuploadcfg from pconner;
REVOKE
ec=# \z bulkuploadcfg
          Access privileges for database "ec"
 Schema |     Table     |                Access privileges
--------+---------------+---------------------------------------------------
 public | bulkuploadcfg | {pconner=a*r*w*d*R*x*t*/pconner,=arwdRxt/pconner}
(1 row)

-- 
Bryan
Re: [GENERAL] Clustered PostgreSQL
Do you know if pgcluster will work with PostgreSQL 8.1.3? Brendan Duddridge | CTO | 403-277-5591 x24 | [EMAIL PROTECTED] ClickSpace Interactive Inc. Suite L100, 239 - 10th Ave. SE Calgary, AB T2G 0V9 http://www.clickspace.com On Mar 14, 2006, at 6:18 AM, Devrim GUNDUZ wrote: Hi, On Tue, 14 Mar 2006, Jojo Paderes wrote: Is it possible to cluster PostgreSQL? If yes where can I find the resource information on how to implement it? You can use pgcluster: http://pgcluster.projects.postgresql.org/ (I haven't tried it yet) Of course, you can install any clustering software (like Red Hat Cluster Suite) if you want an active-passive cluster. Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/ ---(end of broadcast)--- TIP 6: explain analyze is your friend smime.p7s Description: S/MIME cryptographic signature
Re: [GENERAL] open file counts in 8.1.2?
"Ed L." <[EMAIL PROTECTED]> writes: > If we want to handle 16 clusters on this one box, each > with 300 max_connections and 2000 relations, would it be > ball-park reasonable to say that worst case we might have 300 > backends with ~2000 open file descriptors each (300 * 2000 = > 600K open files per cluster, 600K * 16 clusters = 10M open > files)? No, an individual backend should never exceed max_files_per_process open files (1000 by default). It will feel free to go up that high, though, if it has reason to touch that many database files over its lifetime. 1000 is probably much higher than you really need for reasonable performance; I'd be inclined to cut it to a couple hundred at most if you need to sustain large numbers of backends. I dunno what sort of penalties the kernel might have for millions of open files but there probably are some ... regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Dynamic function execution?
On Tue, Mar 14, 2006 at 07:21:51AM -0800, Nick Johnson wrote: > On 14/03/2006, at 12:05 AM, Michael Fuhr wrote: > >Why do you need to do this? What problem are you trying to solve? > > I want to associate Postgres functions with rows of a table (eg, a > table column of datatype regproc or regprocedure) and be able to > execute the function associated with that row in a query. Could you post an example? Others might be interested in seeing an application of something like that. -- Michael Fuhr ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Resetting priveleges on a table
Bryan White <[EMAIL PROTECTED]> writes: > ec=# \z bulkuploadcfg > Access privileges for database "ec" > Schema | Table |Access privileges > +---+-- > public | bulkuploadcfg | > {pconner=a*r*w*d*R*x*t*/pconner,=arwdRxt/pconner,=arwdRxt/bryan} > (1 row) Hm, this is 7.4.what exactly? The above should be an illegal state (assuming pconner is the table owner) because there is no grant option to bryan allowing him to grant anything to public. There was an old bug that would allow you to get into this state if bryan was a superuser (the system would allow him to grant privileges anyway), but according to the CVS logs we fixed that in 7.4RC1. This table wouldn't happen to be a holdover from a 7.4 beta version would it? Another possibility is that you did an ALTER TABLE OWNER after assigning some initial permissions. 7.4 had that command but it didn't do anything about changing the ACL list to match. I think you could have gotten to the above state if pconner were the original table owner and had done GRANT ALL TO PUBLIC, and then you altered table ownership to bryan and he also did GRANT ALL TO PUBLIC. Best solution might be to forcibly set the table's pg_class.relacl field to null (resetting all the permissions to default) and then grant what you want. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
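Tom's last suggestion, sketched as SQL (this must be run as a superuser; a direct catalog update is a last resort, and matching on relname alone can hit same-named tables in other schemas):

```sql
-- Reset the table's ACL to the default: owner has everything, nobody else.
UPDATE pg_class SET relacl = NULL WHERE relname = 'bulkuploadcfg';

-- Then grant back only what is actually wanted, e.g.:
GRANT SELECT, INSERT, UPDATE, DELETE ON bulkuploadcfg TO PUBLIC;
```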
[GENERAL] What's a good default encoding?
If you're going to be putting emdashes, letters with lines and circles above them, and similar stuff that's mostly European and American in a database, what's a good default encoding to use - UTF-8? CSN __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Resetting priveleges on a table
Tom Lane wrote:
> Bryan White <[EMAIL PROTECTED]> writes:
> > ec=# \z bulkuploadcfg
> >           Access privileges for database "ec"
> >  Schema |     Table     | Access privileges
> > --------+---------------+------------------------------------------------------------------
> >  public | bulkuploadcfg | {pconner=a*r*w*d*R*x*t*/pconner,=arwdRxt/pconner,=arwdRxt/bryan}
> > (1 row)
>
> Hm, this is 7.4.what exactly?  The above should be an illegal state
> (assuming pconner is the table owner) because there is no grant option
> to bryan allowing him to grant anything to public.

ec=# select version();
                                    version
--------------------------------------------------------------------------------
 PostgreSQL 7.4.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.2 20031022 (Red Hat Linux 3.3.2-1)
(1 row)

> There was an old bug that would allow you to get into this state if
> bryan was a superuser (the system would allow him to grant privileges
> anyway), but according to the CVS logs we fixed that in 7.4RC1.  This
> table wouldn't happen to be a holdover from a 7.4 beta version would it?

bryan is a super user.

> Another possibility is that you did an ALTER TABLE OWNER after
> assigning some initial permissions.  7.4 had that command but it didn't
> do anything about changing the ACL list to match.  I think you could
> have gotten to the above state if pconner were the original table owner
> and had done GRANT ALL TO PUBLIC, and then you altered table ownership
> to bryan and he also did GRANT ALL TO PUBLIC.

That would match the history.  A while ago I changed the owner of all tables to 'bryan'.  I just noticed the permission strangeness today.  I had some problems trying to load a dump of this database onto a system running 8.0.7 with no pconner user defined.  I decided it was time to clean this stuff up and to do that I had to go back to the source.

> Best solution might be to forcibly set the table's pg_class.relacl
> field to null (resetting all the permissions to default) and then
> grant what you want.

That seems to fix it.  Thanks!!!
-- Bryan ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Turn OFF Stats of Postgresql
Hi, I have a search engine, and to increase performance I want to turn off the stats in Postgresql, but I don't know if I should do it.  Should I do this?  Thanks.  Marcos

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
Re: [GENERAL] Turn OFF Stats of Postgresql
Hi Marcos!

You can also try to adapt these parameters to your config:

TUNING:
* First, see: http://www.lyris.com/lm_help/6.0/tuning_postgresql.html
* Adjust these params in your "/etc/sysctl.conf" to obtain the best performance:

kernel.shmmax=7000
kernel.shmall=1350
net.core.rmem_max=8388608
net.core.rmem_default=65536
net.core.wmem_max=8388608
net.core.wmem_default=65536

* Adapt and copy the "postgresql.conf" to your "/var/pgsql/data" directory.

Younes

Marcos wrote:
> Hi, I have a search engine and for increase the performance I want to
> turn off the stats of Postgresql but i don't know if i should do it.
> Should I make this?  Thanks.  Marcos
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
> http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
[GENERAL] audit system for parent-child tables
Hello list,

I have the need to audit a purchase order process that mainly deals with PO header and detail tables.  Users need to know at any given time how the purchase order looked, like a snapshot; I already log any change to a separate parallel table.  One way I could do it is to log each change in both tables: insert into the audit tables each row of the detail plus the row of the header.  But that way could generate a lot of data.  Any idea or suggestion on how to get this done in postgresql?

Thanks in advance

--
Sinceramente,
Josué Maldonado.
... "True pleasure lies in the search, more than in the explanation." -- Isaac Asimov

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
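A row-level trigger is one common way to capture these snapshots.  The sketch below is only illustrative: the table and column names (po_detail, po_id, line_no, qty, price) are hypothetical, and you would add a matching trigger on the header table.

```sql
-- Hypothetical audit table mirroring the detail table's key columns.
CREATE TABLE po_detail_audit (
    audited_at timestamp NOT NULL DEFAULT now(),
    operation  text      NOT NULL,  -- 'INSERT', 'UPDATE', or 'DELETE'
    po_id      integer   NOT NULL,
    line_no    integer   NOT NULL,
    qty        numeric,
    price      numeric
);

CREATE OR REPLACE FUNCTION po_detail_audit_fn() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''DELETE'' THEN
        INSERT INTO po_detail_audit (operation, po_id, line_no, qty, price)
        VALUES (TG_OP, OLD.po_id, OLD.line_no, OLD.qty, OLD.price);
        RETURN OLD;
    END IF;
    INSERT INTO po_detail_audit (operation, po_id, line_no, qty, price)
    VALUES (TG_OP, NEW.po_id, NEW.line_no, NEW.qty, NEW.price);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER po_detail_audit_trg
    AFTER INSERT OR UPDATE OR DELETE ON po_detail
    FOR EACH ROW EXECUTE PROCEDURE po_detail_audit_fn();
```

The snapshot of an order at time T is then the latest audit row per detail line with audited_at <= T, which avoids re-copying the whole order on every change.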
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
On 3/13/06 5:50 PM, "Kevin Grittner" <[EMAIL PROTECTED]> wrote: > The license for the commercial product contains a clause which > prohibits disclosing benchmarks of their product without their written > permission. (Heaven only knows why they would include such a clause.) > My comment is not based on any formal benchmarks, but on the graphs > produced by our monitoring of the production software under real loads, > and on the query metrics from our middle tier software in the production > environment. Even though it would be a huge stretch to call the > comparison a benchmark under these conditions, this is a litigious > society. I'm sure you understand my concern. > > Short of being compelled by law to open our records, I'm not > comfortable providing any performance comparison which names the > vendor. Hmm.. What vendor throws such a clause into all their licensing agreements. Sounds suspiciously like Microsoft... I'm not a lawyer, but I believe things such as what vendors a government entity is using is required by law to be public information. Wes ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Build failures on RedHat 3.0 with openssl/kerberos
I try to build 8.1.3 with:

  ./configure --prefix=/usr/local/pgsql8.1.3 --with-openssl --with-pam --enable-thread-safety

It fails the openssl test, saying openssl/ssl.h is unavailable.  Digging deeper, I find that it is because the test program with #include <openssl/ssl.h> is failing because it can't include krb5.h.  Based on another post, I tried adding "--with-krb5".  That explicitly aborted with it unable to find krb5.h.  I then tried:

  ./configure --prefix=/usr/local/pgsql8.1.3 --with-openssl --with-pam --enable-thread-safety --with-krb5 --with-includes=/usr/kerberos/include

Now it gets past both the openssl and kerberos tests, but bites the dust with:

  configure: error:
  *** Thread test program failed.  Your platform is not thread-safe.
  *** Check the file 'config.log' for the exact reason.
  ***
  *** You can use the configure option --enable-thread-safety-force
  *** to force threads to be enabled.  However, you must then run
  *** the program in src/tools/thread and add locking function calls
  *** to your applications to guarantee thread safety.

If I remove the --with-krb5, it works.  Why does enabling Kerberos break threads?  I haven't been able to find any issues in the archives with krb5 and threads.  Am I missing something here?

Wes

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
> > Short of being compelled by law to open our records, I'm not
> > comfortable providing any performance comparison which names the
> > vendor.
>
> Hmm.. What vendor throws such a clause into all their licensing
> agreements.  Sounds suspiciously like Microsoft...

Or Oracle, DB2, Sybase, Progress ...

> I'm not a lawyer, but I believe things such as what vendors a
> government entity is using is required by law to be public information.
>
> Wes
>
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Build failures on RedHat 3.0 with openssl/kerberos
Wes,

Did you try to ./configure w/out "--enable-thread-safety"?  I recently compiled postgreSQL 8.0.1 on Solaris and _needed_ --enable-thread-safety strictly for building Slony-I against postgresql with that feature enabled.

What is the reason you are compiling this _with_ the feature?  If it's necessary, then you may need to --with-includes= and/or --with-libs= with additional include directories, such as /usr/include:/usr/include/sys, wherever the thread .h files are for your OS.

This configure attempt could be failing because it can't locate the correct thread headers and/or libraries.

Wes wrote:
> I try to build 8.1.3 with:
> ./configure --prefix=/usr/local/pgsql8.1.3 --with-openssl --with-pam
> --enable-thread-safety
> It fails the openssl test, saying openssl/ssl.h is unavailable.  Digging
> deeper, I find that it is because the test program with #include is
> failing because it can't include krb5.h.  Based on another post, I tried
> adding "--with-krb5".  That explicitly aborted with it unable to find
> krb5.h.  I then tried:
> ./configure --prefix=/usr/local/pgsql8.1.3 --with-openssl --with-pam
> --enable-thread-safety --with-krb5 --with-includes=/usr/kerberos/include
> Now it gets past both the openssl and kerberos, but bites the dust with:
> configure: error:
> *** Thread test program failed.  Your platform is not thread-safe.
> *** Check the file 'config.log' for the exact reason.
> ***
> *** You can use the configure option --enable-thread-safety-force
> *** to force threads to be enabled.  However, you must then run
> *** the program in src/tools/thread and add locking function calls
> *** to your applications to guarantee thread safety.
> If I remove the --with-krb5, it works.  Why does enabling Kerberos break
> threads?  I haven't been able to find any issues in the archives with
> krb5 and threads.  Am I missing something here?
> Wes
>
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
Wes wrote:
> Hmm.. What vendor throws such a clause into all their licensing
> agreements.  Sounds suspiciously like Microsoft...
>
> I'm not a lawyer, but I believe things such as what vendors a
> government entity is using is required by law to be public information.
>
> Wes

I think we can safely assume it's either Oracle or M$ SQL server.  I am leaning towards Oracle because the state of WI just had a huge SNAFU with Oracle as their email provider.  They spent several million dollars on the Oracle groupware/email package only to find it did not work (disappearing emails etc).  Guess they didn't know about postfix and Open Exchange :-)

Tony

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
On 14/3/06 20:43, "Wes" <[EMAIL PROTECTED]> wrote:
> On 3/13/06 5:50 PM, "Kevin Grittner" <[EMAIL PROTECTED]> wrote:
>
> I'm not a lawyer, but I believe things such as what vendors a government
> entity is using is required by law to be public information.

Maybe in some jurisdictions, but that's not the same as Kevin naming the vendor in the same message or even thread as he has compared their performance with ours.

Anyway - whilst I'm emailing I'd like to congratulate Kevin on a successful project, and thank him for telling us about it.

Regards, Dave

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Possible infinite loop in query using bitmap scans
On Mar 13, 2006, at 5:25 PM, Casey Duncan wrote: [..] If I restart the postmaster, the query will complete in the expected time. Does the problem eventually start happening again? If so, after how long? How did you determine that the restart is relevant? Do you consistently see different (presumably better) performance after a restart than if you don't restart? I restarted postgres this morning and this time it didn't seem to help. That query has been running for several hours now. I'm going to let it go a while longer to see if it eventually completes, but I suspect it won't. So perhaps the prior restart was just dumb luck. I'll try some narrower queries as well. -Casey ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Possible infinite loop in query using bitmap scans
On Mar 14, 2006, at 1:31 PM, Casey Duncan wrote: On Mar 13, 2006, at 5:25 PM, Casey Duncan wrote: [..] If I restart the postmaster, the query will complete in the expected time. Does the problem eventually start happening again? If so, after how long? How did you determine that the restart is relevant? Do you consistently see different (presumably better) performance after a restart than if you don't restart? I restarted postgres this morning and this time it didn't seem to help. That query has been running for several hours now. I'm going to let it go a while longer to see if it eventually completes, but I suspect it won't. So perhaps the prior restart was just dumb luck. I'll try some narrower queries as well. Interestingly, I can only narrow the query by about an hour (i.e., a 23 hour span) before the plan changes to a simple index scan. That may explain why this only started hanging recently since it probably only started using the bitmap scan for a day interval recently. Let me know if there is anything I should do or any info you need to further pin this down. In the mean time I will disable bitmap scans for this query. Thanks. -Casey ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
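Disabling bitmap scans for the problem query needs no restart; this is the standard planner setting, shown here as a session-local sketch:

```sql
SET enable_bitmapscan = off;   -- affects only the current session
-- run the problem query here
SET enable_bitmapscan = on;    -- restore the default afterwards
```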
Re: [GENERAL] Build failures on RedHat 3.0 with openssl/kerberos
On 3/14/06 2:55 PM, "Louis Gonzales" <[EMAIL PROTECTED]> wrote: > Did you try to ./configure w/out "--enable-thread-safety?" I recently > compiled postgreSQL 8.0.1 on Solaris and _needed_ --enable-thread-safety > strictly for building Slony-I against postgresql with that feature enabled. > > What is the reason you are compiling this _with_ the feature? > If it's necessary, then you may need to --with-includes= and/or --with-libs= > with additional include directories, such as /usr/include:/usr/include/sys > where-ever the thread .h files are for your OS. > > This configure attempt could be failing, because it can't locate the > correct thread headers and/or libraries Why would I not want to specify enable-thread-safety? I want to be able to write threaded programs. --enable-thread-safety works fine until I enable --with-krb5, so it is finding the thread libraries. Wes ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Clustered PostgreSQL
Hi, On Tue, 2006-03-14 at 11:08 -0700, Brendan Duddridge wrote: > Do you know if pgcluster will work with PostgreSQL 8.1.3? I haven't tested but as far as I can read from web page, 1.5.0rc1 works on 8.1.1 -- They now have rc7, and maybe it works on 8.1.3, but I don't know... Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: PL/php, plPerlNG - http://www.commandprompt.com/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] question about postgresql time intervals
You will get better, faster answers by sending questions to a PostgreSQL mailing list.  By emailing me directly you may not get a timely response if I don't have time to answer.  Others can then answer and learn from the subsequent discussion.  I'm ccing this to pgsql-general.

On Mar 15, 2006, at 6:45, Linda wrote:
> Hi, Michael
>
> I have a question about the output format of the INTERVAL type in
> version 8.1.1.  In previous versions, I could do the following:
>
>   select (uptime::varchar)::interval from machine_info;
>
> where uptime is an INTEGER type, the number of seconds the machine has
> been up since last reboot.  This used to produce output in this format:
>   21 days 02:47:04
> Now in v8.1.1, the output format is
>   506:47:04
> How can I get the "justified" output as before?  Is there some setting
> of datestyle that affects the output?  I have tried specifying
> "interval day to second" but that doesn't work.  Using the new
> justify_hours function works, but is it possible to do something that
> will run on older versions of postgresql?
>
> Thanks, Linda
> --
> Linda Gray
> Unitrends Corporation
> 803.454.0300 ext. 241

justify_hours is also in 8.1 and should do what you want.

test=# select '506:47:04'::interval;
 interval
-----------
 506:47:04
(1 row)

test=# select justify_hours('506:47:04'::interval);
  justify_hours
------------------
 21 days 02:47:04
(1 row)

test=# select version();
                                    version
--------------------------------------------------------------------------------
 PostgreSQL 8.1.3 on powerpc-apple-darwin8.4.0, compiled by GCC powerpc-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5247)
(1 row)

Michael Glaesemann
grzm myrealbox com

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
Re: [GENERAL] [pgsql-advocacy] Wisconsin Circuit Court Access (WCCA) on
>>> On Tue, Mar 14, 2006 at 2:08 am, in message <[EMAIL PROTECTED]>, Simon Riggs <[EMAIL PROTECTED]> wrote:
> On Mon, 2006-03-13 at 13:27 -0600, Kevin Grittner wrote:
> > Even more important is the fast response we have had when posting
> > problems to the lists.  We have normally had a fix within 24 hours.
> > Frankly, the support has been amazing.
>
> Kevin, well done.  We've all watched your progress with interest.

Thanks to all who have offered congratulations.

> The reason you've got excellent support is because of the detailed
> postings you've made, together with responses to all replies.  Doing all
> your homework before posting is essential; unfortunately many people
> don't do this and then leave disappointed.

Here I think you underestimate how well the community helps people in these lists.  I have witnessed remarkable patience here when people post vague messages asking for help.  You (as a community) generally succeed in drawing out sufficient detail to provide good advice, and/or identify areas for product improvement.  I do try to give as much information as I can, including reproducible test cases where practicable; but I have done so with commercial vendors to whom my clients have paid big money for support, and been very disappointed.  With one commercial vendor we've routinely been told by first-line support staff that the product was functioning as intended.  After days of effort, sometimes involving calls from top management, we've gotten through to someone who could actually understand the problem and acknowledge the bug, only to have it take months (sometimes over a year) to get a fix.  With another open source vendor, from whom no support is available without a paid license and a paid support contract, we (after paying for a commercial license and a support contract) have been told that such things as using an OR predicate within the ON clause of a JOIN was an "unimplemented feature" (even though it worked in simple cases).
They said they might "add the feature" in the next major release, but that wouldn't be for at least a year, and no guarantees. It was unexpected and quite refreshing to provide the same level of detail in a post to a PostgreSQL list, and get a patch file fast enough to be running a fixed version within 24 hours of posting the problem. When we have been able to provide sufficient detail and / or a test case, this has usually been the result. When we participated in the beta test phase, people were quite helpful in leading me through the use of unfamiliar tools to capture the information they needed to identify and fix problems before the official release. After decades of working as an independent consultant, I've recently (eight days ago) accepted employment with the Wisconsin Court System as a DBA, and I'm told that as a court employee I'm not allowed to endorse one product over another; but, I can speak of my experiences with products so long as I don't violate any constraints of the license agreements. I have worked with quite a few database products in my career and can say unequivocally that the support I've seen provided for PostgreSQL is superior to that which I've seen provided for any other database product. I don't want to name any names, because I would undoubtedly forget several very helpful people here, but I have to admit that my personal favorite was when I posted information about a bug in the JDBC driver shortly before I left for the day, and while I was sleeping a user in Germany created a program to cause the race condition, tracked down the cause, and posted a patch with a suggested fix. By the time I'd finished my coffee the next morning, the patch had been reviewed, scaled back to the minimum change required to effect a fix, applied to CVS, and a new jar file deployed for download. Wow. 
I can't really accept congratulations for this successful deployment without offering it right back to the community for all the help you've provided, as well as the product itself. Absolutely fantastic, all around! -Kevin ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Relation 'pg_largeobject' does not exist
On 3/13/06, Tom Lane <[EMAIL PROTECTED]> wrote: > Brandon Keepers <[EMAIL PROTECTED]> writes: > > Thanks for your quick response! I had actually just been trying that > > (with 7.1) and came across another error: > > > NOTICE: ShmemAlloc: out of memory > > NOTICE: LockAcquire: xid table corrupted > > dumpBlobs(): Could not open large object. Explanation from backend: > > 'ERROR: LockRelation: LockAcquire failed > > Ugh :-( How many blobs have you got, thousands? 7.0 stores each blob > as a separate table, and I'll bet it is running out of lock table space > to hold a lock on each one. My recollection is that we converted blob > storage to a single pg_largeobject table precisely because of that > problem. Looks like there's over 17,000 blobs. :( But they're all very small, if that makes a difference. > What you'll need to do to get around this is to export each blob in a > separate transaction (or at least no more than a thousand or so blobs > per transaction). It looks like pg_dumplo might be easier to hack to do > things that way --- like pg_dump, it puts a BEGIN/COMMIT around the > whole run, but it's a smaller program and easier to move those commands > in. Unfortunately, I don't know C. Would someone be willing to help me hack pg_dumplo in exchange for money? > Another possibility is to increase the lock table size, but that would > probably require recompiling the 7.0 backend. If you're lucky, > increasing max_connections to the largest value the backend will support > will be enough. If you've got many thousands of blobs there's no hope > there, but if it's just a few thousand this is worth a try before you go > hacking code. I'm not the admin of the box that this database is on, so I don't have any control over it. I'm working on moving it to a box that I am the admin of. But anyway, it sounds like this wouldn't work anyway since I have so many blobs. > regards, tom lane Thanks, again for your help, Tom! 
Brandon ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] pgsql and streams
Hi All-

I'm new to Postgres and have a question about bulk loading from streams.  I know that I can bulk load from a file using COPY.  Is it possible to use a stream instead of a file?  If so, am I limited to stdin?  I'm attempting to stream data from a remote database into my Postgres instance.  I don't want to insert each tuple individually using jdbc since that would be horribly slow...

Thanks, Chris

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] pgsql and streams
Chris Condit wrote:
> I'm new to Postgres and have a question about bulk loading from streams.
> I know that I can bulk load from a file using COPY.  Is it possible to
> use a stream instead of a file?  If so, and I limited to stdin?  I'm
> attempting to stream data from a remote database into my Postgres
> instance.  I don't want to insert each tuple individually using jdbc
> since that would be horribly slow...

You can execute a pg_dump on the remote host (see -h or --host options to pg_dump) and pipe it to a psql on the local host.  That should replicate the remote database to your host over the network.

You can also use the "-h hostname" option on psql to execute a "copy to file" on the remote host.  The file ends up on your local system, where you can do a subsequent copy from file.

Hope this helps,
Re: [GENERAL] pgsql and streams
Thanks for your response, Josh.  Actually I'm looking for the most general way to do this, since my remote database might not be PostgreSQL.  In fact, I will probably be streaming through a java process.  So I'd like to go from the java process directly into the Postgres DB.  Is it still possible?

From: Josh Rovero [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 14, 2006 4:34 PM
To: Christopher Condit; pgsql-general
Subject: Re: [GENERAL] pgsql and streams

> Chris Condit wrote:
> > I'm new to Postgres and have a question about bulk loading from
> > streams.  I know that I can bulk load from a file using COPY.  Is it
> > possible to use a stream instead of a file?  If so, am I limited to
> > stdin?  I'm attempting to stream data from a remote database into my
> > Postgres instance.  I don't want to insert each tuple individually
> > using jdbc since that would be horribly slow...
>
> You can execute a pg_dump on the remote host (see -h or --host options
> to pg_dump) and pipe it to a psql on the local host.  That should
> replicate the remote database to your host over the network.
>
> You can also use the "-h hostname" option on psql to execute a "copy to
> file" on the remote host.  The file ends up on your local system, where
> you can do a subsequent copy from file.
>
> Hope this helps,
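If the Java process can write COPY-format rows (tab-separated by default) to a pipe, one general sketch is to feed them to psql's stdin; the program, database, table, and column names here are all hypothetical:

```sh
# Stream rows from any producer straight into a COPY, no intermediate file.
java -jar exporter.jar | psql -d mydb -c "COPY items (id, name) FROM STDIN"
```

If your JDBC driver doesn't expose COPY directly, piping through psql like this is a driver-agnostic route.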
Re: [GENERAL] Turn OFF Stats of Postgresql
Marcos wrote: Hi, I have a search engine and for increase the performance I want to turn off the stats of Postgresql but i don't know if i should do it. Edit your postgresql.conf file, turn them off and then restart postgresql. -- Postgresql & php tutorials http://www.designmagick.com/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
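For reference, the statistics switches in an 8.x-era postgresql.conf look like this (names vary by version, so check your version's docs; note that autovacuum relies on row-level stats):

```
# postgresql.conf -- 8.x-era statistics settings (sketch)
stats_start_collector = off   # master switch for the stats collector
stats_command_string  = off
stats_block_level     = off
stats_row_level       = off   # autovacuum needs this to be on
```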
Re: [GENERAL] Dynamic function execution?
On 14/03/2006, at 10:26 AM, Michael Fuhr wrote:
> On Tue, Mar 14, 2006 at 07:21:51AM -0800, Nick Johnson wrote:
> > On 14/03/2006, at 12:05 AM, Michael Fuhr wrote:
> > > Why do you need to do this?  What problem are you trying to solve?
> >
> > I want to associate Postgres functions with rows of a table (eg, a
> > table column of datatype regproc or regprocedure) and be able to
> > execute the function associated with that row in a query.
>
> Could you post an example?  Others might be interested in seeing an
> application of something like that.

The example that's actually driving this is rather stupid (though amusing).  I'm building a Postgres-based adventure game.  Eg, SELECT n(); SELECT look(); etc.  I want to be able to execute on_enter, on_exit etc. events, the handles to the event functions being stored in the table.

A more sensible example (though not one this will be immediately used for) is something we're doing at my work: the database contains a number of stored procedures implementing 'models' - they're functions that essentially perform data-mining on the database.  There's a table that has metadata about each of these models, allowing the system to determine which ones need executing, and when.  Being able to directly call them would be useful here, too.

-Nick Johnson
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
On Mon, Mar 13, 2006 at 17:50:44 -0600, Kevin Grittner <[EMAIL PROTECTED]> wrote: > > Short of being compelled by law to open our records, I'm not > comfortable providing any performance comparison which names the > vendor. An open records request inquiring about vendors and contracts with the State of Wisconsin could probably turn up the answer if anyone really wanted to know. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Dynamic function execution?
On 3/14/06, Michael Fuhr <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 14, 2006 at 07:21:51AM -0800, Nick Johnson wrote:
> > On 14/03/2006, at 12:05 AM, Michael Fuhr wrote:
> > > Why do you need to do this? What problem are you trying to solve?
> >
> > I want to associate Postgres functions with rows of a table (eg, a
> > table column of datatype regproc or regprocedure) and be able to
> > execute the function associated with that row in a query.
>
> Could you post an example? Others might be interested in seeing
> an application of something like that.

If you go a ways back on the performance list you can see an example. It's very simple: there are C functions which you can call which take the oid and parameters. Then you can do clever things like do callbacks in plpgsql with a function that takes the function argument as a string and looks up its oid.

to the original poster...it's probably possible. one way would be to sanity check pg_proc on the C side (at the least, check the # args). there might be better/faster ways though

Merlin

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
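The signature information Merlin mentions checking is all visible in pg_proc; for example (using the built-in lower(text) as a stand-in for whatever function would actually be dispatched):

```sql
-- The columns a C-side sanity check would compare before dispatching:
SELECT proname,        -- function name
       pronargs,       -- number of arguments
       proargtypes,    -- OIDs of the argument types
       prorettype      -- OID of the return type
FROM pg_proc
WHERE oid = 'lower(text)'::regprocedure;
```

A C function would fetch the same tuple through the syscache (PROCOID) rather than by running SQL, but the fields to compare are the same.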
Re: [GENERAL] What's a good default encoding?
On Mar 14, 2006, at 3:10 PM, CSN wrote:
> If you're going to be putting emdashes, letters with lines and circles above them, and similar stuff that's mostly European and American in a database, what's a good default encoding to use - UTF-8?

Yes, UTF-8 is good because it can represent every possible character and is efficient with respect to latin character sets.

John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL
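Concretely, the server-side encoding is fixed when the database is created, so the choice has to be made up front (the database name here is hypothetical):

```sql
-- Create a database whose server-side encoding is UTF-8.
-- Releases before 8.1 spell the encoding name 'UNICODE' instead of 'UTF8'.
CREATE DATABASE mydb WITH ENCODING 'UTF8';

-- Each client should also declare what it actually sends:
SET client_encoding TO 'UTF8';
```

An entire cluster can default to UTF-8 by passing -E UTF8 to initdb when the cluster is first created.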
Re: [GENERAL] â in text field
On Mar 13, 2006, at 9:24 PM, CSN wrote:
> I'm updating a field via a web form, and an em-dash is getting stored in the database as 'âÂ-', and is getting displayed back on the web page as '— '. The encoding of the database is SQL_ASCII - should I change it? And if so, to what and how?

Yes, you should change it. None of the non-ASCII characters are going to be properly encoded in the database. What you should change it to depends on the software you are using to process the form. You need to make sure the client_encoding setting of your connection matches what you are working with on the form.

Normally you could change the database encoding by dumping and then reloading the database. But you'll likely have problems because your SQL_ASCII encoded database has illegal characters that won't be accepted if you change to something like UTF-8. A utility like iconv might help with this.

John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL
Re: [GENERAL] Dynamic function execution?
On 14/03/2006, at 6:52 PM, Merlin Moncure wrote:
> to the original poster...it's probably possible. one way would be to sanity check pg_proc on the C side (at the least, check the # args). there might be better/faster ways though

I just finished an implementation based on the suggestions of Michael Fuhr. My function retrieves system cache records for itself and the function it's being asked to call, then ensures the return types and all the argument types match. If that test is passed, it modifies its own fcinfo struct into the one required to call the function the user really wants, then calls FunctionCallInvoke to call it.

With this method, on the PGSQL end, you can CREATE FUNCTION with any arbitrary parameters and return type as long as the first parameter is oid, regproc, or regprocedure, and then call any function with a matching signature using the defined function.

Still not done (and won't be done, unless I develop a need or someone else wants it) is determining when function calls are compatible even though they're not identical (eg, through use of polymorphic functions and ANYELEMENT/ANYARRAY).

-Nick Johnson
[GENERAL] Error in Postgresql after a Machine Crash
Hi people,

I was using postgres 7.2.1 on a Debian Woody server (in production). The machine crashed, and when I restarted it the following error occurs a lot in the log:

2006-03-14 14:35:23 [11858] ERROR: XLogFlush: request 102/7407C864 is not satisfied --- flushed only to 102/4CFEC030
2006-03-15 00:26:38 [23112] ERROR: XLogFlush: request 102/598F8630 is not satisfied --- flushed only to 102/4D1A1DE0

How do I fix this?

TIA, Paulo Henrique
Re: [GENERAL] Relation 'pg_largeobject' does not exist
"Brandon Keepers" <[EMAIL PROTECTED]> writes: > On 3/13/06, Tom Lane <[EMAIL PROTECTED]> wrote: >> Ugh :-( How many blobs have you got, thousands? > Looks like there's over 17,000 blobs. :( But they're all very small, > if that makes a difference. No, it's just the number of 'em that counts. >> Another possibility is to increase the lock table size, but that would >> probably require recompiling the 7.0 backend. If you're lucky, >> increasing max_connections to the largest value the backend will support >> will be enough. > I'm not the admin of the box that this database is on, so I don't have > any control over it. I'm working on moving it to a box that I am the > admin of. But anyway, it sounds like this wouldn't work anyway since > I have so many blobs. 7.0 sets the lock table size to 64 * max_connections, so if you can crank max_connections up to 300 or so you should be able to dump. I think this will work ... it's definitely worth a shot before you start thinking about hacking the code. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
* Bruno Wolff III ([EMAIL PROTECTED]) wrote:
> On Mon, Mar 13, 2006 at 17:50:44 -0600, Kevin Grittner <[EMAIL PROTECTED]> wrote:
> > Short of being compelled by law to open our records, I'm not
> > comfortable providing any performance comparison which names the
> > vendor.
>
> An open records request inquiring about vendors and contracts with the
> State of Wisconsin could probably turn up the answer if anyone really
> wanted to know.

It would turn up which vendors they use and what contracts they have, and you might be able to figure out which vendors have such a clause. I don't know that such a request could compel disclosure of the performance data associated with a specific vendor when that's clearly against a license the state is currently under.

Enjoy, Stephen
Re: [GENERAL] Wisconsin Circuit Court Access (WCCA) on
On Tue, Mar 14, 2006 at 23:27:24 -0500, Stephen Frost <[EMAIL PROTECTED]> wrote: > * Bruno Wolff III ([EMAIL PROTECTED]) wrote: > > On Mon, Mar 13, 2006 at 17:50:44 -0600, > > Kevin Grittner <[EMAIL PROTECTED]> wrote: > > > Short of being compelled by law to open our records, I'm not > > > comfortable providing any performance comparison which names the > > > vendor. > > > > An open records request inquiring about vendors and contracts with the > > State of Wisconsin could probably turn up the answer if anyone really > > wanted to know. > > About which vendors they use and what contracts they have and you might > be able to figure out which vendors have such a clause. I don't know > that such a request could compel the performance data out associated > with a specific vendor when that's clearly against a license the state > is currently under. But we already have an unofficial comment on the performance, we just don't know what database postgres is being compared to. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] out of memory using Postgres with Spring/Hibernate/Java
Hi,

We are currently having a problem that our Postgres DB is throwing an SQL error which states that it's 'out of memory'.

What we have is a DB with 1 table that has 3.9 million records. We need to find certain records that are to be processed by a Java app, so we do a "select id from table where type=a and condition in (1, 2) order by id limit 2000". When this query gets executed we see the memory on the DB server increasing, and after it finishes it drops a bit, but we see it growing a few MB every few minutes. This has caused an out of memory after the system has been processing for a day or 2. The query is heavy because of the order by, but that does not explain why the memory keeps increasing.

We use a DAO which extends HibernateDaoSupport, and the method (findIds) has been marked as propagation_required, so we assume Spring manages the transaction and thus the closing of the result set. Here's part of the DAO method:

Query q = getSession(false).createQuery(query); q.setMaxResults(RESULT_SIZE); List list = q.list();

No rocket science, but somehow this causes a problem on the DB server. Does anyone have an idea where to look?

Thanks, Maarten

ps: Java stacktrace:

*** [WARN 2006-03-11 20:04:26,288 main] org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:71) SQL Error: 0, SQLState: null
*** [ERROR 2006-03-11 20:04:26,292 main] org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:72) Batch entry 0 update activiteit set activiteit_type=102, activiteit_subtype=1305, start_dt=2006-03-11 20:03:40.94+0100, moeder_id=NULL, toestand=8022, laatste_actie_dt=2006-03-11 20:03:41.012000+0100, uiterlijke_actie_dt=2006-03-11 20:03:40.94+0100, pl_id=NULL, communicatie_partner=1905, nr_1=NULL, nr_2=NULL, nr_3=NULL, nr_4=NULL, nr_5=NULL, tekst_1=NULL, tekst_2=NULL, tekst_3=NULL, tekst_4=NULL, tekst_5=NULL where activiteit_id=11079994 was aborted. Call getNextException to see the cause.
*** [WARN 2006-03-11 20:04:26,293 main] org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:71) SQL Error: 0, SQLState: 53200
*** [ERROR 2006-03-11 20:04:26,293 main] org.hibernate.util.JDBCExceptionReporter.logExceptions(JDBCExceptionReporter.java:72) ERROR: out of memory
*** [ERROR 2006-03-11 20:04:26,318 main] org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:277) Could not synchronize database state with session
org.hibernate.exception.GenericJDBCException: Could not execute JDBC batch update
at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:82)
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:70)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:181)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:226)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:137)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:274)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:730)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:324)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:86)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:490)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:495)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:468)
at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(TransactionAspectSupport.java:258)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:144)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:174)
at $Proxy3.invokeWithinNewTx(Unknown Source)
at nl.ictu.spg.service.workflow.ActiviteitCMP.persistNowWithNewGebeurtenis(ActiviteitCMP.java:546)
at nl.ictu.spg.service.workflow.WorkflowActiviteit.persistNowWithNewGebeurtenis(WorkflowActiviteit.java:320)
at nl.ictu.spg.service.workflow.berichten.LO3.LO3BerichtDispatcher.executeCycle(LO3BerichtDispatcher.java:224)
at nl.ictu.spg.service.workflow.berichten.LO3.LO3BerichtDispatcher.dispatch(LO3BerichtDispatcher.java:177)
at nl.ictu.spg.service.workflow.berichten.LO3.LO3BerichtDispatcher.dispatch(LO3BerichtDispatcher.java:256)
at nl.ictu.spg.service.request.lo3.LO3RequestProcessor.runOnce(LO3RequestProcessor.java
[GENERAL] Dumping rows into an array?
Hi, I'm new to Postgres, but I've been having fun with it. In our application we want to be able to store a variable number of keywords for a record. I first thought that an array column would be the way to go, but after reading caveats on performance, I implemented the keywords as a separate table. This works great, but now I have another array question.

I want to pass the keywords for the record as an array to a stored procedure (PL/pgSQL) for inserting, and also return them as an array. I got the input part working like this:

SELECT INTO len array_upper(kw, 1); FOR idx IN 1 .. len LOOP INSERT INTO keywords VALUES(DEFAULT, lid, kw[idx]); END LOOP;

kw is a TEXT[] parameter to the function. I've been unable to come up with the counterpart to select the keywords and populate an array that I can return. If I do

DECLARE kwlist TEXT[]; ... SELECT ARRAY[keyword] INTO keywordlist

I get one keyword, cast as an array. I've tried

DECLARE kwlist TEXT[]; DECLARE kw RECORD; ... FOR kw IN SELECT keyword FROM keywords as kk WHERE lesson.id = kk.id ORDER BY id LOOP kwary[idx] := kw; idx := idx + 1; END LOOP;

But only one word gets returned in kwary[]. At least that's what I get after doing lesson.keywordlist := kwary, where lesson is the record I return from the function, and keywordlist is a member of the record of type TEXT[].

Any ideas on how I might accomplish this? I'm open to any suggestions, including a different way of passing the keywords. The only requirement I have is that I must be able to pass in a variable-length list of words, which I seem to have solved, and that I need to be able to return a variable-length list of words, which is driving me crazy.

Thanks for any pointers, -karl

PS: if it makes a difference, the application uses Perl and DBD::Pg to query the database. I see the same results when I call the functions from psql.

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
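For the return direction described above, PostgreSQL 7.4 and later can build an array straight from a subquery with the ARRAY(...) constructor, which avoids the manual loop entirely. A sketch against the tables named in the post (column names are guessed from context):

```sql
-- Collect all keywords for one lesson into a single TEXT[] value:
SELECT ARRAY(SELECT kk.keyword
             FROM keywords AS kk
             WHERE kk.id = 42          -- the lesson id being looked up
             ORDER BY kk.keyword) AS keywordlist;
```

Inside the PL/pgSQL function the same expression should assign directly, e.g. lesson.keywordlist := ARRAY(SELECT keyword FROM keywords WHERE id = lid);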
Re: [GENERAL] out of memory using Postgres with Spring/Hibernate/Java
On 3/15/06, maarten roosendaal <[EMAIL PROTECTED]> wrote:
> Hi,
>
> We are currently having a problem that our Postgres DB
> is throwing an SQL error which states that it's 'out
> of memory'.
>
> What we have is a DB with 1 table that has 3.9 million
> records. We need to find certain records that are to
> be processed by a Java App so we do a "select id from
> table where type=a and condition in (1, 2) order by id
> limit 2000". When this query gets executed we see the
> memory on the DB Server increasing and after it has
> finishes it drops a bit but we see it growing a few MB
> per few minutes. This has caused an out of memory
> after the system has been processing for a day or 2.
> The query is heavy because of the order by but that
> does not explain why the memory is increasing.

What does explain show for the query? Are the fields indexed appropriately? Have you analyzed the table recently? Postgres needs to store the ordered results somewhere, so of course that explains the memory increase.

-- Postgresql & php tutorials http://www.designmagick.com/
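To follow up on the EXPLAIN suggestion, the investigation could look something like this (the table and column names are taken loosely from the query quoted above, so adjust them to the real schema):

```sql
-- See which plan the query actually gets and where the time is spent:
EXPLAIN ANALYZE
SELECT id FROM mytable
WHERE type = 'a' AND condition IN (1, 2)
ORDER BY id
LIMIT 2000;

-- With type fixed by the equality condition, an index on (type, id)
-- returns rows already in id order, so the planner can walk the index
-- and stop after 2000 matches instead of sorting the whole result:
CREATE INDEX mytable_type_id_idx ON mytable (type, id);

-- Keep the planner's statistics current so it chooses the index:
ANALYZE mytable;
```

If EXPLAIN still shows a sequential scan plus a full sort, that sort buffer is a likely source of the memory growth the poster describes.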