Re: [PERFORM] High CPU Load

2006-09-19 Thread Markus Schaber
Hi, Jerome, Jérôme BENOIS wrote: Now I have 335 concurrent connections, I decreased the work_mem parameter to 32768 and disabled Hyper-Threading in the BIOS, but my CPU load is still very high. What are your settings for commit_siblings and commit_delay? Tomorrow morning I plan to add 2Giga

Re: [PERFORM] High CPU Load

2006-09-19 Thread Jérôme BENOIS
Markus, On Tuesday, 19 September 2006 at 11:53 +0200, Markus Schaber wrote: Hi, Jerome, Jérôme BENOIS wrote: Now I have 335 concurrent connections, I decreased the work_mem parameter to 32768 and disabled Hyper-Threading in the BIOS, but my CPU load is still very high. What are your

Re: [PERFORM] High CPU Load

2006-09-19 Thread Markus Schaber
Hi, Jerome, Jérôme BENOIS wrote: Now I have 335 concurrent connections, I decreased the work_mem parameter to 32768 and disabled Hyper-Threading in the BIOS, but my CPU load is still very high. What are your settings for commit_siblings and commit_delay? It's the default: #commit_delay = 0 #
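For context on the two settings Markus is asking about, this is roughly how the group-commit knobs look in postgresql.conf (the values shown are the shipped defaults of that era, not a tuning recommendation):

```sql
-- postgresql.conf fragment (shown here for illustration):
-- #commit_delay = 0        -- microseconds to wait before flushing WAL (0 = off)
-- #commit_siblings = 5     -- min. concurrent open transactions required
--                          -- before commit_delay is applied

-- They can also be inspected from a live session:
SHOW commit_delay;
SHOW commit_siblings;
```

With many concurrent writers, a small non-zero commit_delay can let several commits share one WAL flush, at the cost of added latency per commit.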

[PERFORM] Optimizing DELETE

2006-09-19 Thread Ivan Voras
I've just fired off a DELETE FROM table command (i.e. unfiltered DELETE) on a trivially small table but with many foreign key references (on similar-sized tables), and I'm waiting for it to finish. It's been 10 minutes now, which seems very excessive for a table of 9000 rows on a 3 GHz desktop

Re: [PERFORM] Large tables (was: RAID 0 not as fast as expected)

2006-09-19 Thread Bucky Jordan
Mike, On Mon, Sep 18, 2006 at 07:14:56PM -0400, Alex Turner wrote: If you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). I'll keep that in mind (minimizing

Re: [PERFORM] Optimizing DELETE

2006-09-19 Thread Csaba Nagy
I've just fired off a DELETE FROM table command (i.e. unfiltered DELETE) on a trivially small table but with many foreign key references (on similar-sized tables), and I'm waiting for it to finish. It's been 10 minutes now, which seems very excessive for a table of 9000 rows on a 3 GHz

Re: [PERFORM] Optimizing DELETE

2006-09-19 Thread Rod Taylor
On Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote: I've just fired off a DELETE FROM table command (i.e. unfiltered DELETE) on a trivially small table but with many foreign key references (on similar-sized tables), and I'm waiting for it to finish. It's been 10 minutes now, which seems

Re: [PERFORM] Optimizing DELETE

2006-09-19 Thread Mark Lewis
You do not have indexes on all of the columns which are linked by foreign key constraints. For example, let's say that I had a scientist table with a single column scientist_name and another table discovery which had scientist_name as a column with a foreign key constraint to the scientist table.
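A minimal sketch of the scenario Mark describes (the scientist/discovery names come from his example; the syntax is PostgreSQL's):

```sql
-- PostgreSQL indexes the referenced side (the primary key) automatically,
-- but NOT the referencing column.
CREATE TABLE scientist (
    scientist_name text PRIMARY KEY
);

CREATE TABLE discovery (
    discovery_name  text,
    scientist_name  text REFERENCES scientist (scientist_name)
);

-- Without this index, every row deleted from scientist forces a
-- sequential scan of discovery to verify the foreign key constraint,
-- which is what makes an unfiltered DELETE crawl.
CREATE INDEX discovery_scientist_name_idx ON discovery (scientist_name);
```

The same applies in reverse: an unindexed FK column turns each cascading check into a full scan of the referencing table.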

Re: [PERFORM] Optimizing DELETE

2006-09-19 Thread Ivan Voras
Rod Taylor wrote: On Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote: I've just fired off a DELETE FROM table command (i.e. unfiltered DELETE) on a trivially small table but with many foreign key references (on similar-sized tables), and I'm waiting for it to finish. It's been 10 minutes

[PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Milen Kulev
Hello list, I am curious whether I can emulate the Oracle pipelined-functions functionality in PG too (using RETURN NEXT). For more information and examples about Oracle pipelined functions see:

Re: [PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Shoaib Mir
I think pipelined functions are code you can pretend is a database table. For example you can do it like this in Oracle: select * from PLSQL_FUNCTION; You can achieve something similar in PostgreSQL using RETURN SETOF functions like this: CREATE OR REPLACE FUNCTION test_pipe (int) RETURNS SETOF
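The preview cuts Shoaib's example short. A self-contained reconstruction of a SETOF function in that style might look like this (the function body below is assumed, not quoted from the original message):

```sql
-- Hypothetical completion of the truncated test_pipe example:
-- emit the integers 1 .. $1, one row per RETURN NEXT.
CREATE OR REPLACE FUNCTION test_pipe (int) RETURNS SETOF int AS $$
DECLARE
    i int;
BEGIN
    FOR i IN 1 .. $1 LOOP
        RETURN NEXT i;   -- appends i to the function's result set
    END LOOP;
    RETURN;              -- marks the end of the result set
END;
$$ LANGUAGE plpgsql;

-- Used like a table, as in the Oracle analogy above:
-- SELECT * FROM test_pipe(5);
```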

Re: [PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Talha Khan
Hi Milen, A pipelined function is code that acts like a database table. In order to use this functionality in Postgres you would need to write the function like this: CREATE OR REPLACE FUNCTION get_test_data (numeric) RETURNS SETOF RECORD AS $$ DECLARE temp_rec RECORD; BEGIN FOR temp_rec IN
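Talha's snippet is also truncated; a hedged completion of the same pattern (the emp table with ename and sal columns is assumed from the later messages in this thread) could be:

```sql
-- Hypothetical completion of the truncated get_test_data example:
-- return each matching row from an assumed emp table via RETURN NEXT.
CREATE OR REPLACE FUNCTION get_test_data (numeric) RETURNS SETOF RECORD AS $$
DECLARE
    temp_rec RECORD;
BEGIN
    FOR temp_rec IN SELECT ename FROM emp WHERE sal > $1 LOOP
        RETURN NEXT temp_rec;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- A SETOF RECORD function must be called with a column-definition list:
-- SELECT * FROM get_test_data(1000) AS t (ename text);
```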

Re: [PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Milen Kulev
Hello Shoaib, I know the SETOF functions. I want to simulate (somehow) a producer/consumer relationship with SETOF (pipelined) functions. The first (producer) function generates records (just like your test_pipe function), and the second function consumes the records, produced

Re: [PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Milen Kulev
Talha, do you know how much memory is consumed by the SETOF function? What happens with the memory consumption of the function if SELECT ename FROM emp WHERE sal > $1 returns 10 million rows? I suppose that memory for the RECORD structure is immediately reused by the next

Re: [PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Shoaib Mir
I don't think that will be possible using a SETOF function ... You might have to partition the current query, and this way distribute the full load of the query if there is too much data involved. Thanks, -- Shoaib Mir, EnterpriseDB (www.enterprisedb.com) On 9/20/06, Milen Kulev [EMAIL PROTECTED]

Re: [PERFORM] Pipelined functions in Postgres

2006-09-19 Thread Jeff Davis
On Tue, 2006-09-19 at 23:22 +0200, Milen Kulev wrote: Hello Shoaib, I know the SETOF functions. I want to simulate (somehow) a producer/consumer relationship with SETOF (pipelined) functions. The first (producer) function generates records (just like your test_pipe function), and the second