[GENERAL] Cannot connect to server

2008-09-05 Thread Paul Shapley
Hi,



I am attempting to 'add a connection' to the postgres database using
pgAdmin III v1.8.4 and PostgreSQL v8.3.3-1 with PostGIS 1.3.3 on Mac OS X
10.5.4.



After an initial installation it may connect successfully for a couple of
days before I'm unable to connect anymore. It is always the same pattern.



I've tried the recommendations offered on the postgresql community pages but
nothing seems to work. I've made the usual changes to:



postgresql.conf (listen_addresses = '*' instead of 'localhost')

pg_hba.conf

but cannot get any further than this error:





Error connecting to the server: could not connect to server: No such file or
directory

        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?


-- 
Paul J. Shapley


[GENERAL] Prepared statements aren't working with parameters with PQexecParams

2008-09-05 Thread Subspace god
The following works executed in the query browser:

PREPARE myquery (text) AS INSERT INTO myTable (word) VALUES ($1);
EXECUTE myquery('blah');

The following works in C++, executed as two statements with PQexecParams
using PQEXECPARAM_FORMAT_BINARY

PREPARE myquery AS INSERT INTO myTable (word) VALUES ('blah');
EXECUTE myquery;

The following does not work in C++, executed as above, in another session
using two subsequent calls to PQexecParams

PREPARE myquery (text) AS INSERT INTO myTable (word) VALUES ($1);
EXECUTE myquery($1::text);

The error is:

+ lastError  0x00dc4232  ERROR:  bind message supplies 1 parameters,
             but prepared statement "" requires 0          char [1024]

Code is:

PQexecParams(pgConn,
query.C_String(),indices.Size(),0,paramData,paramLength,paramFormat,PQEXECPARAM_FORMAT_BINARY);

Parameters:

+ paramData[0]    0x00e00208 "blah"   char *
  paramLength[0]  4                   int
  paramFormat[0]  1                   int
  indices.Size()  1                   unsigned int

The difference between the two C++ calls is that the 2nd call has parameters
passed to the prepared statement, while the first does not.

Any ideas? Note that it is saying that the prepared statement is "" (an
empty name), rather than the name of the prepared statement. It also says
the statement requires 0 parameters, although it takes 1 parameter.


[GENERAL] Range Partitioning Constraint Exclusion Oddities

2008-09-05 Thread Ow Mun Heng
Hi, I'd appreciate it if someone can help shed some light on what I may be
doing wrong.

I know there are caveats on using constraint exclusion to reduce the number
of partitions scanned.

pg:8.2.9

create table test (
    code varchar,
    dummy_col1 int,
    dummy_col2 int
);

create table test_prod_code (
    code varchar,
    dummy_col1 int,
    dummy_col2 int
) inherits(test);

create table test_experimental_code (
    code varchar,
    dummy_col1 int,
    dummy_col2 int
) inherits(test);

alter table test_experimental_code add check (code not in ('P000','000','0'));
alter table test_prod_code add check (code in ('P000','000','0'));

insert into test_prod_code(code, dummy_col1, dummy_col2) values
('P000',1,1),('000',2,2),('0',3,3),('P000',44,44);
insert into test_experimental_code(code, dummy_col1, dummy_col2) values
('AAA',1,1),('BBB',2,2),('BBC',3,3),('DAD',44,44);

set constraint_exclusion = on;

select count(*) from test; --[Expected]
Aggregate  (cost=71.25..71.26 rows=1 width=0)
  ->  Append  (cost=0.00..63.00 rows=3300 width=0)
        ->  Seq Scan on test  (cost=0.00..21.00 rows=1100 width=0)
        ->  Seq Scan on test_prod_code test  (cost=0.00..21.00 rows=1100 width=0)
        ->  Seq Scan on test_experimental_code test  (cost=0.00..21.00 rows=1100 width=0)

select count(*) from test where code = 'AAA'; --[NOT expected result]
Aggregate  (cost=71.30..71.31 rows=1 width=0)
  ->  Append  (cost=0.00..71.25 rows=18 width=0)
        ->  Seq Scan on test  (cost=0.00..23.75 rows=6 width=0)
              Filter: ((code)::text = 'AAA'::text)
        ->  Seq Scan on test_prod_code test  (cost=0.00..23.75 rows=6 width=0)
              Filter: ((code)::text = 'AAA'::text)
        ->  Seq Scan on test_experimental_code test  (cost=0.00..23.75 rows=6 width=0)
              Filter: ((code)::text = 'AAA'::text)

select count(*) from test where code = 'AAA' and code not in ('P000','000','0');
--[I thought this would help]
Aggregate  (cost=91.92..91.92 rows=1 width=0)
  ->  Append  (cost=0.00..91.88 rows=15 width=0)
        ->  Seq Scan on test  (cost=0.00..30.62 rows=5 width=0)
              Filter: (((code)::text = 'AAA'::text) AND ((code)::text <> ALL (('{P000,000,0}'::character varying[])::text[])))
        ->  Seq Scan on test_prod_code test  (cost=0.00..30.62 rows=5 width=0)
              Filter: (((code)::text = 'AAA'::text) AND ((code)::text <> ALL (('{P000,000,0}'::character varying[])::text[])))
        ->  Seq Scan on test_experimental_code test  (cost=0.00..30.62 rows=5 width=0)
              Filter: (((code)::text = 'AAA'::text) AND ((code)::text <> ALL (('{P000,000,0}'::character varying[])::text[])))

select count(*) from test where code in ('P000','000','0'); --[NOT Expected result]
Aggregate  (cost=83.75..83.76 rows=1 width=0)
  ->  Append  (cost=0.00..83.62 rows=48 width=0)
        ->  Seq Scan on test  (cost=0.00..27.88 rows=16 width=0)
              Filter: ((code)::text = ANY (('{P000,000,0}'::character varying[])::text[]))
        ->  Seq Scan on test_prod_code test  (cost=0.00..27.88 rows=16 width=0)
              Filter: ((code)::text = ANY (('{P000,000,0}'::character varying[])::text[]))
        ->  Seq Scan on test_experimental_code test  (cost=0.00..27.88 rows=16 width=0)
              Filter: ((code)::text = ANY (('{P000,000,0}'::character varying[])::text[]))

ALTER TABLE test_prod_code DROP CONSTRAINT test_prod_code_code_check;
ALTER TABLE test_experimental_code DROP CONSTRAINT test_experimental_code_code_check;
alter table test_prod_code add check (code = 'PROD');
alter table test_experimental_code add check (code <> 'PROD');
update test_prod_code set code = 'PROD';

select count(*) from test where code = 'AAA'; -- Expected Result
Aggregate  (cost=47.53..47.54 rows=1 width=0)
  ->  Append  (cost=0.00..47.50 rows=12 width=0)
        ->  Seq Scan on test  (cost=0.00..23.75 rows=6 width=0)
              Filter: ((code)::text = 'AAA'::text)
        ->  Seq Scan on test_experimental_code test  (cost=0.00..23.75 rows=6 width=0)
              Filter: ((code)::text = 'AAA'::text)


select count(*) from test where code::text in ('AAA'::character varying,
'BBB'::character varying); -- Explicit data-type
Aggregate  (cost=47.56..47.57 rows=1 width=0)
  ->  Append  (cost=0.00..47.50 rows=22 width=0)
        ->  Seq Scan on test  (cost=0.00..23.75 rows=11 width=0)
              Filter: ((code)::text = ANY ('{AAA,BBB}'::text[]))
        ->  Seq Scan on test_experimental_code test  (cost=0.00..23.75 rows=11 width=0)
              Filter: ((code)::text = ANY ('{AAA,BBB}'::text[]))


select count(*) from test where code in ('AAA','BBB'); -- Without the explicit cast it queries all partitions
Aggregate  (cost=79.58..79.59 rows=1 width=0)
  ->  Append  (cost=0.00..79.50 rows=33 width=0)
        ->  Seq Scan on test  (cost=0.00..26.50 rows=11 width=0)
              Filter: ((code)::text = ANY (('{AAA,BBB}'::character varying[])::text[]))
        ->  Seq Scan on test_prod_code test  (cost=0.00..26.50 rows=11 width=0)
              Filter: ((code)::text = ANY (('{AAA,BBB}'::character varying[])::text[]))
        ->  Seq Scan on test_experimental_code test  (cost=0.00..26.50 rows=11 width=0)
              Filter: ((code)::text = ANY (('{AAA,BBB}'::character varying[])::text[]))

Re: [GENERAL] Postgresql optimizer de-optimizes queries sometimes...

2008-09-05 Thread Maxim Boguk

Can anyone comment on this issue?

A more extreme example (a simplified version of what I ran into in a real-world situation);
same table data...

Query:
select * from (SELECT table1.id,(select count(*) from table2 where table2.fk=table1.id) as total from table1) as t1 where total=990 or total=991 or total=992 or total=993 or 
total=994 or total=995 or total=996 or total=997 or total=998 or total=999 or total=1000 or total=1001 or total=1002 or total=1003 or total=1004 or total=1005 or total=1006 or 
total=1007 or total=1008 or total=1009 or total=1010;


But postgres uses a very bad plan for that query:
testdb=# EXPLAIN ANALYZE select * from (SELECT table1.id,(select count(*) from table2 where table2.fk=table1.id) as total from table1) as t1 where total=990 or total=991 or 
total=992 or total=993 or total=994 or total=995 or total=996 or total=997 or total=998 or total=999 or total=1000 or total=1001 or total=1002 or total=1003 or total=1004 or 
total=1005 or total=1006 or total=1007 or total=1008 or total=1009 or total=1010;


                                   QUERY PLAN
--------------------------------------------------------------------------------
 Seq Scan on table1  (cost=0.00..906433.96 rows=17 width=4) (actual 
time=1035.481..15695.443 rows=12 loops=1)
   Filter: (((subplan) = 990) OR ((subplan) = 991) OR ((subplan) = 992) OR ((subplan) = 993) OR ((subplan) = 994) OR ((subplan) = 995) OR ((subplan) = 996) OR ((subplan) = 997) OR 
((subplan) = 998) OR ((subplan) = 999) OR ((subplan) = 1000) OR ((subplan) = 1001) OR ((subplan) = 1002) OR ((subplan) = 1003) OR ((subplan) = 1004) OR ((subplan) = 1005) OR 
((subplan) = 1006) OR ((subplan) = 1007) OR ((subplan) = 1008) OR ((subplan) = 1009) OR ((subplan) = 1010))

   SubPlan
     ->  Aggregate  (cost=849.50..849.51 rows=1 width=0) (actual time=16.308..16.309 rows=1 loops=39)
           ->  Seq Scan on table2  (cost=0.00..847.00 rows=1000 width=0) (actual time=0.021..14.839 rows=1000 loops=39)
                 Filter: (fk = $0)
     ->  Aggregate  (cost=849.50..849.51 rows=1 width=0) (actual time=16.286..16.288 rows=1 loops=39)
           ->  Seq Scan on table2  (cost=0.00..847.00 rows=1000 width=0) (actual time=0.021..14.817 rows=1000 loops=39)
                 Filter: (fk = $0)
     ->  Aggregate  (cost=849.50..849.51 rows=1 width=0) (actual time=16.434..16.436 rows=1 loops=39)
           ->  Seq Scan on table2  (cost=0.00..847.00 rows=1000 width=0) (actual time=0.021..14.957 rows=1000 loops=39)
                 Filter: (fk = $0)

[... 17 more aggregate seq scans ...]

     ->  Aggregate  (cost=849.50..849.51 rows=1 width=0) (actual time=16.316..16.317 rows=1 loops=12)
           ->  Seq Scan on table2  (cost=0.00..847.00 rows=1000 width=0) (actual time=0.020..14.845 rows=1000 loops=12)
                 Filter: (fk = $0)
 Total runtime: 15696.295 ms
(70 rows)

vs. the right version:

testdb=# EXPLAIN ANALYZE select * from (SELECT table1.id,(select count(*) from table2 where table2.fk=table1.id) as total from table1 offset 0) as t1 where total=990 or total=991 
or total=992 or total=993 or total=994 or total=995 or total=996 or total=997 or total=998 or total=999 or total=1000 or total=1001 or total=1002 or total=1003 or total=1004 or 
total=1005 or total=1006 or total=1007 or total=1008 or total=1009 or total=1010;


                                   QUERY PLAN
--------------------------------------------------------------------------------
 Subquery Scan t1  (cost=0.00..42480.25 rows=17 width=12) (actual 
time=63.121..804.438 rows=12 loops=1)
   Filter: ((t1.total = 990) OR (t1.total = 991) OR (t1.total = 992) OR (t1.total = 993) OR (t1.total = 994) OR (t1.total = 995) OR (t1.total = 996) OR (t1.total = 997) OR 
(t1.total = 998) OR (t1.total = 999) OR (t1.total = 1000) OR (t1.total = 1001) OR (t1.total = 1002) OR (t1.total = 1003) OR (t1.total = 1004) OR (t1.total = 1005) OR (t1.total = 
1006) OR (t1.total = 1007) OR (t1.total = 1008) OR (t1.total = 1009) OR (t1.total = 1010))

   ->  Limit  (cost=0.00..42477.12 rows=50 width=4) (actual time=15.029..804.190 rows=50 loops=1)
         ->  Seq Scan on table1  (cost=0.00..42477.12 rows=50 width=4) (actual time=15.027..804.053 rows=50 loops=1)
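
For comparison, one way to avoid the repeated subplan altogether (a sketch;
it assumes table1.id is unique, and it drops ids with no rows at all in
table2, which the total=990..1010 filter would exclude anyway) is to
aggregate table2 once and join:

    SELECT t1.id, t2.total
    FROM table1 t1
    JOIN (SELECT fk, count(*) AS total
          FROM table2
          GROUP BY fk) t2 ON t2.fk = t1.id
    WHERE t2.total BETWEEN 990 AND 1010;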
   

Re: [GENERAL] Prepared statements aren't working with parameters with PQexecParams

2008-09-05 Thread David Wilson
On Fri, Sep 5, 2008 at 2:52 AM, Subspace god [EMAIL PROTECTED] wrote:

 The following does not work in C++, executed as above, in another session
 using two subsequent calls to PQexecParams

 PREPARE myquery (text) AS INSERT INTO myTable (word) VALUES ($1);
 EXECUTE myquery($1::text);

You're doing prepared statements incorrectly. Use PQprepare() and
PQexecPrepared() when using them from libpq; don't do them by hand on
your own.
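
For illustration, a minimal sketch of that flow (assuming an open PGconn
*conn, the usual #include <libpq-fe.h> and <stdio.h>, and the table from
the original post; error handling abbreviated):

    /* Prepare once per connection. Passing NULL for paramTypes lets the
       server infer that $1 is text from the INSERT itself. */
    PGresult *res = PQprepare(conn, "myquery",
                              "INSERT INTO myTable (word) VALUES ($1)",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PREPARE failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* Execute with one text-format parameter. paramLengths and
       paramFormats may be NULL when every parameter is text. */
    const char *values[1] = { "blah" };
    res = PQexecPrepared(conn, "myquery", 1, values, NULL, NULL,
                         0 /* ask for text-format results */);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "EXECUTE failed: %s", PQerrorMessage(conn));
    PQclear(res);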

-- 
- David T. Wilson
[EMAIL PROTECTED]



[GENERAL] max_stack_depth Exceeded

2008-09-05 Thread Ow Mun Heng
Hi,

I'm playing around with triggers to implement partitioning.
I hit an error I don't understand, and I don't have internet access here
at work to look up the cause:


ERROR:  stack depth limit exceeded

I see that max_stack_depth is one of the options in postgresql.conf, but I
don't know exactly what it does.





Re: [GENERAL] max_stack_depth Exceeded

2008-09-05 Thread Magnus Hagander
Ow Mun Heng wrote:
 Hi,
 
 I'm playing around with triggers to implement partitioning.
 I hit an error I don't understand, and I don't have internet access here
 at work to look up the cause:
 
 
 ERROR:  stack depth limit exceeded
 
 I see that max_stack_depth is one of the options in postgresql.conf, but I
 don't know exactly what it does.

Sounds like you may have created a situation with infinite recursion.

Like in some branch your trigger is inserting back into the parent
table, thus firing the trigger again in an endless loop, instead of
inserting it into the proper child table.

//Magnus




Re: [GENERAL] max_stack_depth Exceeded

2008-09-05 Thread Ow Mun Heng
On Fri, 2008-09-05 at 10:35 +0200, Magnus Hagander wrote:
 Ow Mun Heng wrote:
  Hi,
  
  I'm playing around with triggers to implement partitioning.
  I hit an error I don't understand, and I don't have internet access here
  at work to look up the cause:
  
  
  ERROR:  stack depth limit exceeded
  
  I see that max_stack_depth is one of the options in postgresql.conf, but I
  don't know exactly what it does.
 
 Sounds like you may have created a situation with infinite recursion.
 
 Like in some branch your trigger is inserting back into the parent
 table, thus firing the trigger again in an endless loop, instead of
 inserting it into the proper child table.

This seems simple enough.

CREATE OR REPLACE FUNCTION head_raw_all_test_2_insert_trigger()
  RETURNS trigger AS
$BODY$
BEGIN

IF ( NEW.test_run_start_date_time >= '2008-08-18' and
NEW.test_run_start_date_time < '2008-08-19' ) THEN
   INSERT INTO head_raw_all_test_2_prod_8_18 VALUES (NEW.*);
ELSEIF ( NEW.test_run_start_date_time >= '2008-08-19' and
NEW.test_run_start_date_time < '2008-08-20' ) THEN
   INSERT INTO head_raw_all_test_2_prod_8_19 VALUES (NEW.*);
ELSE
  INSERT INTO head_raw_all_test_2 VALUES (NEW.*);
END IF;
RETURN NULL;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;


This one, though, works.

CREATE OR REPLACE FUNCTION head_raw_all_test_insert_trigger()
  RETURNS trigger AS
$BODY$
BEGIN

IF ( NEW.dcm_evaluation_code = 'PROD' OR NEW.dcm_evaluation_code is
null) THEN
   INSERT INTO head_raw_all_test_prod VALUES (NEW.*);
ELSEIF ( NEW.dcm_evaluation_code <> 'PROD' ) THEN
   INSERT INTO head_raw_all_test_eval VALUES (NEW.*);
ELSE
  INSERT INTO head_raw_all_test VALUES (NEW.*);
END IF;
RETURN NULL;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;
ALTER FUNCTION head_raw_all_test_insert_trigger() OWNER TO operator;


Am I doing something wrong? Is it the ELSE condition that is making it
recurse further and further?




[GENERAL] xml queries date format

2008-09-05 Thread Jef Peeraer

I am using the XML add-ons, but the date output format seems to be wrong.
I have:
show datestyle;
 DateStyle
-----------
 SQL, DMY

select agenda_datum from dossiers where id = 61;
 agenda_datum
--------------
 29/07/2008

select table_to_xml('dossiers', false, false, '');
gives (snip):

<row>
  <id>62</id>
  <voorwerp_detail>5 coils 5.622 kg</voorwerp_detail>
  <schade_datum>2008-07-29</schade_datum>
  <voorbehoud>false</voorbehoud>
  <protest>false</protest>
  <vordering>false</vordering>
  <afgewezen>false</afgewezen>
  <gedeeltelijk_afgewezen>false</gedeeltelijk_afgewezen>
  <verhaal>false</verhaal>
  <administratief>false</administratief>
</row>



jef



Re: [GENERAL] Cannot connect to server

2008-09-05 Thread Filip Rembiałkowski
2008/9/5 Paul Shapley [EMAIL PROTECTED]:
 Hi,



 I am attempting to 'add a connection' to the postgres database using
 pgAdmin III v1.8.4 and PostgreSQL v8.3.3-1 with PostGIS 1.3.3 on Mac OS X
 10.5.4.



 After an initial installation it may connect successfully for a couple of
 days before I'm unable to connect anymore. It is always the same pattern.



 I've tried the recommendations offered on the postgresql community pages but
 nothing seems to work. I've made the usual changes to:



 postgresql.conf (listen_addresses = '*' instead of 'localhost')

 pg_hba.conf


 but cannot get any further than this error:





 Error connecting to the server: could not connect to server: No such file or
 directory

        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

The question included in the above error message is quite important:

Is the server running? (check the process list: ps ax | grep postgres)
Does the socket file exist? (check the filesystem: ls -l /tmp/.s.PGSQL.5432)

It's possible that the client tries to connect via a non-existent socket.
That happens when the server config differs from the compiled-in default
socket location.

In this case, connecting via TCP instead should help.
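
For example, from a terminal (assuming the default port and a superuser
named postgres; adjust both to your setup):

    psql -h 127.0.0.1 -p 5432 -U postgres -d postgres

In pgAdmin the equivalent is to put 127.0.0.1 (rather than leaving the
host blank for a local socket) in the host field of the connection dialog.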







 --
 Paul J. Shapley



-- 
Filip Rembiałkowski



Re: [GENERAL] max_stack_depth Exceeded

2008-09-05 Thread Magnus Hagander
Ow Mun Heng wrote:
 On Fri, 2008-09-05 at 10:35 +0200, Magnus Hagander wrote:
 Ow Mun Heng wrote:
 Hi,

 I'm playing around with triggers to implement partitioning.
 I hit an error I don't understand, and I don't have internet access here
 at work to look up the cause:


 ERROR:  stack depth limit exceeded

 I see that max_stack_depth is one of the options in postgresql.conf, but I
 don't know exactly what it does.
 Sounds like you may have created a situation with infinite recursion.

 Like in some branch your trigger is inserting back into the parent
 table, thus firing the trigger again in an endless loop, instead of
 inserting it into the proper child table.
 
 This seems simple enough.
 
 CREATE OR REPLACE FUNCTION head_raw_all_test_2_insert_trigger()
   RETURNS trigger AS
 $BODY$
 BEGIN
 
 IF ( NEW.test_run_start_date_time >= '2008-08-18' and
 NEW.test_run_start_date_time < '2008-08-19' ) THEN
    INSERT INTO head_raw_all_test_2_prod_8_18 VALUES (NEW.*);
 ELSEIF ( NEW.test_run_start_date_time >= '2008-08-19' and
 NEW.test_run_start_date_time < '2008-08-20' ) THEN
INSERT INTO head_raw_all_test_2_prod_8_19 VALUES (NEW.*);
 ELSE
   INSERT INTO head_raw_all_test_2 VALUES (NEW.*);
 END IF;
 RETURN NULL;
 END;
 $BODY$
   LANGUAGE 'plpgsql' VOLATILE;
 
 
 This one, though, works.
 
 CREATE OR REPLACE FUNCTION head_raw_all_test_insert_trigger()
   RETURNS trigger AS
 $BODY$
 BEGIN
 
 IF ( NEW.dcm_evaluation_code = 'PROD' OR NEW.dcm_evaluation_code is
 null) THEN
INSERT INTO head_raw_all_test_prod VALUES (NEW.*);
 ELSEIF ( NEW.dcm_evaluation_code <> 'PROD' ) THEN
INSERT INTO head_raw_all_test_eval VALUES (NEW.*);
 ELSE
   INSERT INTO head_raw_all_test VALUES (NEW.*);
 END IF;
 RETURN NULL;
 END;
 $BODY$
   LANGUAGE 'plpgsql' VOLATILE;
 ALTER FUNCTION head_raw_all_test_insert_trigger() OWNER TO operator;
 
 
 Am I doing something wrong? Is it the ELSE condition that is making it
 recurse further and further?

If your trigger is defined on the head_raw_all_test_2 table, then yes.
Because it will do a new insert there, and the new insert will fire the
trigger again, which will do a new insert, which will fire the trigger, etc.

//Magnus



Re: [GENERAL] Cannot connect to server

2008-09-05 Thread Reid Thompson
On Fri, 2008-09-05 at 11:44 +0200, Filip Rembiałkowski wrote:

 
 is the server running (see process list, ps ax | grep postgres)
 does the socket file exist? (see filesystem, ls -l /tmp/.s.PGSQL.5432)
Since you say things work for several days, then stop -- make sure
there's no braindead automated cleanup of /tmp that is
removing /tmp/.s.PGSQL.5432
 
 it's possible that the client tries to connect via non-existing
 socket. it happens when server config is different from compiled-in
 defaults for socket location.
 
 in this case try to connect via TCP,  this should help



[GENERAL] large inserts and fsync

2008-09-05 Thread Aaron Burnett

Forgive me if this is a many-times rehashed topic. I'm very new to
postgresql; most of my background is in Oracle.

Running postgres 8.2.5 with one master and three slaves (using slony)

For an upcoming release there is a 16 million row insert that on our test
cluster takes about 2.5 hours to complete with all indices dropped
beforehand.

If I turn off fsync, it completes in under 10 minutes.

Other than the protection that fsync will allow me should there be a crash
in the middle of such a process, my thinking was to turn off fsync for this
part of the release, get the insert done, then restart fsync once the insert
is complete.

Am I opening myself up to any dangers that aren't obvious by doing this? Any
advice to the contrary? And of course, if I am out of my mind for doing
this, please let me know.

Thanks in advance

Aaron


Re: [GENERAL] a performence question

2008-09-05 Thread Filip Rembiałkowski
2008/9/4 Rafal Pietrak [EMAIL PROTECTED]:
 Hi,

 Maybe someone on this list actually have already tried this:

 I'm planning to make a partitioned database. From the Postgres documentation
 I can see that there are basically two methods to route INSERTs into a
 partitioned table:
   1. a TRIGGER
   2. a RULE

 My table will have over 1000 partitions. Some are not so big, but a
 significant number of them will have multimillion rows. Partitioning
 will be done on a single column, using equality, meaning:

 CREATE TABLE mainlog (sel int, tm timestamp, info text,...);
 CREATE TABLE mainlog_p1 (CHECK (sel=1)) INHERITS (mainlog);
 CREATE TABLE mainlog_p2 (CHECK (sel=2)) INHERITS (mainlog);
 ...etc.

 If I route INSERTs with a TRIGGER, the function would look like:
 CREATE FUNCTION ... RETURNS trigger AS $$ DECLARE x RECORD; BEGIN
 SELECT id INTO x FROM current_route; NEW.sel := x.id;
 IF NEW.sel = 1 THEN INSERT INTO mainlog_p1 VALUES (NEW.*);
 ELSIF NEW.sel = 2 THEN INSERT INTO mainlog_p2 VALUES (NEW.*);
 ...
 END IF;
 RETURN NULL;
 END; $$ LANGUAGE plpgsql;

 If I route INSERTs with a RULE, I'd have something like 1000 rules hooked
 up to MAINLOG, all looking like:
 CREATE RULE ... ON INSERT ... WHERE EXISTS (SELECT 1 FROM current_route
 WHERE id = 1) DO INSTEAD INSERT INTO mainlog_p1 SELECT
 x.id, new.tm, ... FROM (SELECT id FROM current_route) x;
 ... and similar RULEs for cases WHERE id = 2, etc.

 My question is: where should I expect better performance on those
 INSERTs?

 I would prefer a set of RULEs (as I wouldn't like to rewrite the TRIGGER
 function every time I add a partition ... a thousand-line function),
 but since they all must make a select query on the CURRENT_ROUTE table,
 maybe that will not be particularly effective? The TRIGGER function does a
 single query - maybe it'll be faster? I was planning to generate some
 dummy data and run a simulation, but maybe someone already has that
 experience? Or maybe the TRIGGER should look differently? Or the set of
 RULEs?


I had a bit of spare time, so I tested this:

see 
http://filip.rembialkowski.net/postgres-partitioning-performance-rules-vs-triggers/

It seems that in your scenario the trigger will be better.

But if I had to do this, and if performance was very important, I would
move the partition selection logic out of the INSERT phase; the application
can know the target partition before the actual insert (a sketch follows),
unless you want to shift selections very often...
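
For example (a sketch using the tables from the original mail; the column
list is abbreviated):

    -- the application resolved the current route to 1 up front, so it
    -- targets the child directly and skips the trigger/rule machinery:
    INSERT INTO mainlog_p1 (sel, tm, info) VALUES (1, now(), 'some event');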





-- 
Filip Rembiałkowski



Re: [GENERAL] large inserts and fsync

2008-09-05 Thread Sam Mason
On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
 For an upcoming release there is a 16 million row insert that on our test
 cluster takes about 2.5 hours to complete with all indices dropped
 beforehand.
 
 If I turn off fsync, it completes in under 10 minutes.

Have you tried bundling all the INSERT statements into a single
transaction?  If you haven't then PG will run each statement in its own
transaction and then commit each INSERT statement to disk separately,
incurring large overheads.


  Sam



Re: [GENERAL] large inserts and fsync

2008-09-05 Thread Aaron Burnett

Yes, the developer already made sure of that and I verified.


On 9/5/08 11:10 AM, Sam Mason [EMAIL PROTECTED] wrote:

 On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
 For an upcoming release there is a 16 million row insert that on our test
 cluster takes about 2.5 hours to complete with all indices dropped
 beforehand.
 
 If I turn off fsync, it completes in under 10 minutes.
 
 Have you tried bundling all the INSERT statements into a single
 transaction?  If you haven't then PG will run each statement in its own
 transaction and then commit each INSERT statement to disk separately,
 incurring large overheads.
 
 
   Sam




Re: [GENERAL] max_stack_depth Exceeded

2008-09-05 Thread Tom Lane
Magnus Hagander [EMAIL PROTECTED] writes:
 Ow Mun Heng wrote:
 Am I doing something wrong?

 If your trigger is defined on the head_raw_all_test_2 table,  then yes.
 Because it will do a new insert there, and the new insert will fire the
 trigger again, which will do a new insert, which will fire the trigger etc.

Of course, the way to have the row be inserted into the parent table is
to just let the trigger return it, instead of returning null.

regards, tom lane



Re: [GENERAL] xml queries date format

2008-09-05 Thread Tom Lane
Jef Peeraer [EMAIL PROTECTED] writes:
 I am using the XML add-ons, but the date output format seems to be wrong:

I think the conversion to xml intentionally always uses ISO date format,
because that's required by some spec somewhere.
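
If DMY output is needed, one possible workaround (a sketch, untested) is to
format the date as text yourself and use query_to_xml() instead of
table_to_xml():

    SELECT query_to_xml(
        'SELECT id, to_char(agenda_datum, ''DD/MM/YYYY'') AS agenda_datum
         FROM dossiers',
        false, false, '');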

regards, tom lane



Re: [GENERAL] large inserts and fsync

2008-09-05 Thread Sam Mason
On Fri, Sep 05, 2008 at 11:19:13AM -0400, Aaron Burnett wrote:
 On 9/5/08 11:10 AM, Sam Mason [EMAIL PROTECTED] wrote:
  On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
  For an upcoming release there is a 16 million row insert that on our test
  cluster takes about 2.5 hours to complete with all indices dropped
  beforehand.
  
  If I turn off fsync, it completes in under 10 minutes.
  
  Have you tried bundling all the INSERT statements into a single
  transaction?
 
 Yes, the developer already made sure of that and I verified.

I was under the impression that the only time PG synced the data to disk
was when the transaction was COMMITed.  I've never needed to turn off
fsync for performance reasons even when pulling in hundreds of millions
of rows.  I do tend to use a single large COPY rather than many small
INSERT statements.  PG spends an inordinate amount of time parsing
millions of SQL statements, whereas a tab delimited file is much easier
to parse.

Could you try bumping checkpoint_segments up a bit? Or have you tried
that already?
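
For reference, the COPY variant looks like this (a sketch; the path is
hypothetical, and a server-side COPY must read a file accessible to the
server, so use \copy from psql for a client-side file):

    BEGIN;
    COPY mytable FROM '/path/to/rows.tsv';  -- tab-delimited text is the default format
    COMMIT;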


  Sam



Re: [GENERAL] large inserts and fsync

2008-09-05 Thread Alan Hodgson
   Have you tried bundling all the INSERT statements into a single
   transaction?
 
  Yes, the developer already made sure of that and I verified.

I would verify that again, because fsync shouldn't make much of a difference 
in that circumstance. I might not do all 16 million in one transaction, but 
if you're doing 10 or 100 thousand at a time, it should be pretty fast.

Perhaps a language-level autocommit still needs to be disabled?


-- 
Alan



Re: [GENERAL] large inserts and fsync

2008-09-05 Thread Tom Lane
Aaron Burnett [EMAIL PROTECTED] writes:
 On 9/5/08 11:10 AM, Sam Mason [EMAIL PROTECTED] wrote:
 Have you tried bundling all the INSERT statements into a single
 transaction?

 Yes, the developer already made sure of that and I verified.

Hmm, in that case the penalty probably comes from pushing WAL data out
to disk synchronously.  It might be worth playing with wal_sync_method
and/or raising wal_buffers.

The trouble with turning fsync off is that a system crash midway through
the import might leave you with a corrupt database.  If you're willing
to start over from initdb then okay, but if you are importing into a
database that already contains valuable data, I wouldn't recommend it.
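
(For reference, both settings live in postgresql.conf; the values below are
illustrative guesses to experiment from, not tested recommendations:

    wal_buffers = 1MB            # default is 64kB; changing it requires a restart
    wal_sync_method = fdatasync  # best method is platform-dependent; benchmark the options
)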

regards, tom lane



[GENERAL] Error Installing PostgreSQL 8.3 in Silent Mode

2008-09-05 Thread Anderson dos Santos Donda
Hi All

I'm trying to install PostgreSQL 8.3 on Windows XP/Vista using the silent
mode of the MSI installer.

But the installation returns "Internal Account lookup Failure" after
creating the service account user.

Here is my line:

msiexec /i postgresql-8.3-int.msi /qr ADDLOCAL=server,psql,pgadmin
INTERNALLAUNCH=1 DOSERVICE=1 DOINITDB=1 SERVICEDOMAIN=%COMPUTERNAME%
SERVICEACCOUNT=postgres_service SERVICEPASSWORD=bob_forever
CREATESERVICEUSER=1 SERVICENAME=DatabaseLocal SUPERUSER=postgres
SUPERPASSWORD=bobmarley PERMITREMOTE=0 PL_PGSQL=1 NOSHORTCUTS=1
TRANSFORMS=:lang_pt_br


Can anyone help me?


Re: [GENERAL] max_stack_depth Exceeded

2008-09-05 Thread Ow Mun Heng


-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 05, 2008 11:22 PM
To: Magnus Hagander
Cc: Ow Mun Heng; pgsql-general@postgresql.org
Subject: Re: [GENERAL] max_stack_depth Exceeded 

Magnus Hagander [EMAIL PROTECTED] writes:
 Ow Mun Heng wrote:
 Am I doing something wrong?

 If your trigger is defined on the head_raw_all_test_2 table,  then yes.
 Because it will do a new insert there, and the new insert will fire the
 trigger again, which will do a new insert, which wil lfire the trigger
etc.

Of course, the way to have the row be inserted into the parent table is
to just let the trigger return it, instead of returning null.

Er... how, exactly? I'm new at this, so the syntax would be appreciated.
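
Something like this, assuming the function is attached as a BEFORE INSERT
trigger (a sketch of Tom's suggestion applied to the first function above;
only the ELSE branch changes):

    CREATE OR REPLACE FUNCTION head_raw_all_test_2_insert_trigger()
      RETURNS trigger AS
    $BODY$
    BEGIN
      IF ( NEW.test_run_start_date_time >= '2008-08-18' AND
           NEW.test_run_start_date_time <  '2008-08-19' ) THEN
        INSERT INTO head_raw_all_test_2_prod_8_18 VALUES (NEW.*);
      ELSIF ( NEW.test_run_start_date_time >= '2008-08-19' AND
              NEW.test_run_start_date_time <  '2008-08-20' ) THEN
        INSERT INTO head_raw_all_test_2_prod_8_19 VALUES (NEW.*);
      ELSE
        RETURN NEW;  -- no child matched: let the row be inserted into the
                     -- parent table itself, without re-firing this trigger
      END IF;
      RETURN NULL;   -- row went to a child; suppress the original insert
    END;
    $BODY$
      LANGUAGE 'plpgsql' VOLATILE;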




Re: [GENERAL] log_statement not working on pl/pgsql functions

2008-09-05 Thread Bruce Momjian
Fernando Moreno wrote:
 Hi, I've changed the setting log_statement to mod in order to log data
 modifications, and it's working fine for statements sent by the client
 application (psql included), but insert/update/delete statements executed
 inside functions are not logged. The functions are called in a select query.
 
 I've reloaded (even restarted) the server, the line with the setting is
 uncommented, and show log_statement returns mod. I changed its value to
 all for a while and it worked as expected, logging every single query. By
 the way, I'm using PostgreSQL 8.3.1 on Windows XP.
 
 Am I doing something wrong?

The problem is that the checks for the statement type are done at the
time the query arrives from the client, not while executing in a
function.
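
For example, with log_statement = 'mod' (the function name below is
hypothetical):

    SELECT insert_audit_row(42);  -- does an INSERT internally, but the
                                  -- statement the server receives is a
                                  -- SELECT, so nothing is logged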

-- 
  Bruce Momjian  [EMAIL PROTECTED]http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
