[GENERAL] Create Trigger Function For Table Partition.

2010-01-26 Thread Yan Cheng Cheok
By referring to 
http://www.postgresql.org/docs/current/static/ddl-partitioning.html

(1) I created a trigger function as follows:

CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS 
$BODY$DECLARE
measurement_table_index bigint;
measurement_table_name text;
BEGIN
measurement_table_index = NEW.measurement_id % 20;
measurement_table_name = 'measurement_' || measurement_table_index;
EXECUTE 'INSERT INTO ' || measurement_table_name || ' VALUES (NEW.*);';
RETURN NULL;
END;$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
BEFORE INSERT ON measurement
FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();


I can see my trigger function named measurement_insert_trigger under pgAdmin.
However, I cannot see insert_measurement_trigger.

http://sites.google.com/site/yanchengcheok/Home/cannot-see-trigger-point.PNG?attredirects=0

How can I see insert_measurement_trigger under pgAdmin?

(2) When I execute SELECT * FROM create_lot();

CREATE OR REPLACE FUNCTION create_lot()
  RETURNS void AS
$BODY$DECLARE
_lot_id bigint;
_unit_id bigint;
count int;
count2 int;   
BEGIN 
INSERT INTO lot (v) VALUES ('Lot0') RETURNING lot_id INTO _lot_id;
count = 1;
LOOP 
INSERT INTO unit (fk_lot_id, v) VALUES (_lot_id, 'Unit') RETURNING 
unit_id INTO _unit_id;
   
count2 = 1;
LOOP
INSERT INTO measurement (fk_unit_id, v) VALUES (_unit_id, 
'Measurement');
count2 = count2 + 1;
EXIT WHEN count2 > 3;
END LOOP;

count = count + 1;
EXIT WHEN count > 3;
END LOOP;  
END;$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;
ALTER FUNCTION create_lot() OWNER TO postgres;

I get the following error:

ERROR:  NEW used in query that is not in a rule
LINE 1: INSERT INTO measurement_9 VALUES (NEW.*);
  ^
QUERY:  INSERT INTO measurement_9 VALUES (NEW.*);
CONTEXT:  PL/pgSQL function "measurement_insert_trigger" line 7 at EXECUTE statement
SQL statement "INSERT INTO measurement (fk_unit_id, v) VALUES ( $1 , 'Measurement')"
PL/pgSQL function "create_lot" line 14 at SQL statement

It seems that NEW is not recognized within the EXECUTE statement. How can I 
avoid this problem? I cannot use static SQL, as my table name needs to be 
dynamically generated.

Thanks and Regards
Yan Cheng CHEOK


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Shall I convert all my stored procedure to use EXECUTE, to ensure I get index-scan

2010-01-26 Thread Yan Cheng Cheok
Does it mean, if it isn't broken, don't fix it?

Thanks and Regards
Yan Cheng CHEOK


  




Re: [GENERAL] Create Trigger Function For Table Partition.

2010-01-26 Thread yccheok

I solved my second problem using the following technique:

EXECUTE 'INSERT INTO ' || measurement_table_name || '(fk_unit_id, v) VALUES
(' || NEW.fk_unit_id || ',' || quote_literal(NEW.v) || ')';
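An alternative sketch, assuming PostgreSQL 8.4 or later: EXECUTE ... USING can pass the whole NEW row into the dynamic statement, so text values need no manual quoting at all:

```sql
-- Sketch only: quote the table name with quote_ident() and pass NEW as a
-- parameter; ($1).* expands the row into its columns.
EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name)
     || ' SELECT ($1).*'
USING NEW;
```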



yccheok wrote:
 [original message quoted in full; trimmed]

-- 
View this message in context: 
http://old.nabble.com/Create-Trigger-Function-For-Table-Partition.-tp27319259p27319924.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-26 Thread Yan Cheng Cheok
Currently, I have a table (measurement) for which I implement a table 
partitioning policy.

http://www.postgresql.org/docs/current/static/ddl-partitioning.html

Whenever an item is inserted into the measurement table, a modulo is performed 
on the measurement table's primary key. Then, using the result of the modulo, a 
dynamic table name is generated, and that particular row is assigned to one of 
measurement's child tables.

Some portion of code is as follow :


-- measurement table


CREATE TABLE measurement
(
  measurement_id bigserial NOT NULL,
  fk_unit_id bigint NOT NULL,
  v text NOT NULL,
  CONSTRAINT pk_measurement_id PRIMARY KEY (measurement_id),
  CONSTRAINT fk_unit_id FOREIGN KEY (fk_unit_id)
  REFERENCES unit (unit_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
); 


-- measurement table trigger function


CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS 
$BODY$DECLARE
measurement_table_index bigint;
measurement_table_name text;
BEGIN
-- 20 is just an example here right now. The true value will be 100,000,000
measurement_table_index = NEW.measurement_id % 20;
measurement_table_name = 'measurement_' || measurement_table_index;

-- Since measurement_id for parent table is already a bigserial
-- Do I still need to create index for child's measurement_id?

IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = 
measurement_table_name) THEN
EXECUTE 'CREATE TABLE ' || quote_ident(measurement_table_name) || '
(
) INHERITS (measurement);';
EXECUTE 'CREATE INDEX ' || quote_ident(measurement_table_name) || 
'_measurement_id ON ' || quote_ident(measurement_table_name) || 
'(measurement_id);';  
END IF;

EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name) || 
'(fk_unit_id, v) VALUES (' || NEW.fk_unit_id || ',' || quote_literal(NEW.v) || 
')';
RETURN NULL;
END;$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER insert_measurement_trigger
BEFORE INSERT ON measurement
FOR EACH ROW EXECUTE PROCEDURE measurement_insert_trigger();


However, whenever I insert rows into the measurement table, I notice the primary 
key values go 2, 4, 6, 8, 10...

How can I prevent this?

The complete code is at 

http://sites.google.com/site/yanchengcheok/Home/table-partition.sql?attredirects=0d=1

(1) create a database named sandbox.

(2) execute script in table-partition.sql

(3) SELECT * FROM create_lot();

(4) View on measurement table.

Also, is it necessary to create an index on measurement_id in measurement's 
child tables? I am concerned about read speed.

Thanks and Regards
Yan Cheng CHEOK


  




[GENERAL] Correct Concept On Table Partition

2010-01-26 Thread Yan Cheng Cheok
Currently, I plan to use table partitioning to solve the following problem.

I have a table which is going to grow to a huge number of rows as time goes on.

As I understand it, as a table grows larger, read operations get slower.

Hence, I have decided to use table partitioning in order to improve read speed.

I have parent table named measurement.

Then I will have child tables named measurement_1, measurement_2, 

The first million rows will be written to measurement_1, the second million into 
measurement_2, ...
 
My understanding is, 

(1) the measurement table will act as a virtual table, making it easy for me to 
perform reads and writes.

(2) measurement_1, measurement_2, ... will be real tables.

(3) when viewing the 2nd million rows (1,000,001 - 2,000,000) of measurement:

before partitioning
===================
instead of reading all 2 million rows and displaying rows 1,000,001 - 
2,000,000,

after partitioning
==================
we will just need to access table measurement_2, which is smaller and should 
be faster.

(4) extensive join operations will be involved. I am more concerned about read speed.


Is this the correct expectation of table partitioning?

Thanks and Regards
Yan Cheng CHEOK


  




Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-26 Thread Alban Hertroys
On 26 Jan 2010, at 11:00, Yan Cheng Cheok wrote:

 However, whenever I insert row into measurement table, I realize its primary 
 key value is going from 2, 4, 6, 8, 10...
 
 May I know how can I prevent this?

Apparently nextval on that sequence gets called multiple times in your queries.

 Also, is it necessary to create index for measurement_id found in 
 measurement's child table? I am concern on the read speed.


Serials don't create indexes, only a sequence. Primary keys create indexes (to 
guarantee uniqueness) - foreign keys do not.
So yes, you want an index on that column in the child table.
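One way to avoid the double increment is to reuse the value already assigned to NEW by the parent's bigserial, rather than letting the child table's inherited DEFAULT call nextval() a second time — a sketch based on the trigger posted above:

```sql
-- Sketch: pass NEW.measurement_id through explicitly, so the child
-- table's DEFAULT nextval() is never evaluated.
EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name)
     || ' (measurement_id, fk_unit_id, v) VALUES ('
     || NEW.measurement_id || ', ' || NEW.fk_unit_id || ', '
     || quote_literal(NEW.v) || ')';
```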

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.







Re: [GENERAL] Correct Concept On Table Partition

2010-01-26 Thread A. Kretschmer
In response to Yan Cheng Cheok :
 Currently, I plan to use table partition to solve the following problem.
 I have a table which is going to grow to a very huge row, as time goes on.
 As I know, as table grow larger, the read operation will be slower.
 
 Hence, I decide to use table partition, in order to improve read speed.
 ...
 
 First 1st millions rows will be write to measurement_1, 2nd millions into 
 measurement_2, 
 
 
 Is this the correct expectation, on table partition?

That depends on your SELECTs. You need an attribute to decide which
child table contains your data.

For instance, create child tables for every month. Then you can run
SELECT ... WHERE date >= '2010-01-01'::date AND date < '2010-02-01'::date
to select all data for that particular month.

Your child tables should contain CHECK constraints to enforce this
partitioning scheme.

There are a lot of examples in the internet how to do that, for instance:
http://www.if-not-true-then-false.com/2009/11/howto-create-postgresql-table-partitioning-part-1/
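One such monthly child table might look like this (a minimal sketch; "logdate" is an illustrative column name, not from the thread, and constraint_exclusion must be enabled for the planner to use the CHECK constraints):

```sql
-- Illustrative monthly partition: the CHECK constraint lets the planner
-- skip this child entirely for queries outside January 2010.
CREATE TABLE measurement_2010_01 (
    CHECK (logdate >= DATE '2010-01-01' AND logdate < DATE '2010-02-01')
) INHERITS (measurement);
```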


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



[GENERAL] Postgres Host

2010-01-26 Thread S Arvind
Hi Everyone,
My friend and I want a central DB to do our development on, as we are in
different locations. Does anyone know a Postgres service provider offering
a service which could help us?

Arvind S


Re: [GENERAL] initdb fails on Traditional Chinese machine when postgres install directory contains Chinese characters.

2010-01-26 Thread Craig Ringer
On 23/01/2010 5:19 AM, Sarkar, Sudipta wrote:
 Hi,
 
 I downloaded postgres 8.4 in zip format and installed it under 
 c:\postgres用�裘� on a traditional Chinese windows 2003 server. Note the 
 Chinese characters in the folder name. Then I tried to create a database 
 using initdb. I specified the following command:
 
 initdb.exe --encoding UTF-8 -D c:\mydb\db --username user1 -W -L 
 c:\postgres用�裘�\share

Hi

I'd like to try to reproduce this issue, but as I don't have a Chinese
localized Windows install I can't use the appropriate characters on the
console.

Is there any way you know of to switch Windows' locale in a cmd.exe
(console) window so you can use other locale's charsets?  What is the
name of the encoding Windows uses on your system? I know how to do all
this stuff in Linux, but everything language/locale related seems to be
painfully hard, expensive, and complicated under Windows.

Windows (except Vista Ultimate and 7 Ultimate) doesn't offer the option
to change languages for the system (MUI) and the language interface
packs (LIP) only work on top of a particular base language and don't
support major languages. I can't really install a Chinese windows VM, as
I *really* don't have the language skills to navigate around it and test
with it.

Anyway, what I expect is happening here is that initdb is assuming that
the path is in the database system encoding, where it's actually in the
system's native encoding. If you're using a path that is valid in both
encodings (ie each byte means the same thing) then you get away with it,
which is why ASCII works.

Most likely initdb needs to set client_encoding at some point where it's
forgetting to.

--
Craig Ringer



Re: [GENERAL] Postgres Host

2010-01-26 Thread Thom Brown
2010/1/26 S Arvind arvindw...@gmail.com

 Hi Everyone,
 me and my friend wants a central db to do our development as we are in
 different location. Do any one know postgres service provider who is doing
 service which can help us?

 Arvind S


There is a list of hosts available on the PostgreSQL site:
http://www.postgresql.org/support/professional_hosting

Regards

Thom


Re: [GENERAL] Postgres Host

2010-01-26 Thread Craig Ringer

On 26/01/2010 9:01 PM, S Arvind wrote:

Hi Everyone,
me and my friend wants a central db to do our development as we are in
different location. Do any one know postgres service provider who is
doing service which can help us?


There are quite a few services that host PostgreSQL. Google can help you 
- try "postgresql hosting". Alternatively, you can use a cloud hosting 
provider like Amazon's EC2 to run a VM with PostgreSQL on it, though you 
shouldn't expect much from the performance of such a setup.


--
Craig Ringer



Re: [GENERAL] Make Install contrib/tablefunc Problems

2010-01-26 Thread Stefan Schwarzer


The following does basically the same thing, but I find it a bit easier
to follow:

 sudo -u postgres /usr/local/pgsql/bin/psql < tablefunc.sql


Thanks for that. Looks indeed less complicated! :-)

But not yet success for me:

I did re-start, just to be sure.

for postgres:
./configure (with parameters)
make

for tablefunc:
make
sudo make install

and then:
	sudo -u postgres /usr/local/pgsql/bin/psql -d geodataportal < contrib/tablefunc/tablefunc.sql

could not identify current directory: Permission denied
could not identify current directory: Permission denied
SET
	ERROR:  could not load library "/usr/local/pgsql/lib/tablefunc.so":
dlopen(/usr/local/pgsql/lib/tablefunc.so, 10): no suitable image found.  Did find:

	/usr/local/pgsql/lib/tablefunc.so: mach-o, but wrong architecture
ERROR:  relation "tablefunc_crosstab_2" already exists
ERROR:  relation "tablefunc_crosstab_3" already exists
ERROR:  relation "tablefunc_crosstab_4" already exists
[... the same "wrong architecture" error repeats for each remaining function in tablefunc.sql ...]

Besides the fact that I have no idea what is going on, when I look 
through my database (-d geodataportal) I don't see any 
tablefunc functions.


Thanks for any help.



Re: [GENERAL] Postgres Host

2010-01-26 Thread S Arvind
Most of the sites listed there do not have Postgres; I think it would be
better to clean up the links on that page.
As Craig suggested, I am also interested in cloud hosting; I will have to
check it out.


On Tue, Jan 26, 2010 at 6:51 PM, Thom Brown thombr...@gmail.com wrote:

 2010/1/26 S Arvind arvindw...@gmail.com

 Hi Everyone,
 me and my friend wants a central db to do our development as we are in
 different location. Do any one know postgres service provider who is doing
 service which can help us?

 Arvind S


 There is a list of hosts available on the PostgreSQL site:
 http://www.postgresql.org/support/professional_hosting

 Regards

 Thom



Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Andy Colson

On 1/25/2010 8:12 PM, Craig Ringer wrote:

On 26/01/2010 12:15 AM, Dino Vliet wrote:


5) Other considerations?



Even better is to use COPY to load large chunks of data. libpq provides
access to the COPY interface if you feel like some C coding. The JDBC
driver (dev version only so far) now provides access to the COPY API, so
you can also bulk-load via Java very efficiently now.

--
Craig Ringer



I recall seeing someplace that you can avoid WAL if you start a 
transaction, then truncate the table, then start a COPY.


Is that correct?  Still hold true?  Would it make a lot of difference?

(Also, small plug, perl supports the COPY api too)

-Andy




Re: [GENERAL] Make Install contrib/tablefunc Problems

2010-01-26 Thread Tom Lane
Stefan Schwarzer stefan.schwar...@grid.unep.ch writes:
   ERROR:  could not load library /usr/local/pgsql/lib/tablefunc.so:  
 dlopen(/usr/local/pgsql/lib/tablefunc.so, 10): no suitable image  
 found.  Did find:
   /usr/local/pgsql/lib/tablefunc.so: mach-o, but wrong 
 architecture

That's very curious.  Apparently you managed to build a .so of the wrong
machine architecture (ppc vs x86 vs x86_64).  I would guess that you are
using somebody else's build of Postgres and didn't manage to duplicate
their configuration completely.  Now, if it is somebody else's build,
they really should have provided the contrib modules too ... so why
didn't you just grab tablefunc from their distribution?

regards, tom lane



[GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread Moe
Hi folks,

I have a simple script file db :
#!/bin/bash
pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

--
Which works fine when executed manually ( ./db ).. I get a dump file which
is around 1.9 MB


I run this script from the crontab schedueler (crontab -e) :

# m h  dom mon dow   command
33 04 * * * /var/a/scripts/db

This appears to execute the script file, but what I get is zero-sized
dump files.
So perhaps some user access restriction is limiting crontab, making it not run
as root.

How can I fix this to work with crontab? I am thinking that I should
change:
pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump


Thank you in advance / Moe


Re: [GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread APseudoUtopia
On Tue, Jan 26, 2010 at 11:54 AM, Moe mohamed5432154...@gmail.com wrote:
 Hi folks,

 I have a simple script file db :
 #!/bin/bash
 pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

 --
 Which works fine when executed manually ( ./db ).. I get a dump file which
 is around 1.9 MB


 I run this script from the crontab schedueler (crontab -e) :

 # m h  dom mon dow   command
 33 04 * * * /var/a/scripts/db

 Which appears to executing the script file, but what I get is zero sized
 dump files.
 So perhaps it is some use access that is limiting crontab, making it not run
 as root.

 How can I fix this to work with crontab ? I am thinking that I should change
 :
 pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

 Thank you in advance / Moe


Inside your script, try giving the full path to pg_dump. Cron
generally resets your $PATH.
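For example (a sketch only; the paths are illustrative and need adjusting to the actual installation), the script can call pg_dump by its absolute path and log stderr instead of discarding it:

```
#!/bin/bash
# Illustrative: absolute path to pg_dump, because cron resets $PATH;
# stderr goes to a log file so failures are visible, not silent.
/usr/local/pgsql/bin/pg_dump -U postgres prodDB \
    > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump \
    2>> /var/a/db/pg_dump.err
```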



Re: [GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread Andreas Kretschmer
Moe mohamed5432154...@gmail.com wrote:

 Hi folks,
 
 I have a simple script file db :
 #!/bin/bash
 pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump
 
 --
 Which works fine when executed manually ( ./db ).. I get a dump file which is
 around 1.9 MB
 
 
 I run this script from the crontab schedueler (crontab -e) :
 
 # m h  dom mon dow   command
 33 04 * * * /var/a/scripts/db
 
 Which appears to executing the script file, but what I get is zero sized dump
 files.
 So perhaps it is some use access that is limiting crontab, making it not run 
 as
 root.
 
 How can I fix this to work with crontab ? I am thinking that I should change :
 pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

Add the complete path to pg_dump; if the script runs under cron it does not
have your environment or your PATH, so pg_dump was not found.


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



[GENERAL] create role in a pl/pgsql trigger

2010-01-26 Thread Keresztury Balázs
hi,

I would like to write a trigger on a table which contains detailed
information about users. If I add a new user to this table, I'd like this
trigger to add the user to the database as well. Later I also want to
implement triggers for updating and deleting, but I got stuck at the first
part of this task.

Everything went fine until the point where I had to insert the username into
the CREATE ROLE command. Apparently the command accepts the role name only as
an unquoted identifier, while the language substitutes parameters only as
quoted literals.

Any idea? 

Thanks,
Balazs



Relevant metadata:

CREATE TABLE felhasznalo.felhasznalo (
  felhasznalo_id VARCHAR NOT NULL, 
  vezeteknev VARCHAR, 
  keresztnev VARCHAR, 
  utolso_belepes TIMESTAMP WITHOUT TIME ZONE DEFAULT now(), 
  kell_uj_jelszo BOOLEAN DEFAULT false NOT NULL, 
  aktiv BOOLEAN DEFAULT true NOT NULL, 
  aktiv_mettol TIMESTAMP WITHOUT TIME ZONE DEFAULT now(), 
  aktiv_meddig TIMESTAMP WITHOUT TIME ZONE, 
  modosito VARCHAR DEFAULT current_user, 
  modositas_idopont TIMESTAMP WITHOUT TIME ZONE DEFAULT now(), 
  CONSTRAINT felhasznalo_pkey PRIMARY KEY(felhasznalo_id)
) WITH OIDS;

CREATE TRIGGER felhasznalo_letrehozas BEFORE INSERT 
ON felhasznalo.felhasznalo FOR EACH ROW 
EXECUTE PROCEDURE public.felhasznalo_letrehozas_trigger();

CREATE OR REPLACE FUNCTION public.felhasznalo_letrehozas_trigger (
)
RETURNS trigger AS
$body$
BEGIN
CREATE ROLE NEW.felhasznalo_id INHERIT LOGIN IN ROLE USR ENCRYPTED
PASSWORD NEW.felhasznalo_id;
  RETURN NEW;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100;
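One possible fix (a sketch, untested): build the statement as a string and run it with EXECUTE, since utility commands cannot take PL/pgSQL variables directly; quote_ident() handles the role name and quote_literal() the password:

```sql
-- Sketch: dynamic CREATE ROLE inside the trigger body.
EXECUTE 'CREATE ROLE ' || quote_ident(NEW.felhasznalo_id)
     || ' INHERIT LOGIN IN ROLE usr ENCRYPTED PASSWORD '
     || quote_literal(NEW.felhasznalo_id);
```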






Re: [GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread Tom Lane
Moe mohamed5432154...@gmail.com writes:
 I have a simple script file db :
 #!/bin/bash
  pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

 Which works fine when executed manually ( ./db ).. I get a dump file which
 is around 1.9 MB

 I run this script from the crontab schedueler (crontab -e) :

 # m h  dom mon dow   command
 33 04 * * * /var/a/scripts/db

 Which appears to executing the script file, but what I get is zero sized
 dump files.

cron jobs typically run with a very minimal PATH.  I'll bet pg_dump
isn't in it.  Try putting the full path to pg_dump in the script.

Also consider sending its stderr someplace more useful than /dev/null,
so that you're not flying blind while debugging problems ...

regards, tom lane



Re: [GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread Moe
Thank you both, that did the trick.

Sincerely / Moe

On Tue, Jan 26, 2010 at 7:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Moe mohamed5432154...@gmail.com writes:
  I have a simple script file db :
  #!/bin/bash
  pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

  Which works fine when executed manually ( ./db ).. I get a dump file
 which
  is around 1.9 MB

  I run this script from the crontab schedueler (crontab -e) :

  # m h  dom mon dow   command
  33 04 * * * /var/a/scripts/db

  Which appears to executing the script file, but what I get is zero sized
  dump files.

 cron jobs typically run with a very minimal PATH.  I'll bet pg_dump
 isn't in it.  Try putting the full path to pg_dump in the script.

 Also consider sending its stderr someplace more useful than /dev/null,
 so that you're not flying blind while debugging problems ...

regards, tom lane



[GENERAL] row level security best practice

2010-01-26 Thread Keresztury Balazs
hi,

I'd like to implement row level security in a PostgreSQL 8.4.1 database, but
after several unsuccessful trials I have got a bit stuck.

I have a fact table (project) with a unique id (let's call it project_id)
which is going to be secured. There is another table (access) containing the
access data in the following format: user_id, project_id. If a user is
assigned to a project, a new record is entered into this table.

With the concept above I can restrict the projects using a simple view:

create view project_v as
select * from project inner join access a using(project_id) where a.user_id
= current_user::text;

grant select on project_v to public;
revoke select on project from public;

Users won't be able to select any rows other than their own. But what if I
want to let them update or delete from the original table?

grant delete, update on project to public;

CREATE TRIGGER projekt_1_jogosultsag BEFORE UPDATE OR DELETE 
ON project FOR EACH ROW 
EXECUTE PROCEDURE public.jogosultsag_trigger();

CREATE OR REPLACE FUNCTION public.jogosultsag_trigger (
)
RETURNS trigger AS
$body$
BEGIN
 -- users cannot modify data of projects that are not their own
 IF OLD.project_id NOT IN(SELECT project_id FROM project_v) THEN
RAISE EXCEPTION 'You cannot modify this project! (%)',
OLD.project_id;
 END IF;

 IF TG_OP='UPDATE' THEN
RETURN NEW;
 ELSE
RETURN OLD;
 END IF;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100;

Using the grants above, users clearly can't update or delete from the
original fact table, since the WHERE condition won't work without SELECT
privileges. I also considered using rules on the view, but if I understood
correctly, it isn't possible to use a similar IF condition in the rule
system.

How could I solve this problem?
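One direction worth exploring (a sketch, not a tested answer): grant DML on the view instead of the base table, and redirect writes with DO INSTEAD rules, where the per-row check the IF was doing moves into the rewritten statement's WHERE clause:

```sql
-- Sketch: DML goes through the view; the rule rewrites a DELETE on the
-- view into a DELETE on the base table, restricted to rows the current
-- user can see through project_v.
GRANT SELECT, UPDATE, DELETE ON project_v TO public;

CREATE RULE project_v_delete AS ON DELETE TO project_v
DO INSTEAD
    DELETE FROM project
    WHERE project_id = OLD.project_id
      AND project_id IN (SELECT project_id FROM project_v);
```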

Any help is appreciated!
Balazs




Re: [GENERAL] Postgres Host

2010-01-26 Thread John R Pierce

S Arvind wrote:
Most of the sites listed there do not have Postgres; I think it would be 
better to clean up the links on that page.


if you found specific hosts listed there that do not in fact offer 
postgres, it would be quite helpful to provide a list of those you've 
contacted so the web folks can update their listings.






Re: [GENERAL] 100% of CPU utilization postgres process

2010-01-26 Thread Greg Smith
Hashimoto Yuya wrote:
 Judging from the result, I could see that stats collector process
 caused this unusually high CPU utilization rate.
 I found similar problem at
 http://archives.postgresql.org/pgsql-general/2008-06/msg00934.php,
 although there seemed
 no clear cause proven nor the statement that it's because of postgres bug.

Right, that thread concluded with
http://archives.postgresql.org/pgsql-general/2008-06/msg01026.php where
Tom suggested it looked like a FreeBSD bug on that version. I just poked
around a bit, and there do seem to have been a number of bugs in their
poll() implementation in various versions of that OS, so it seems
reasonable this is just another one of those.

Not sure if Depesz is reading this list or not; I just added him to the cc
list here. Hubert, did you ever get anywhere with tracking this issue down?

-- 
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com



Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Greg Smith

Andy Colson wrote:
I recall seeing someplace that you can avoid WAL if you start a 
transaction, then truncate the table, then start a COPY.


Is that correct?  Still hold true?  Would it make a lot of difference?


That is correct, still true, and can make a moderate amount of 
difference if the WAL is really your bottleneck.  More of a tweak for 
loading small to medium size things as I see it.  Once the database and 
possibly its indexes get large enough, the loading time starts being 
dominated by handling all that work, with its random I/O, rather than 
being limited by the sequential writes to the WAL.  It's certainly a 
useful optimization to take advantage of when you can, given that it's 
as easy as:


BEGIN;
TRUNCATE TABLE x;
COPY x FROM ... ;
COMMIT;

--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread steeles
I am new to postgres.
By running pg_dump like this, do we need to type in the password to log in manually?

Thanks 
Sent from my BlackBerry device on the Rogers Wireless Network

-Original Message-
From: Moe mohamed5432154...@gmail.com
Date: Tue, 26 Jan 2010 19:25:50 
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg dump.. issue with when using crontab

Thank you both, that did the trick.

Sincerely / Moe

On Tue, Jan 26, 2010 at 7:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Moe mohamed5432154...@gmail.com writes:
  I have a simple script file db :
  #!/bin/bash
  pg_dump -U postgres prodDB > /var/a/db/$(date +%Y-%m-%d_%H:%M)-prodDB.dump

  Which works fine when executed manually ( ./db ).. I get a dump file
 which
  is around 1.9 MB

  I run this script from the crontab schedueler (crontab -e) :

  # m h  dom mon dow   command
  33 04 * * * /var/a/scripts/db

  Which appears to be executing the script file, but what I get is zero-sized
  dump files.

 cron jobs typically run with a very minimal PATH.  I'll bet pg_dump
 isn't in it.  Try putting the full path to pg_dump in the script.

 Also consider sending its stderr someplace more useful than /dev/null,
 so that you're not flying blind while debugging problems ...

regards, tom lane




Re: [GENERAL] pg dump.. issue with when using crontab

2010-01-26 Thread Raymond O'Donnell
On 26/01/2010 22:28, stee...@gmail.com wrote:
 I am new to posgres.
 By running Pg-dump like this, do we need to type in pwd for login manually?

It depends on what access rules are defined in the pg_hba.conf file.
Read about it here:

  http://www.postgresql.org/docs/8.4/interactive/auth-pg-hba-conf.html

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Postgres Host

2010-01-26 Thread Greg Smith

S Arvind wrote:

 There is a list of hosts available on the PostgreSQL site: 
http://www.postgresql.org/support/professional_hosting
most of the sites listed there do not actually have postgres... I think it's 
better to clean up the links on that page...


Any suggestions about inaccurate info there would be appreciated.  I 
know for sure that you can get inexpensive PostgreSQL hosting from two 
of the US companies listed there:  hub.org provides shared hosts via 
FreeBSD jails, and A2 hosting has a variety of Linux-based offerings.


The whole cloud thing is more trouble than it's worth IMHO for a lot of 
these situations, given how inexpensive these more permanent installs 
are.  Seems like I'm always fighting to persist my data properly in any 
cloud deployment.  The main upside as I see for that work is that you 
end up with easy to replicate installs for scaling applications out, but 
if it's only a single server you're targeting I feel like the cloud 
setup overhead just isn't worth it in most cases.


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




[GENERAL] SMP Read-only Performance

2010-01-26 Thread Mike Bresnahan
I have a read-only database that I am testing the performance of to get a sense
of how many concurrent users I can support. The database fits entirely in RAM so
I expect there to be little to no disk activity. Because of this, I expect
throughput to scale almost linearly with the number of CPUs I have. However,
that is not what I am seeing. For example, take the following results.

Postgres 8.3.8
Fedora 9 
Intel(R) Xeon(R) CPU 5160 @ 3.00GHz X 2 (4 cores total)
4 GB RAM

-bash-3.2$ pgbench -c 1 -S -t 40 test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 1
number of transactions per client: 40
number of transactions actually processed: 40/40
tps = 11373.127109 (including connections establishing)
tps = 11373.907131 (excluding connections establishing)

-bash-3.2$ pgbench -c 2 -S -t 20 test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 2
number of transactions per client: 20
number of transactions actually processed: 40/40
tps = 11422.541958 (including connections establishing)
tps = 11423.979642 (excluding connections establishing)

-bash-3.2$ pgbench -c 3 -S -t 20 test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 3
number of transactions per client: 20
number of transactions actually processed: 60/60
tps = 11330.641664 (including connections establishing)
tps = 11332.074176 (excluding connections establishing)

-bash-3.2$ pgbench -c 4 -S -t 10 test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 4
number of transactions per client: 10
number of transactions actually processed: 40/40
tps = 11747.647262 (including connections establishing)
tps = 11750.710786 (excluding connections establishing)

During these tests top(1) reported 60-70% idle CPU.

As you see, the TPS remains the same as I increase the number of clients. These
results make it look like PostgreSQL is single-threaded and not taking advantage
of the multiple cores. Could someone please explain?









Re: [GENERAL] create role in a pl/pgsql trigger

2010-01-26 Thread Craig Ringer

On 27/01/2010 1:09 AM, Keresztury Balázs wrote:

hi,

I would like to write a trigger on a table which contains detailed
information about users. If I add a new user to this table, I'd like this
trigger to add the user to the database as well. Later I also want to
implement triggers for updating and deleting, but I got stuck at the first
part of this task.

Everything went fine until the point I had to insert the username into the
create role command. Apparently the command only accepts the role name as an
identifier, without the ' signs, while the language only supports substituting
parameters as literals quoted with apostrophes.

Any idea?


Use 'EXECUTE' (without USING because EXECUTE ... USING doesn't work 
for utility statements).
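A minimal sketch of that approach (the table and column names here are illustrative, not from the original post):

```sql
CREATE OR REPLACE FUNCTION create_db_user_trigger()
RETURNS trigger AS
$body$
BEGIN
    -- CREATE ROLE is a utility statement, so it cannot take $1-style
    -- parameters; build the statement as a string instead, and let
    -- quote_ident() escape the role name safely.
    EXECUTE 'CREATE ROLE ' || quote_ident(NEW.username) || ' LOGIN';
    RETURN NEW;
END;
$body$
LANGUAGE plpgsql;

CREATE TRIGGER users_create_role
    AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE create_db_user_trigger();
```

Note that the trigger function (and therefore the inserting user) needs the CREATEROLE privilege for this to succeed, unless the function is declared SECURITY DEFINER and owned by a role that has it.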


--
Craig Ringer



Re: [GENERAL] SMP Read-only Performance

2010-01-26 Thread Scott Marlowe
On Tue, Jan 26, 2010 at 3:01 PM, Mike Bresnahan
mike.bresna...@bestbuy.com wrote:
 During these tests top(1) reported 60-70% idle CPU.

 As you see, the TPS remains the same as I increase the number of clients. 
 These
 results make it look like PostgreSQL is single-threaded and not taking 
 advantage
 of the multiple cores. Could someone please explain?

This is more likely a limitation of pgbench and not postgresql.  pgsql
9.0 has a multi-threaded pgbench coming with it that should help for
testing these situations.

http://www.depesz.com/index.php/2009/08/26/waiting-for-8-5-multi-threaded-pgbench/



Re: [GENERAL] SMP Read-only Performance

2010-01-26 Thread Greg Smith

Mike Bresnahan wrote:

As you see, the TPS remains the same as I increase the number of clients. These
results make it look like PostgreSQL is single-threaded and not taking advantage
of the multiple cores. Could someone please explain?
  


You're probably running into this problem:  
http://notemagnet.blogspot.com/2008/05/pgbench-suffering-with-linux-2623-2626.html


I discovered that the issue with pgbench not running well on recent 
Linux kernels only occurs if you're connecting over the default sockets 
interface.  If you set up your server to listen over TCP/IP instead (you may 
have to tweak pg_hba.conf and listen_address in the postgresql.conf 
file), so that you can connect to it like this successfully:


psql -h localhost

You can then use the same method on pgbench:

pgbench -c 1 -S -t 40 test -h localhost

And I'd bet that you'd then see the scaling you expect.

--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




[GENERAL] back out configure options

2010-01-26 Thread zhong ming wu
Is there a way to figure out from binaries what options were used to
compile/config?  For example with apache I can do httpd -l

Thanks

mr. wu



Re: [GENERAL] back out configure options

2010-01-26 Thread Greg Smith

zhong ming wu wrote:

Is there a way to figure out from binaries what options were used to
compile/config?  For example with apache I can do httpd -l
  

pg_config is what you're looking for.

In some distributions, this may not be installed by default with the 
rest of the server.  For example, in the RPM version you need the 
postgresql-devel package to have it available.


--
Greg Smith    2ndQuadrant    Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-26 Thread Yan Cheng Cheok

Thanks and Regards
Yan Cheng CHEOK


--- On Tue, 1/26/10, Alban Hertroys dal...@solfertje.student.utwente.nl wrote:

 From: Alban Hertroys dal...@solfertje.student.utwente.nl
 Subject: Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table 
 Partition
 To: Yan Cheng Cheok ycch...@yahoo.com
 Cc: pgsql-general@postgresql.org
 Date: Tuesday, January 26, 2010, 6:36 PM
 On 26 Jan 2010, at 11:00, Yan Cheng
 Cheok wrote:
 
  However, whenever I insert row into measurement table,
 I realize its primary key value is going from 2, 4, 6, 8,
 10...
  
  May I know how can I prevent this?
 
 Apparently nextval on that sequence gets called multiple
 times in your queries.
 

Do you have any idea why this happens?

  Also, is it necessary to create index for
 measurement_id found in measurement's child table? I am
 concern on the read speed.
 
 
 Serials don't create indexes, only a sequence. Primary keys
 create indexes (to guarantee uniqueness) - foreign keys do
 not.
 So yes, you want an index on that column in the child
 table.
 

But...

the measurement table does contain

CONSTRAINT pk_measurement_id PRIMARY KEY (measurement_id),

so when the measurement_0 child table inherits from the measurement table,
won't it inherit PRIMARY KEY (measurement_id)?

Do I still have to create an index for measurement_0.measurement_id ?
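For what it's worth, PostgreSQL inheritance copies column definitions, NOT NULL
and CHECK constraints, but not primary keys, unique constraints, foreign keys,
or indexes, so each child table needs its own; a sketch:

```sql
-- The parent's PRIMARY KEY (measurement_id) is NOT inherited, so the
-- child must declare its own constraint (which also creates an index):
CREATE TABLE measurement_0 (
    CONSTRAINT pk_measurement_id_0 PRIMARY KEY (measurement_id)
) INHERITS (measurement);
```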

 Alban Hertroys
 
 --
 Screwing up is the best way to attach something to the
 ceiling.
 
 
 
 
 
 


  




[GENERAL] Question on Type of Query Which Will Take Advantage On Table Partition

2010-01-26 Thread Yan Cheng Cheok
Hello all,

By referring to tutorial on 
http://www.if-not-true-then-false.com/2009/11/howto-create-postgresql-table-partitioning-part-1/

I have a few doubts about which types of query will take advantage of table
partitioning.


CREATE TABLE impressions_by_day (
advertiser_id INTEGER NOT NULL,
day DATE NOT NULL DEFAULT CURRENT_DATE,
impressions INTEGER NOT NULL,
PRIMARY KEY (advertiser_id, day)
);

CREATE TABLE impressions_by_day_y2009m1ms2 (
PRIMARY KEY (advertiser_id, day), 
CHECK ( day >= DATE '2009-01-01' AND day < DATE '2009-03-01' )
) INHERITS (impressions_by_day);


SET constraint_exclusion = ON;


// This query doesn't take advantage of table partitioning.
// It needs to scan through *every* child table.
SELECT * FROM impressions_by_day


// Will this take advantage of table partitioning?
// Is PostgreSQL smart enough to know it only needs to look at
// impressions_by_day_y2009m1ms2 ???

SELECT * FROM impressions_by_day WHERE day >= DATE '2009-02-02'

// I am sure this will take advantage of table partitioning, won't it ???

SELECT * FROM impressions_by_day WHERE day >= DATE '2009-01-01' AND day < DATE
'2009-03-01'
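One way to check (a sketch, using the table names above): run EXPLAIN with
constraint exclusion enabled and look at which child tables appear in the plan.

```sql
SET constraint_exclusion = on;

-- With a WHERE clause that contradicts the other children's CHECK
-- constraints, the plan should scan only the (empty) parent and the
-- matching child, impressions_by_day_y2009m1ms2; without any constraint
-- on "day", every child table appears in the plan.
EXPLAIN SELECT *
FROM impressions_by_day
WHERE day >= DATE '2009-01-01' AND day < DATE '2009-03-01';
```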


Thanks and Regards
Yan Cheng CHEOK


  




Re: [GENERAL] dynamic crosstab

2010-01-26 Thread Pierre Chevalier

Hello,

Some time ago, it was written here:

...
I think there should be a generic way in Postgres to return from an 
EAV model. Although I have no evidence on that I keep thinking that 
the db must be more effective at that than the application would be.

...

Yes, thanks.
The problem with those functions is that they all have an AS (columnname 
type, ...) part or equivalent.



SWK, I fully understand your needs, and your (our) kind of frustration...

I am in a similar situation, with an EAV table, and the need to do 
crosstab queries, without knowing in advance which columns the query 
should return, and how many columns.
This is for chemical analysis results; laboratories store their results 
in an EAV way, and it is definitely a good choice, for a large number of 
reasons.


On your side, have you found a decent solution?
Has anyone got an answer?

I am just about to try this one:
http://www.ledscripts.com/tech/article/view/5.html


But I don't like this style too much: the crosstab approach seems more 
natural to me. I may be totally wrong.
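For reference, this is what the fixed-column limitation looks like with the
contrib/tablefunc crosstab function (the table and column names here are made
up for illustration):

```sql
-- EAV source: one row per (sample, parameter, value).
-- Requires the contrib/tablefunc module to be installed.
SELECT * FROM crosstab(
    $$SELECT sample_id, parameter, value
      FROM lab_result
      ORDER BY 1, 2$$
) AS ct(sample_id int, ph numeric, iron numeric, lead numeric);
-- The AS (...) column definition list must be written out in advance,
-- which is exactly what a generic/dynamic crosstab would have to avoid.
```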


So if there is a crosstab thing working in a generic way, that would be 
just super!


A+
Pierre





Re: [GENERAL] SMP Read-only Performance

2010-01-26 Thread Mike Bresnahan
Greg Smith greg at 2ndquadrant.com writes:
 You're probably running into this problem:  
 http://notemagnet.blogspot.com/2008/05/pgbench-suffering-with-linux-2623-2626.html

You are so right. The last thing I would have suspected is a kernel bug. I am
definitely going to try to be more aware of kernel happenings from now on. Thank
you!

-bash-3.2$ uname -a
Linux devpgre 2.6.25-14.fc9.x86_64 #1 SMP Thu May 1 06:06:21 EDT 2008 x86_64
x86_64 x86_64 GNU/Linux

-bash-3.2$ pgbench -c 1 -S -t 40 -h localhost test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 1
number of transactions per client: 40
number of transactions actually processed: 40/40
tps = 10716.907529 (including connections establishing)
tps = 10717.650674 (excluding connections establishing)

-bash-3.2$ pgbench -c 2 -S -t 40 -h localhost test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 2
number of transactions per client: 40
number of transactions actually processed: 80/80
tps = 14355.737471 (including connections establishing)
tps = 14356.991106 (excluding connections establishing)

-bash-3.2$ pgbench -c 3 -S -t 40 -h localhost test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 3
number of transactions per client: 40
number of transactions actually processed: 120/120
tps = 19068.715403 (including connections establishing)
tps = 19070.958268 (excluding connections establishing)

-bash-3.2$ pgbench -c 4 -S -t 40 -h localhost test
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
number of clients: 4
number of transactions per client: 40
number of transactions actually processed: 160/160
tps = 22951.995256 (including connections establishing)
tps = 22955.104092 (excluding connections establishing)






Re: [GENERAL] Partitioning on Table with Foreign Key

2010-01-26 Thread Yan Cheng Cheok
Hello Vick,

Can I get some advice from your side?

Currently, I have table :

1 lot is pointing to many units
1 unit is pointing to many measurements

Say, for example, I want "Give me all the measurement results within this lot."

For a single lot with 100,000 units, each unit having 48 different
measurements (4,800,000 measurements in total), it takes ~30 seconds to
return the result.

I use the following stored procedure to help me do so.

==
CREATE OR REPLACE FUNCTION get_measurements(IN bigint)
  RETURNS TABLE(_type text, _value double precision, _unit text) AS
$BODY$DECLARE
_lotID ALIAS FOR $1;
BEGIN
RETURN QUERY EXECUTE 'SELECT measurement_type.value, measurement.value, 
measurement_unit.value
FROM 
measurement INNER JOIN unit ON (fk_unit_id = unit_id)
WHERE fk_lot_id = $1'
USING _lotID;
END;$BODY$
  LANGUAGE 'plpgsql' VOLATILE;
ALTER FUNCTION get_measurements(bigint) OWNER TO postgres;
==

However, as time goes on and more lots are added, the performance of the
above stored procedure gets worse.

I plan to use table partition, to solve the above problem.

(1) Store unit rows according to their unit_id (primary key). For example, 

unit_id from 0 to 99 will be stored in table unit_0. 
unit_id from 100 to 199 will be stored in table unit_1. 


(2) Store measurement rows according to their measurement_id (primary key). For 
example, 

measurement_id from 0 to 99 will be stored in table measurement_0. 
measurement_id from 100 to 199 will be stored in table measurement_1.

However, will the above stored procedure gain any advantage from my (1) and 
(2) strategy?

From my point of view, PostgreSQL doesn't seem to get enough of a hint about 
which unit_? and measurement_? tables it needs to read. In my query, I didn't 
specify a predicate like

WHERE unit_id / 100 = 0

Advice and comment are very much appreciated.
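As a side note (a general observation about the planner, not from the original
thread): constraint exclusion only skips a child table when the query's WHERE
clause can be proven to contradict that child's CHECK constraint, so
range-style checks are usually easier for the planner to use than division
expressions; a sketch:

```sql
-- Range CHECK constraints compare directly against simple predicates:
CREATE TABLE measurement_0 (
    CHECK ( measurement_id >= 0 AND measurement_id < 100 )
) INHERITS (measurement);

-- A query that constrains the partition key can then be pruned to one child:
SELECT * FROM measurement WHERE measurement_id = 42;

-- A join on fk_unit_id alone gives the planner nothing to compare against
-- the CHECK constraints, so every child table is scanned.
```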


Below are my table partition rules:

CREATE TABLE measurement_0 (
  CONSTRAINT pk_measurement_id PRIMARY KEY (measurement_id),
  CONSTRAINT fk_unit_id FOREIGN KEY (fk_unit_id)
  REFERENCES unit (unit_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE,

CHECK ( measurement_id / 100 = 0 )
) INHERITS (measurement);

CREATE TABLE measurement_1 (
  CONSTRAINT pk_measurement_id PRIMARY KEY (measurement_id),
  CONSTRAINT fk_unit_id FOREIGN KEY (fk_unit_id)
  REFERENCES unit (unit_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE,

CHECK ( measurement_id / 100 = 1 )
) INHERITS (measurement);

..

CREATE TABLE unit_0 (
  CONSTRAINT pk_unit_id PRIMARY KEY (unit_id),
  CONSTRAINT fk_lot_id FOREIGN KEY (fk_lot_id)
  REFERENCES lot (lot_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE,

CHECK ( unit_id / 100 = 0 )
) INHERITS (unit);

CREATE TABLE unit_1 (
  CONSTRAINT pk_unit_id PRIMARY KEY (unit_id),
  CONSTRAINT fk_lot_id FOREIGN KEY (fk_lot_id)
  REFERENCES lot (lot_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE,

CHECK ( unit_id / 100 = 1 )
) INHERITS (unit);

...


Here are my table definitions:
==
IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = 
'unit') THEN
CREATE TABLE unit
(
  unit_id bigserial NOT NULL,
  fk_lot_id bigint NOT NULL,
  cycle bigint NOT NULL,
  CONSTRAINT pk_unit_id PRIMARY KEY (unit_id),
  CONSTRAINT fk_lot_id FOREIGN KEY (fk_lot_id)
  REFERENCES lot (lot_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
);

CREATE INDEX idx_fk_lot_id
  ON unit
  USING btree
  (fk_lot_id);
END IF;

IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = 
'measurement') THEN
CREATE TABLE measurement
(
  measurement_id bigserial NOT NULL,
  fk_unit_id bigint NOT NULL,
  value double precision,
  CONSTRAINT pk_measurement_id PRIMARY KEY (measurement_id),
  CONSTRAINT fk_unit_id FOREIGN KEY (fk_unit_id)
  REFERENCES unit (unit_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
); 

CREATE INDEX idx_fk_unit_id
  ON measurement
  USING btree
  (fk_unit_id);

CREATE INDEX idx_measurement_value
  ON measurement
  USING btree
  (value) WHERE value IS NULL;
END IF; 


Thanks and Regards
Yan Cheng CHEOK


--- On Thu, 1/21/10, Yan Cheng Cheok ycch...@yahoo.com wrote:

 From: Yan Cheng Cheok ycch...@yahoo.com
 Subject: Re: [GENERAL] Partitioning on Table with Foreign Key
 To: 

Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-26 Thread Yan Cheng Cheok
Hello all,

I solved my problem using the following. It seems that when a child table 
inherits from a parent table, the parent's constraints are not carried over to 
the child.

CREATE OR REPLACE FUNCTION measurement_insert_trigger()
RETURNS TRIGGER AS 
$BODY$DECLARE
measurement_table_index bigint;
measurement_table_name text;
BEGIN
-- 20 is just an example here right now. The true value will be 100,000,000
measurement_table_index = NEW.measurement_id % 20;
measurement_table_name = 'measurement_' || measurement_table_index;

-- Since measurement_id for parent table is already a bigserial
-- Do I still need to create index for child's measurement_id?

IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = 
measurement_table_name) THEN
EXECUTE 'CREATE TABLE ' || quote_ident(measurement_table_name) || '
(
  CONSTRAINT pk_measurement_id_' || measurement_table_index || ' 
PRIMARY KEY (measurement_id),
  CONSTRAINT fk_unit_id_' || measurement_table_index || ' FOREIGN KEY 
(fk_unit_id)
  REFERENCES unit (unit_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
) INHERITS (measurement);';
EXECUTE 'CREATE INDEX ' || quote_ident(measurement_table_name) || 
'_measurement_id ON ' || quote_ident(measurement_table_name) || 
'(measurement_id);';  
END IF;

EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name) || 
'(measurement_id, fk_unit_id, v) VALUES (' || NEW.measurement_id || ',' || 
NEW.fk_unit_id || ',' || quote_literal(NEW.v) || ')';
RETURN NULL;
END;$BODY$
LANGUAGE plpgsql;
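One possible refinement (assuming PostgreSQL 8.4 or later, where EXECUTE ...
USING is available in PL/pgSQL): pass the row values as parameters instead of
concatenating them into the SQL string, which sidesteps quoting problems
entirely:

```sql
EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name) ||
        ' (measurement_id, fk_unit_id, v) VALUES ($1, $2, $3)'
USING NEW.measurement_id, NEW.fk_unit_id, NEW.v;
```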


  




Re: [GENERAL] Problem with execution of an update rule

2010-01-26 Thread Ken Winter
Mark this one solved.  I finally stumbled across an old, forgotten e-mail
thread from 2006 where Tom Lane solved exactly this problem.  See
http://archives.postgresql.org/pgsql-general/2006-02/msg01039.php.

~ Thanks again, Tom!
~ Ken

 
 -Original Message-
 From: Ken Winter [mailto:k...@sunward.org] 
 Sent: Friday, January 22, 2010 7:00 PM
 To: 'PostgreSQL pg-general List'
 Subject: Problem with execution of an update rule
 
 I’m trying to implement a history-keeping scheme using PostgreSQL views
and update rules.  My problem is that one of the commands in one of my
crucial update rules apparently never executes.
 
 Briefly, the history-keeping scheme involves: 
 
 * Two tables: an h table that contains the columns for which we want to
preserve a full history of all updates, and an i table that contains
columns whose history we don't want to preserve.
 
 * A view of the two tables, showing all the columns of the h and I tables.

 
 * A set of rules that makes the view behave like a fully updatable table,
while invisibly preserving a copy of the record as it existed prior to each
update.
 
 The problem rule (see example in the PS below) is the one that fires
when the user issues a SQL UPDATE against the view.  This rule fires if the
UPDATE has changed any column value.  It is supposed to execute three
commands:
 
 1. Insert a new record into the _h table, containing the old values of the
record being updated.  This is the record that preserves the prior state of
the record.
 
 2. Update the existing h table record with the new values. 
 
 3. Update the existing i table record with the new values. 
 The problem is that command 1 apparently never executes.  That is, in
response to an UPDATE against the view, a new h table record is NOT inserted
- even though data changes in both the h and the i table are successfully
recorded, and no error messages occur.
 
 I have tried changing the order of the 3 commands in the rule - no effect.

 Can you tell me what's wrong with this picture? 
 
 ~ TIA 
 ~ Ken 
 
 PS: 
 This example involves a view named people, an h table named people_h
(including columns first_name and last_name), an i table named
people_i (including column birth_date), a sequence-assigned identifier
people_id in both tables, some effective and expiration timestamps in
people_h, and some rules including this troublesome one:
 
 CREATE OR REPLACE RULE on_update_2_preserve AS 
   ON UPDATE TO people 
   WHERE ( 
 (OLD.people_id <> NEW.people_id 
   OR (OLD.people_id IS NULL AND NEW.people_id IS NOT NULL) 
   OR (OLD.people_id IS NOT NULL AND NEW.people_id IS NULL )) 
 OR (OLD.effective_date_and_time <> NEW.effective_date_and_time 
   OR (OLD.effective_date_and_time IS NULL 
   AND NEW.effective_date_and_time IS NOT NULL) 
   OR (OLD.effective_date_and_time IS NOT NULL 
 AND NEW.effective_date_and_time IS NULL )) 
 OR (OLD.first_name <> NEW.first_name 
   OR (OLD.first_name IS NULL AND NEW.first_name IS NOT NULL) 
   OR (OLD.first_name IS NOT NULL AND NEW.first_name IS NULL )) 
 OR (OLD.last_name <> NEW.last_name 
   OR (OLD.last_name IS NULL AND NEW.last_name IS NOT NULL) 
   OR (OLD.last_name IS NOT NULL AND NEW.last_name IS NULL )) 
 OR (OLD._action <> NEW._action 
   OR (OLD._action IS NULL AND NEW._action IS NOT NULL) 
   OR (OLD._action IS NOT NULL AND NEW._action IS NULL )) 
 OR (OLD.birth_date <> NEW.birth_date 
   OR (OLD.birth_date IS NULL AND NEW.birth_date IS NOT NULL) 
   OR (OLD.birth_date IS NOT NULL AND NEW.birth_date IS NULL ))) 
 ) 
   DO 
 ( 
 /* Copy the old values to a new record. 
 Expire it either now (if no effective date 
 was provided) or whenever the update query specifies.*/ 
 INSERT INTO people_h ( 
   people_id, 
   first_name, 
   last_name, 
   effective_date_and_time, 
   expiration_date_and_time) 
 VALUES ( 
   OLD.people_id, 
   OLD.first_name, 
   OLD.last_name, 
   OLD.effective_date_and_time, 
   NEW.effective_date_and_time) 
 ; 
 /* Update the current H record and make it effective 
 as of either now (if no effective date 
 was provided) or whenever the update query specifies.*/ 
 UPDATE people_h 
   SET 
 people_id = NEW.people_id, 
 first_name = NEW.first_name, 
 last_name = NEW.last_name, 
 _action = 'preserved', 
   effective_date_and_time = 
 CASE 
   WHEN NEW.effective_date_and_time = OLD.effective_date_and_time 
    THEN CURRENT_TIMESTAMP 
   ELSE NEW.effective_date_and_time 
 END 
   WHERE 
 people_id = OLD.people_id 
 AND effective_date_and_time = OLD.effective_date_and_time 
 ; 
 /* Update I table. */ 
 UPDATE people_i 
   SET 
 people_id = NEW.people_id, 
 birth_date = NEW.birth_date 
   WHERE 
 people_id = OLD.people_id; 
 SELECT 
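As an aside (not part of the original post): each three-line change test in the
rule's WHERE clause can be collapsed with IS DISTINCT FROM, which treats two
NULLs as equal and a NULL versus a non-NULL value as different:

```sql
-- Equivalent to: OLD.x <> NEW.x
--   OR (OLD.x IS NULL AND NEW.x IS NOT NULL)
--   OR (OLD.x IS NOT NULL AND NEW.x IS NULL)
WHERE OLD.people_id  IS DISTINCT FROM NEW.people_id
   OR OLD.first_name IS DISTINCT FROM NEW.first_name
   OR OLD.last_name  IS DISTINCT FROM NEW.last_name
```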

Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Jayadevan M
Hi,
Regarding Pentaho - please keep in mind that Pentaho needs a significant 
amount of memory. We had a lot of issues with Pentaho crashing with Java 
out-of-memory errors. If you are using a 64-bit machine, you may be able to 
give it sufficient RAM and keep it happy. If all you have is one 4 GB 
machine to run PostgreSQL and the ETL tool, I have my doubts. It depends 
on the volume of data - how many GBs, rather than the number of records. 
Pentaho added a PostgreSQL bulk loader as an experimental component 
recently. You can try that out. Talend can generate Java or Perl 
components and was faster than Pentaho in our case. Since the Talend community 
edition did not provide a shared development environment, we opted for 
Pentaho.
If there aren't a lot of complex 'transformations', you should be able to 
manage fine with shell scripts.
Jayadevan




From:   Dino Vliet dino_vl...@yahoo.com
To: pgsql-general@postgresql.org
Date:   01/25/2010 09:57 PM
Subject:[GENERAL] general questions postgresql performance config
Sent by:pgsql-general-ow...@postgresql.org




Dear postgresql people,
 
Introduction
Today I've been given the task to proceed with my plan to use postgresql 
and other open source techniques to demonstrate to the management of my 
department the usefulness and the cost savings potential that lies 
ahead. You can guess how excited I am right now. However, I should plan 
and execute at the highest level because I really want to show results. 
I'm employed in the financial services.
 
Context of the problem
Given 25 million input records, transform and load 10 million records into a 
single-table DB2 database already containing 120 million records (the 
whole history).
 
The current process runs on the MVS mainframe, with the SAS system used to 
process the records (ETL-like operations). The records of the last two 
months (so 20 million records) are also stored in a single SAS 
dataset, where users can access them through SAS running on their Windows 
PCs. With SAS on their PCs they can also analyse the historical records in the 
DB2 table on the mainframe.
 
These users are not tech savvy so this access method is not very 
productive for them but because the data is highly valued, they use it 
without complaining too much.
 
Currently it takes 5 to 6 hours before everything is finished.
 
Proof of concept
I want to showcase that a solution process like:
 
input -> Talend/Pentaho Kettle for ETL -> PostgreSQL -> Pentaho Report 
Designer, is feasible while staying within the 5-6 hour processing and 
loading time.
 
Input: flat files, position-based
ETL: Pentaho Kettle or Talend to process these files
DBMS: PostgreSQL 8 (on Debian, openSUSE, or FreeBSD)
Reporting: Pentaho report wizard
 
Hardware
AMD AM2 singlecore CPU with 4GB RAM
Two mirrored SATA II disks (RAID 1)
 
Now that I have introduced my situation, I hope this list can give me some 
tips, advice, examples, pitfalls regarding the requirements I have.
 
Questions
1) Although this is not exactly rocket science, the sheer volume of the 
data makes it a hard task. Do you think my solution is 
viable/achievable?
 
2) What kind of OS would you choose for the setup I have proposed? I 
prefer FreeBSD with UFS2 as the filesystem, but I guess Debian with ext3, 
openSUSE with ext3, or Ubuntu Server with ext3 would all be very good 
candidates too?
 
3) Would you opt for the ETL tools I mentioned (Pentaho and Talend) or 
just rely on unix/linux tools like gawk, sed, and Perl? I'm familiar with 
gawk. The ETL tools require Java, so I would have to configure PostgreSQL 
not to use all the available RAM, otherwise risking Java out-of-memory 
errors. With that said, it would be best if I first configure my server 
to do the ETL processing and afterwards configure it for database 
usage. 
 
4) What values would you advise for the various postgresql.conf settings 
that can impact performance, like shared_buffers, temp_buffers, sort_mem, 
etc.? Or is this more of an art where I change a value, restart the 
db server, analyze the queries, and iterate until I find optimal values?
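
As an illustration only, commonly cited starting points for a dedicated 
4 GB machine in the 8.x era look roughly like the following (note that 
sort_mem was renamed work_mem in 8.0, and memory units like MB need 8.2 
or later); treat every number as a first guess to benchmark against your 
own load, not a recommendation:

```
shared_buffers = 512MB          # a fraction of RAM; leave headroom for the JVM
work_mem = 32MB                 # per sort/hash operation, so keep it modest
maintenance_work_mem = 256MB    # speeds up CREATE INDEX after bulk loads
checkpoint_segments = 32        # fewer checkpoints during the nightly load
effective_cache_size = 2GB      # what the OS is expected to cache
```

Loading with COPY instead of INSERTs, and creating indexes after the 
load rather than before, usually matters more than any single setting.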
 
5) Other considerations?
 
Thanks in advance,
 
Dino
 







[GENERAL] drop domain xx cascade

2010-01-26 Thread hx.li
hi,

I tested it as follows:

TEST=# select version();
                           version
--------------------------------------------------------------
 PostgreSQL 8.4.1, compiled by Visual C++ build 1400, 32-bit
(1 row)

TEST=# CREATE DOMAIN MY_DOMAIN AS DECIMAL (5, 2);
CREATE DOMAIN
TEST=# CREATE TABLE TT(tt_c MY_DOMAIN);
CREATE TABLE
TEST=# \d TT
     Table "public.tt"
 Column |   Type    | Modifiers
--------+-----------+-----------
 tt_c   | my_domain |


TEST=# CREATE VIEW V_TT AS SELECT * FROM TT;
CREATE VIEW
TEST=# \d V_TT
      View "public.v_tt"
 Column |   Type    | Modifiers
--------+-----------+-----------
 tt_c   | my_domain |
View definition:
 SELECT tt.tt_c
   FROM tt;


TEST=# DROP DOMAIN MY_DOMAIN CASCADE;
NOTICE:  drop cascades to 3 other objects
DETAIL:  drop cascades to table tt column tt_c
drop cascades to view v_tt
drop cascades to view v_tt column tt_c
DROP DOMAIN
TEST=# \d TT
    Table "public.tt"
 Column | Type | Modifiers
--------+------+-----------

-

But the SQL standard (SQL-92 and SQL:2008) says:
---
Let C be any column descriptor that includes DN, let T be the table 
described by the table descriptor that includes C, and let TN be the table 
name of T. C is modified as follows:
a) DN is removed from C. A copy of the data type descriptor of D is included 
in C.
b) If C does not include a default clause and the domain descriptor of D 
includes a default clause,then a copy of the default clause of D is 
included in C.

My question is: when DROP DOMAIN MY_DOMAIN CASCADE is run, shouldn't 
PostgreSQL follow the SQL standard and keep the column, converting it to 
the domain's base type?

So I expect the result (after dropping the domain) to be:
-
TEST=# \d TT
     Table "public.tt"
 Column |     Type     | Modifiers
--------+--------------+-----------
 tt_c   | numeric(5,2) |



Any help is appreciated!
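
Until PostgreSQL implements the standard behaviour, one way to get that 
result by hand is to detach the column from the domain before dropping 
it. A sketch continuing the session above (the view must go first, 
because ALTER TYPE refuses to change a column used by a view):

```sql
TEST=# DROP VIEW V_TT;
DROP VIEW
TEST=# ALTER TABLE TT ALTER COLUMN tt_c TYPE numeric(5,2);
ALTER TABLE
TEST=# DROP DOMAIN MY_DOMAIN;   -- no dependents left, CASCADE not needed
DROP DOMAIN
TEST=# \d TT
     Table "public.tt"
 Column |     Type     | Modifiers
--------+--------------+-----------
 tt_c   | numeric(5,2) |
```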
Hx.li





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dynamic crosstab

2010-01-26 Thread Pavel Stehule
2010/1/27 Pierre Chevalier pierre.chevalier1...@free.fr:
 Hello,

 Some time ago, it was written here:

 ...
 I think there should be a generic way in Postgres to return from an EAV
 model. Although I have no evidence on that I keep thinking that the db must
 be more effective at that than the application would be.
 ...

 Yes, thanks.
 The problem with those functions is that they all need an AS 
(column_name type, ...) part or equivalent.


 SWK, I fully understand your needs, and your (our) kind of frustration...

 I am in a similar situation, with an EAV table, and the need to do crosstab
 queries, without knowing in advance which columns the query should return,
 and how many columns.
 This is for chemical analysis results; laboratories store their results in
 an EAV way, and it is definitely a good choice, for a large number of
 reasons.

 On your side, have you found a decent solution?
 Has anyone got an answer?

 I am just about to try this one:
 http://www.ledscripts.com/tech/article/view/5.html

 But I don't like this style too much: the crosstab approach seems more
 natural to me. I may be totally wrong.

 So if there is a crosstab thing working in a generic way, that would be just
 super!

you cannot get a crosstab with a dynamic number of output columns via a 
plain SELECT statement. There is a workaround:

http://okbob.blogspot.com/2008/08/using-cursors-for-generating-cross.html
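
The idea, very roughly: build the column list at run time and hand the 
result back through a cursor, so no fixed AS (column type, ...) clause 
is ever written. A minimal sketch against a hypothetical 
eav(entity, attr, val) table (all names here are made up):

```sql
CREATE OR REPLACE FUNCTION eav_crosstab(OUT result refcursor) AS $$
DECLARE
    cols text;
BEGIN
    -- one output column per distinct attribute
    SELECT array_to_string(array_agg(DISTINCT
        'max(CASE WHEN attr = ' || quote_literal(attr)
        || ' THEN val END) AS ' || quote_ident(attr)), ', ')
      INTO cols
      FROM eav;
    result := 'result';                 -- fix the cursor (portal) name
    OPEN result FOR EXECUTE
        'SELECT entity, ' || cols || ' FROM eav GROUP BY entity';
END;
$$ LANGUAGE plpgsql;

-- usage, inside a transaction:
-- BEGIN; SELECT eav_crosstab(); FETCH ALL FROM result; COMMIT;
```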

Pavel


 A+
 Pierre




