Re: [GENERAL] what's going on here?
Ben <[EMAIL PROTECTED]> writes:
> music=# explain ... where ... playlist.stream=1
>   ->  Seq Scan on playlist  (cost=0.00..300.81 rows=2321 width=20)
> music=# explain ... where ... playlist.stream=2
>   ->  Seq Scan on playlist  (cost=0.00..300.81 rows=205 width=20)

I am betting that 1 is the most common value in playlist.stream, or at least is being chosen as the most common value by VACUUM ANALYZE's not-totally-accurate estimation process. The 2321 rowcount estimate then falls out of the stored statistic for the most common value's frequency.

In the second case, the estimator knows that 2 is *not* the most common value, but it has absolutely no statistical basis on which to guess what the frequency really is. I think it uses 1/10th of the most-common frequency for anything that's not the most common value (look in utils/adt/selfuncs.c to be sure). There's probably also some contribution from the "playlist.played is null" clause, else the row count estimate would be exactly 1/10th as much. However, I don't believe that the thing currently makes any serious effort to gauge the selectivity of IS NULL, which is a shame because that would critically affect the results here. (You did say some thousands of rows matching the stream=N clause, but only a few matching IS NULL, right?)

Given the fundamental difference in this initial row count estimate, the large difference in the subsequent join plan structure is not too surprising. In short: another demonstration of the limitations of our current statistics about data frequencies.

BTW, you didn't actually say which plan was faster. Since the second one was closer to the true statistic (only a few rows returned from playlist), I'm hoping it was faster...

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
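[Editor's note: later PostgreSQL releases (7.2 onward) expose the per-column statistics Tom describes through the pg_stats view, which makes it easy to check what the planner believes about playlist.stream and the IS NULL fraction. A hedged sketch against that later view, not the 7.0-era catalogs this thread concerns:]

```sql
-- most_common_vals/most_common_freqs drive the rowcount estimate for
-- stream=1 vs stream=2; null_frac is the IS NULL selectivity that Tom
-- notes the old estimator ignored.  (In 7.0/7.1 the raw data lives
-- in pg_statistic instead.)
SELECT attname, null_frac, n_distinct,
       most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'playlist'
  AND attname IN ('stream', 'played');
```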
[GENERAL] COPY problem
I think this is a question regarding the backend, but... I'm in the process of splitting 1 large table (column-wise) into 6 smaller tables, and ran into a situation. I'm using PostgreSQL 7.1beta5, Pg as included, Perl 5.6, Solaris 2.6 on an Ultra 5.

The new setup is 6 tables: the 'main' table loc with id SERIAL, while the other 5 (u, b, v, r, i) are identical, with id INT4 REFERENCES loc. What I've done is copy the original table into a file, and am now attempting to copy from stdin, using Perl/Pg to break out the data into the 6 tables. I'm working with 2.5 million records, btw.

I've narrowed the situation down to copying to any one of the 5 referring tables (COPY u FROM stdin). The backend process which handles the db connection decides that it needs a whole lot of memory, although in a nice controlled manner. The backend starts out using 6.5Mb, and at 25000 records copied it has taken 10Mb and has slowed down substantially. Needless to say, this COPY will not finish before running out of memory (estimated 300Mb). When executing the COPY to the loc table, this problem does not occur.

Am I going to have to resort to inserts for the referring tables?

Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V  303.661.5379 F  888.912.4458 P
StorageTek
INFORMATION made POWERFUL
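[Editor's note: the usual suspect for this memory growth is the per-row foreign-key trigger machinery rather than COPY itself. A common workaround is to bulk-load without the constraint and attach it afterward in one pass; a hypothetical sketch for the u table, assuming ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY is available (it is in 7.0+), with the constraint name invented for illustration:]

```sql
-- Build the referring table bare, bulk-load, then add the
-- REFERENCES check once at the end instead of per row.
CREATE TABLE u (id INT4);

COPY u FROM stdin;
-- ... 2.5 million rows ...
\.

ALTER TABLE u ADD CONSTRAINT u_id_fkey
    FOREIGN KEY (id) REFERENCES loc (id);
```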
RE: [GENERAL] How to use locks, for DB novices?
SELECT FOR UPDATE should work. Did you use it in a transaction? For example:

Session 1:
CREATE TABLE accounts (amount float8);
INSERT INTO accounts VALUES (10);
BEGIN;
SELECT * FROM accounts FOR UPDATE;

Session 2:
BEGIN;
SELECT * FROM accounts FOR UPDATE;   <-- This should block

Session 1:
UPDATE accounts SET amount = 1;
END;

Session 2:
<-- This should now return '1'
END;

Hope that helps,

Mike Mascari
[EMAIL PROTECTED]

-----Original Message-----
From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]]
Sent: Friday, March 09, 2001 4:13 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] How to use locks, for DB novices?

I've never used the locking features of Postgres before, and now I find that I need to use them. I read through the instructions, but I've never taken a database theory course, so I don't understand all the terms they used. Here's what I need to do: the database keeps accounts. If one process is accessing an account, no other process should be able to access it at the same time. I need to do the following logical sequence of things:

1. Lock the account
2. Check to see if the change to the account is permitted
3. Make the change
4. Unlock the account

I need this so that there isn't a race condition. I.e., if there are $10 in the account, and one backend says "withdraw $9" and another also says "withdraw $9" at the same time, I need to make sure that they don't execute at the same time, which would result in a negative balance. I tried to do this with SELECT FOR UPDATE but I couldn't get that to work.

Thanks
[GENERAL] How to use locks, for DB novices?
I've never used the locking features of Postgres before, and now I find that I need to use them. I read through the instructions, but I've never taken a database theory course, so I don't understand all the terms they used. Here's what I need to do: the database keeps accounts. If one process is accessing an account, no other process should be able to access it at the same time. I need to do the following logical sequence of things:

1. Lock the account
2. Check to see if the change to the account is permitted
3. Make the change
4. Unlock the account

I need this so that there isn't a race condition. I.e., if there are $10 in the account, and one backend says "withdraw $9" and another also says "withdraw $9" at the same time, I need to make sure that they don't execute at the same time, which would result in a negative balance. I tried to do this with SELECT FOR UPDATE but I couldn't get that to work.

Thanks
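[Editor's note: the four steps above map onto a single transaction with SELECT ... FOR UPDATE, as the reply in this thread shows. A hedged sketch using an illustrative accounts(id, balance) table, not the poster's actual schema:]

```sql
BEGIN;
-- 1. Lock the account row; a concurrent FOR UPDATE or UPDATE on the
--    same row blocks until this transaction ends.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
-- 2. The application checks the returned balance (>= 9 here), then:
-- 3. Make the change.
UPDATE accounts SET balance = balance - 9 WHERE id = 1;
-- 4. The lock is released automatically at COMMIT (or ROLLBACK).
COMMIT;
```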
Re: [GENERAL] what's going on here?
I think that's because the stream=1 query is doing a sequential scan on files, so it has to look at everything, whereas the index scan in the stream=2 query only looks at the important rows (which in this case was just 1 row). So I think the question is: why is one using an index scan and the other a sequential scan?

On Fri, 9 Mar 2001, Stephan Szabo wrote:

> Hmm, I also notice that it's getting very different numbers for
> rows from files as well.
>
> On Fri, 9 Mar 2001, Ben wrote:
>
> > Every night. There are 6223 rows with stream=2 and 7041 rows with
> > stream=1. At any given time, there will be between 1 to 30 rows with
> > played=null for both values.
> >
> > On Fri, 9 Mar 2001, Stephan Szabo wrote:
> >
> > > Has vacuum analyze been run on both recently? What is the maximum number
> > > of rows with a particular stream value, and how many does each of 1 and 2
> > > actually have?
> > >
> > > > Interestingly, the sequential scan on playlist claims to be returning 2000
> > > > results for stream=1 and only 200 for stream=2. I'm not sure which part of
> > > > the where clause this guess comes from, because the playlist table has
> > > > equal numbers of entries for both streams.
Re: [GENERAL] what's going on here?
Hmm, I also notice that it's getting very different numbers for rows from files as well.

On Fri, 9 Mar 2001, Ben wrote:

> Every night. There are 6223 rows with stream=2 and 7041 rows with
> stream=1. At any given time, there will be between 1 to 30 rows with
> played=null for both values.
>
> On Fri, 9 Mar 2001, Stephan Szabo wrote:
>
> > Has vacuum analyze been run on both recently? What is the maximum number
> > of rows with a particular stream value, and how many does each of 1 and 2
> > actually have?
> >
> > > Interestingly, the sequential scan on playlist claims to be returning 2000
> > > results for stream=1 and only 200 for stream=2. I'm not sure which part of
> > > the where clause this guess comes from, because the playlist table has
> > > equal numbers of entries for both streams.
[GENERAL] Blowfish, DESC and etc.
Hi guys,

I've put encode/decode functions (pg_bf-1.0.tar.gz) for Postgres on my web-site for those who might need them: http://boulat.net/pub

Regards,
Boulat Khakimov
--
Nothing Like the Sun
Re: [GENERAL] what's going on here?
Every night. There are 6223 rows with stream=2 and 7041 rows with stream=1. At any given time, there will be between 1 and 30 rows with played=null for both values.

On Fri, 9 Mar 2001, Stephan Szabo wrote:

> Has vacuum analyze been run on both recently? What is the maximum number
> of rows with a particular stream value, and how many does each of 1 and 2
> actually have?
>
> > Interestingly, the sequential scan on playlist claims to be returning 2000
> > results for stream=1 and only 200 for stream=2. I'm not sure which part of
> > the where clause this guess comes from, because the playlist table has
> > equal numbers of entries for both streams.
Re: [GENERAL] what's going on here?
Has vacuum analyze been run on both recently? What is the maximum number of rows with a particular stream value, and how many does each of 1 and 2 actually have?

> Interestingly, the sequential scan on playlist claims to be returning 2000
> results for stream=1 and only 200 for stream=2. I'm not sure which part of
> the where clause this guess comes from, because the playlist table has
> equal numbers of entries for both streams.
[GENERAL] what's going on here?
We have two postgres-driven web pages, and we've noticed in the last few days that one is loading about 10 times faster than the other. Looking into it, we found that postgres has very different execution paths for what is essentially the same query, and this makes no sense to me. Sorry for the deluge of info here, but maybe it'll explain why this is happening.

First, here are the explains. Note the only parameter that changes is playlist.stream, and yet for some reason the second query makes use of indices which are unrelated to playlist.stream.

music=# explain select users.name as username, playlist.id as id,
  paths.len as len, paths.path as path,
  timestamp(playlist.added) as added, timestamp(playlist.played) as played
  from users, playlist, paths
  where paths.md5=files.md5 and files.song=playlist.songid
  and playlist.userid=users.id and playlist.played is null
  and playlist.stream=1
  order by playlist.added, playlist.id;
NOTICE: QUERY PLAN:

Sort  (cost=10845.28..10845.28 rows=1857 width=80)
  ->  Hash Join  (cost=10007.93..10744.44 rows=1857 width=80)
        ->  Merge Join  (cost=10006.44..10668.17 rows=1857 width=64)
              ->  Sort  (cost=3754.03..3754.03 rows=2321 width=36)
                    ->  Hash Join  (cost=306.62..3624.28 rows=2321 width=36)
                          ->  Seq Scan on files  (cost=0.00..1072.13 rows=55413 width=16)
                          ->  Hash  (cost=300.81..300.81 rows=2321 width=20)
                                ->  Seq Scan on playlist  (cost=0.00..300.81 rows=2321 width=20)
              ->  Sort  (cost=6252.41..6252.41 rows=50617 width=28)
                    ->  Seq Scan on paths  (cost=0.00..1677.17 rows=50617 width=28)
        ->  Hash  (cost=1.39..1.39 rows=39 width=16)
              ->  Seq Scan on users  (cost=0.00..1.39 rows=39 width=16)

EXPLAIN

music=# explain select users.name as username, playlist.id as id,
  paths.len as len, paths.path as path,
  timestamp(playlist.added) as added, timestamp(playlist.played) as played
  from users, playlist, paths
  where paths.md5=files.md5 and files.song=playlist.songid
  and playlist.userid=users.id and playlist.played is null
  and playlist.stream=2
  order by playlist.added, playlist.id;
NOTICE: QUERY PLAN:

Sort  (cost=1467.44..1467.44 rows=164 width=80)
  ->  Hash Join  (cost=1.49..1461.40 rows=164 width=80)
        ->  Nested Loop  (cost=0.00..1452.85 rows=164 width=64)
              ->  Nested Loop  (cost=0.00..771.76 rows=205 width=36)
                    ->  Seq Scan on playlist  (cost=0.00..300.81 rows=205 width=20)
                    ->  Index Scan using files_song_btree_key on files  (cost=0.00..2.28 rows=1 width=16)
              ->  Index Scan using paths_md5_btree_key on paths  (cost=0.00..3.31 rows=1 width=28)
        ->  Hash  (cost=1.39..1.39 rows=39 width=16)
              ->  Seq Scan on users  (cost=0.00..1.39 rows=39 width=16)

EXPLAIN

Okay, so now the relevant schema:

Table "users"
 Attribute |  Type   | Modifier
-----------+---------+------------------------------------------------
 id        | integer | not null default nextval('users_id_seq'::text)
 name      | text    | not null
Indices: users_name_key, users_pkey

Index "users_name_key"
 Attribute | Type
-----------+------
 name      | text
unique btree

Index "users_pkey"
 Attribute | Type
-----------+---------
 id        | integer
unique btree (primary key)

Table "playlist"
 Attribute |  Type   | Modifier
-----------+---------+---------------------------------------------------
 id        | integer | not null default nextval('playlist_id_seq'::text)
 songid    | integer |
 userid    | integer |
 added     | integer |
 played    | integer |
 stream    | integer |
Indices: playlist_added_key, playlist_pkey, playlist_played_key, playlist_songid_key

Index "playlist_added_key"
 Attribute | Type
-----------+---------
 added     | integer
btree

Index "playlist_pkey"
 Attribute | Type
-----------+---------
 id        | integer
unique btree (primary key)

Index "playlist_played_key"
 Attribute | Type
-----------+---------
 played    | integer
btree

Index "playlist_songid_key"
 Attribute | Type
-----------+---------
 songid    | integer
btree

Table "paths"
 Attribute |   Type   | Modifier
-----------+----------+----------
 md5       | char(32) | not null
 path      | text     | not null
 len       | integer  |
Indices: paths_md5_btree_key, paths_md5_key, paths_path_key

Index "paths_md5_btree_key"
 Attribute |   Type
-----------+----------
 md5       | char(32)
btree

Index "paths_md5_key"
 Attribute |   Type
-----------+----------
 md5       | char(32)
hash

Index "paths_path_key"
 Attribute | Type
-----------+------
 path      | text
btree

Table "files"
 Attribute | Type | Modifier
-----------+------+----------
 song
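[Editor's note: since only a handful of playlist rows ever have played IS NULL, one option worth noting is a partial index covering exactly those rows, so the planner need not guess the IS NULL selectivity from whole-table statistics. Partial indexes were not usable in the 7.0/7.1 era this thread targets; they became supported in later releases. Index name is invented for illustration:]

```sql
-- Covers only unplayed entries, matching the query's predicate
-- "playlist.played is null and playlist.stream=N".
CREATE INDEX playlist_unplayed_key ON playlist (stream)
    WHERE played IS NULL;
```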
Re: [GENERAL] functions for triggers: passing parameters
On Fri, 9 Mar 2001, Nico wrote:

> What is wrong?
>
> CREATE FUNCTION set_value(text) RETURNS OPAQUE AS '
> DECLARE
>     val ALIAS FOR $1;
> BEGIN
>     NEW.inf := val;
>     RETURN NEW;
> END;
> ' LANGUAGE 'plpgsql';
>
> CREATE TABLE t1 (
>     id serial,
>     info text
> );
>
> CREATE TRIGGER t1_set_val BEFORE INSERT ON t1
>     FOR EACH ROW EXECUTE PROCEDURE set_value('some info');
>
> psql 7.1beta5 result:
>
> CREATE
> psql:function_for_trigger.sql:15: NOTICE: CREATE TABLE will create implicit
> sequence 't1_id_seq' for SERIAL column 't1.id'
> psql:function_for_trigger.sql:15: NOTICE: CREATE TABLE/UNIQUE will create
> implicit index 't1_id_key' for table 't1'
> CREATE
> psql:function_for_trigger.sql:17: ERROR: CreateTrigger: function set_value()
> does not exist
> test=#
>
> (the language plpgsql is already loaded)

Right, because triggers take arguments differently. Trigger functions must return opaque and take no declared arguments. Arguments passed at CREATE TRIGGER time come in via TG_ARGV[] (the count is in TG_NARGS, I believe).
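[Editor's note: a corrected version of Nico's function along the lines Stephan describes reads the value from TG_ARGV inside the body. Note also that the original body assigned NEW.inf while the column is named info. A sketch, not tested against 7.1beta5:]

```sql
CREATE FUNCTION set_value() RETURNS OPAQUE AS '
BEGIN
    -- Arguments given at CREATE TRIGGER time arrive here,
    -- not as declared function parameters.
    NEW.info := TG_ARGV[0];
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER t1_set_val BEFORE INSERT ON t1
    FOR EACH ROW EXECUTE PROCEDURE set_value('some info');
```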
Re: [GENERAL] inheritance and foreign key
Yes, it's currently not implemented (see the other currently-running thread in pgsql-general about this topic).

On Fri, 9 Mar 2001 [EMAIL PROTECTED] wrote:

> Hello,
>
> We have the following schema:
>
> create table A (id_A integer primary key, b_A integer);
> create table B (a_B integer) inherits (A);
> create table C (id_C integer primary key, b_C integer);
> create table D (id1 integer references A (id_A),
>                 id2 integer references C (id_C),
>                 primary key (id1, id2));
>
> The problem occurs when we want to insert a tuple into D
> that references one of B's tuples. Our idea was:
>
> create table D (id1 integer references A* (id_A),
>                 id2 integer references C (id_C),
>                 primary key (id1, id2));
>
> which is not allowed by postgres.
>
> Has anyone run into this problem?
Re: [GENERAL] inheritance and primary/foreign keys
On Fri, 9 Mar 2001, Einar Karttunen wrote:

> On Wed, 7 Mar 2001, Stephan Szabo wrote:
>
> > You cannot safely reference tops of inheritance trees under 7.0 or 7.1 and
> > have it reference the trees.
>
> Is there any way to emulate this? I want to have several types of persons

Someone's been trying to get a workaround. Hopefully if they get it done, they'll post it to the list. However, it's still fairly ugly.
Re: [GENERAL] functions for triggers: passing parameters
Nico <[EMAIL PROTECTED]> writes:

> What is wrong? [...]
> CREATE TRIGGER t1_set_val BEFORE INSERT ON t1
>     FOR EACH ROW EXECUTE PROCEDURE set_value('some info');

Trigger functions can't take declared arguments.

-Doug
Re: [GENERAL] Foreign Key
On Fri, Mar 09, 2001 at 03:43:55PM -0300, Marcelo Pereira wrote:

> Hi,
>
> I'm a new user and I would like to know if I can create my table using
> foreign keys.

Sure. Here's an example:

test=> CREATE TABLE statename ( code CHAR(2) PRIMARY KEY,
test(>                          name CHAR(30)
test(> );
CREATE
test=> INSERT INTO statename VALUES ('AL', 'Alabama');
INSERT 18934 1
test=> CREATE TABLE customer (
test(>     customer_id INTEGER,
test(>     name CHAR(30),
test(>     telephone CHAR(20),
test(>     street CHAR(40),
test(>     city CHAR(25),
test(>     state CHAR(2) REFERENCES statename,
test(>     zipcode CHAR(10),
test(>     country CHAR(20)
test(> );
CREATE

(from Bruce Momjian's excellent book on PostgreSQL)

Regards, Frank
Re: [GENERAL] Postgresql.org website search
That search feature regularly breaks. That's about all I can say.

> I apologise if this has been posted/asked already, but since it has been
> about a week and it hasn't been "fixed" I needed to ask. Every time I
> issue a search at http://www.postgresql.org/search.mpl even on the
> easiest of search items, such as "postgres" I get a result of this, and
> only this:
>
> Search results: postgres : 33688
>
> Displaying documents 0-0 of total 33688 found. (0.00 secs)
>
> Now it still seems to take its sweet time to search so it seems like the
> search is working, just no results are actually getting displayed.
>
> Has anyone else noticed this? Does it just not like me?
>
> Thanks.

--
Bruce Momjian                   | http://candle.pha.pa.us
[EMAIL PROTECTED]               | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup.    | Drexel Hill, Pennsylvania 19026
[GENERAL] Foreign Key
Hi,

I'm a new user and I would like to know if I can create my table using foreign keys.

Thanks

Marcelo Pereira
Computer Programmer
[GENERAL] functions for triggers: passing parameters
What is wrong?

CREATE FUNCTION set_value(text) RETURNS OPAQUE AS '
DECLARE
    val ALIAS FOR $1;
BEGIN
    NEW.inf := val;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TABLE t1 (
    id serial,
    info text
);

CREATE TRIGGER t1_set_val BEFORE INSERT ON t1
    FOR EACH ROW EXECUTE PROCEDURE set_value('some info');

psql 7.1beta5 result:

CREATE
psql:function_for_trigger.sql:15: NOTICE: CREATE TABLE will create implicit sequence 't1_id_seq' for SERIAL column 't1.id'
psql:function_for_trigger.sql:15: NOTICE: CREATE TABLE/UNIQUE will create implicit index 't1_id_key' for table 't1'
CREATE
psql:function_for_trigger.sql:17: ERROR: CreateTrigger: function set_value() does not exist
test=#

(the language plpgsql is already loaded)
[GENERAL] Postgresql.org website search
I apologise if this has been posted/asked already, but since it has been about a week and it hasn't been "fixed" I needed to ask. Every time I issue a search at http://www.postgresql.org/search.mpl even on the easiest of search items, such as "postgres", I get a result of this, and only this:

Search results: postgres : 33688

Displaying documents 0-0 of total 33688 found. (0.00 secs)

Now it still seems to take its sweet time to search, so it seems like the search is working, just no results are actually getting displayed.

Has anyone else noticed this? Does it just not like me?

Thanks.
Re: [GENERAL] Re: pgsql for Python
> What's the difference between PoPy and the PygreSQL driver included in
> the PostgreSQL distribution? Just curious.

Several things, actually. PyGreSQL is a C module plus two Python script files, while PoPy is built entirely as a C module, which should mean more speed. PoPy is Python DB API 2.0 compliant (http://www.python.org/topics/database/DatabaseAPI-2.0.html), which is the equivalent of JDBC for Java and DBI for Perl; PyGreSQL is not compliant. PoPy is threadsafe, while PyGreSQL is not.

Brent

__
Do You Yahoo!?
Get email at your own domain with Yahoo! Mail. http://personal.mail.yahoo.com/
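[Editor's note: DB API 2.0 means every compliant driver exposes the same connect/cursor/execute/fetch surface. A sketch of that shape, shown here with the stdlib sqlite3 module (also DB API 2.0) since it runs without a server; with PoPy you would write PoPy.connect("dbname=...") instead, and the parameter placeholder style differs per driver:]

```python
import sqlite3  # stdlib module that implements DB API 2.0

# The table and values below are illustrative only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE playlist (id INTEGER, stream INTEGER)")
cur.executemany("INSERT INTO playlist VALUES (?, ?)",
                [(1, 1), (2, 2), (3, 2)])
conn.commit()
# sqlite3 uses '?' placeholders; other drivers use '%s' (pyformat),
# but cursor/execute/fetchall are part of the common API.
cur.execute("SELECT id FROM playlist WHERE stream = ?", (2,))
rows = cur.fetchall()
print(rows)  # [(2,), (3,)]
conn.close()
```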
Re: [GENERAL] Re: pgsql for Python
"Brent R. Matzelle" wrote:

> > I use PoPy and think it's dandy. I also thought it *was* the DB API 2.0--
> > am I mistaken here?
>
> I noticed a couple posts asking for PoPy RPMs so I created some and posted
> them below if anyone is interested.

What's the difference between PoPy and the PygreSQL driver included in the PostgreSQL distribution? Just curious.

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
Re: [GENERAL] table name case sensitivity
Daniel J. Kressin writes:

> I know that you can make table names case-sensitive by putting them in
> quotes. Is there a way (compile-time option, anything?) to make them
> case-sensitive by default (i.e. without quotes)?

No. You need to write to the SQL standards committee if you want to change this.

--
Peter Eisentraut   [EMAIL PROTECTED]   http://yi.org/peter-e/
Re: [GENERAL] table name case sensitivity
Peter Eisentraut wrote:

> Daniel J. Kressin writes:
>
> > I know that you can make table names case-sensitive by putting them
> > in quotes. Is there a way (compile-time option, anything?) to make
> > them case-sensitive by default (i.e. without quotes)?
>
> No. You need to write to the SQL standards committee if you want to
> change this.

Fair enough, thanks. This was actually asked on behalf of a friend, so no skin off my back. :)

--
Dan Kressin | Unix SysAdmin | Global Crossing
[EMAIL PROTECTED] | http://www.vib.org/wrench/
[GENERAL] inheritance and foreign key
Hello,

We have the following schema:

create table A (id_A integer primary key, b_A integer);
create table B (a_B integer) inherits (A);
create table C (id_C integer primary key, b_C integer);
create table D (id1 integer references A (id_A),
                id2 integer references C (id_C),
                primary key (id1, id2));

The problem occurs when we want to insert a tuple into D that references one of B's tuples. Our idea was:

create table D (id1 integer references A* (id_A),
                id2 integer references C (id_C),
                primary key (id1, id2));

which is not allowed by postgres.

Has anyone run into this problem?

Thanks
Re: [GENERAL] Re: pgsql for Python
> > I use PoPy and think it's dandy. I also thought it *was* the DB API 2.0--
> > am I mistaken here?

I noticed a couple of posts asking for PoPy RPMs, so I created some and posted them here if anyone is interested:

http://www.linuxshare.com/popy/

Brent
[GENERAL] calling a fuction within a function
Hi,

I have an application that receives data with attributes concatenated as a string separated by a '/'. I use Perl code to split it into separate attributes and store it from one table into another (Solid database). What I would like to do is:

- Receive the data in a table that stores the concatenated string
- Have a trigger on that table that splits the string into pieces using plperl (some tricky regexps are involved, so Perl would be nice)
- Have this function insert the new record into another table with the attributes separated

QUESTION: Can I call a function from the perl-based trigger to store the new record?

Thanks,
Feite Brekeveld
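[Editor's note: in PostgreSQL this became possible once PL/Perl gained trigger support and database access (the $_TD trigger hash and spi_exec_query arrived around the 8.0 era, well after this post). A hedged sketch with made-up table and column names:]

```sql
CREATE FUNCTION split_and_store() RETURNS trigger AS $$
    # $_TD->{new} holds the row being inserted
    my @parts = split m{/}, $_TD->{new}{rawdata};
    # Naive quoting, for illustration only
    spi_exec_query(sprintf(
        "INSERT INTO parsed (a, b, c) VALUES ('%s', '%s', '%s')",
        @parts[0..2]));
    return;  # proceed with the original row unchanged
$$ LANGUAGE plperl;
```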