[SQL] RULE and default nextval() column
Hello. I'm trying to implement history tables using rules. I have test_table:

create sequence history_seq start 1;
create sequence test_sequence;

-- source table
drop table test_table;
create table test_table (
    i integer default nextval('test_sequence'),
    c character(10)
);

-- history table
create table test_table_history (
    hist integer default nextval('history_seq'),
    i integer,
    c character(10)
);

-- rule to save history
create rule test_table_history_insert as on insert to test_table
do insert into test_table_history values (
    nextval('history_seq'), new.i, new.c
);

Then I try to insert into test_table:

test= insert into test_table values( nextval('test_sequence'), 'a');
INSERT 3299176 1
test= insert into test_table (c) values('a');
INSERT 3299178 1

test= select * from test_table;
 i | c
---+------------
 1 | a
 3 | a
(2 rows)

test= select * from test_table_history;
 hist | i | c
------+---+------------
    1 | 2 | a
    2 | 4 | a
(2 rows)

The problem is: the value of the i field increments during insertion into both
test_table and test_table_history. This also happens if I omit i in the insert
statement and let it be filled by default. Can something be done to avoid it?

Sincerely, Dmitry Ovechkin.

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] RULE and default nextval() column
On Wed, 27 Oct 2004, Dmitry P. Ovechkin wrote:

> The problem is: the value of the i field increments during insertion into
> both test_table and test_table_history. This also happens if I omit i in
> the insert statement and let it be filled by default. Can something be
> done to avoid it?

Pretty much the only real answer is to use a trigger rather than a rule.
Rules are basically like macro expansions and have many of the same
problems with side effects that macros do.
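[A hedged sketch of the trigger-based alternative, assuming the table
definitions from the quoted post; the function name
record_test_table_history is hypothetical. Because a row-level trigger sees
the already-computed NEW values, nextval('test_sequence') is evaluated only
once, by the column default on test_table. 7.4-era quoted function body:]

```sql
-- Hypothetical replacement for the rule: copy each inserted row into the
-- history table from inside a row-level AFTER INSERT trigger.
create function record_test_table_history() returns trigger as '
begin
    insert into test_table_history (hist, i, c)
    values (nextval(''history_seq''), new.i, new.c);
    return new;
end;
' language plpgsql;

create trigger test_table_history_insert
    after insert on test_table
    for each row execute procedure record_test_table_history();
```

[Unlike the rule, which re-expands the nextval() expression into the
history insert, the trigger only reads new.i, so both tables record the
same sequence value.]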
Re: [SQL] How do you compare (NULL) and (non-NULL)?
On Tue, Oct 26, 2004 at 16:23:20 -0400, Wei Weng [EMAIL PROTECTED] wrote:

> In the following query:
>
>     SELECT Parent FROM Channels ORDER BY Parent ASC;
>
> if I have a couple of NULLs in the field [Parent], they are listed at the
> bottom of the query result. Is it because PostgreSQL considers NULL the
> biggest value? If I run the same query under MS SQL Server 2000, I get the
> exact opposite result regarding the order of NULL and non-NULL values:
> they are listed at the very beginning of the query result.

If the order matters, you can order by IS NULL or IS NOT NULL.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
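[A minimal sketch of that technique, assuming the table and column names
from the question. Since false sorts before true, a boolean sort key placed
first controls where the NULLs land; the explicit NULLS FIRST/LAST clause
only appeared in later PostgreSQL releases:]

```sql
-- NULLs sorted last (the PostgreSQL default for ASC, made explicit):
SELECT Parent FROM Channels ORDER BY (Parent IS NULL), Parent ASC;

-- NULLs sorted first, matching the MS SQL Server 2000 behaviour:
SELECT Parent FROM Channels ORDER BY (Parent IS NOT NULL), Parent ASC;
```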
[SQL] How to recognize trigger-inserted rows?
I have a table with an INSERT/UPDATE/DELETE statement trigger. The trigger's
action is to insert jobs into a queue noting that the table has changed. A
number of other tables have FK relationships with this table, and they have
their own statement triggers that fire on DELETE.

When I delete a number of rows from the first table, the cascading deletes
into the other tables generate a rather large number of trigger fires, so I
end up with way too many rows in the queue table.

What I would like to do is, within the transaction doing the top-level
delete, examine the queue table for duplicate rows and remove those, since
they are extraneous. Ideally I would look for rows that have the same
transaction ID, but I'm having trouble determining what the current ID is.
I'm using Pg 7.4.

--
(Posted from an account used as a SPAM dump. If you really want to get in
touch with me, dump the 'jboes' and substitute 'mur'.)
Jeffery Boes [EMAIL PROTECTED]

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faqs/FAQ.html
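[No reply is archived here, but one hedged sketch of the deduplication
idea, assuming a hypothetical queue table job_queue(table_name, job_kind):
rather than locating the current transaction ID on 7.4, the trigger's
insert itself can skip rows that are already queued, since a statement in
the same transaction sees the earlier uncommitted inserts:]

```sql
-- Hypothetical dedup insert for the queue-filling trigger; the table and
-- the literal values 'orders'/'changed' are assumptions for illustration.
-- Enqueue only if an identical job row is not already present.
INSERT INTO job_queue (table_name, job_kind)
SELECT 'orders', 'changed'
WHERE NOT EXISTS (
    SELECT 1 FROM job_queue
    WHERE table_name = 'orders' AND job_kind = 'changed'
);
```

[This suppresses duplicates across the cascading trigger fires without
needing the transaction ID at all, at the cost of one extra lookup per
fire.]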