Re: [SQL] Rule causes baffling error

2005-12-19 Thread Richard Huxton

Ken Winter wrote:

> This rule is supposed to (1) cause an update directed to the view
> "my_data_now" to be made to the underlying table "my_data", (2) reset the
> "effective_date_and_time" of that row to 'now', (3) insert a record
> containing the old values into "my_data", and (4) expire that "old" record
> by setting its "expiration_date_and_time" to 'now'.


I think you want a trigger rather than a rule.

Rules rewrite the query structure; triggers let you deal with values on 
a row-by-row basis (for row-level triggers).
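
For illustration, a minimal sketch of the trigger approach (untested, and
assuming the "my_data" columns from your original message):

CREATE OR REPLACE FUNCTION my_data_keep_history() RETURNS trigger AS $$
BEGIN
    /* Save a copy of the old row, expired as of now. */
    INSERT INTO my_data (effective_date_and_time, expiration_date_and_time,
                         id, user_name)
    VALUES (OLD.effective_date_and_time, now(), OLD.id, OLD.user_name);
    /* Make the updated row effective now. */
    NEW.effective_date_and_time := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_data_keep_history BEFORE UPDATE ON my_data
    FOR EACH ROW EXECUTE PROCEDURE my_data_keep_history();

Note this fires on the base table, not the view, so anything that updates
my_data directly would get the history behaviour too.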


--
  Richard Huxton
  Archonet Ltd



[SQL] How to Force Transactions to Process Serially on A Table

2005-12-19 Thread Lane Van Ingen
I am using PL/pgSQL functions on Windows 2003, PostgreSQL version 8.0.1.

I have not used explicit PostgreSQL LOCKing before, but I need some advice
on how to use explicit locking properly. I think the duplicate key
violations I am now getting are the result of not doing so.

I want to force transactions that update a table to be processed on a
first-come, first-served basis. I want my PL/pgSQL function to execute to
completion on each transaction before another starts.

I need some advice on how to do this. From what I can read in the docs, it
looks like I need to solve the problem with something like the following,
but doing so gives me an SPI_execution error:
  BEGIN;
  LOCK <table> IN SHARE ROW EXCLUSIVE MODE;
  lock adns_report_hour_history in share row exclusive mode;
  INSERT INTO <table> VALUES ...  - or -  UPDATE <table> SET ...
  COMMIT;
Will this make the next transaction wait until the previous transaction has
completed? Do I need to set any config parameters?

If you can include an actual code snippet in the response, it would help ...





Re: [SQL] Does VACUUM reorder tables on clustered indices

2005-12-19 Thread Andrew Sullivan
On Sun, Dec 18, 2005 at 10:08:22PM -0500, Tom Lane wrote:
> 
> Just for the record, that behavior is seriously broken: it violates
> MVCC if any of the deleted tuples are still visible to anyone else.

Does it remove tuples that VACUUM FULL wouldn't?  I always thought it
did essentially the same thing.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin



Re: [SQL] How to Force Transactions to Process Serially on A Table

2005-12-19 Thread Achilleus Mantzios
Lane Van Ingen wrote on Dec 19, 2005:

> I am using PL/pgSQL functions on Windows 2003, PostgreSQL version 8.0.1.
> 
> I have not used explicit PostgreSQL LOCKing before, but I need some advice
> on how to use explicit locking properly. I think the duplicate key
> violations I am now getting are the result of not doing so.
> 
> I want to force transactions that update a table to be processed on a
> first-come, first-served basis. I want my PL/pgSQL function to execute to
> completion on each transaction before another starts.
> 
> I need some advice on how to do this. From what I can read in the docs, it
> looks like I need to solve the problem with something like the following,
> but doing so gives me an SPI_execution error:
>   BEGIN;
>   LOCK <table> IN SHARE ROW EXCLUSIVE MODE;
>   lock adns_report_hour_history in share row exclusive mode;
>   INSERT INTO <table> VALUES ...  - or -  UPDATE <table> SET ...
>   COMMIT;
> Will this make the next transaction wait until the previous transaction has
> completed? Do I need to set any config parameters?
> 
> If you can include an actual code snippet in the response, it would help ...

What you want is to set the transaction isolation level:

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
...
COMMIT;

Be prepared for serialization failures though.


-- 
-Achilleus




Re: [SQL] How to Force Transactions to Process Serially on A Table

2005-12-19 Thread Lane Van Ingen
Thanks, that helped.

Please answer 2 other related questions, if you would:
(1) What must I do to 'be prepared for serialization failures' (how to
detect them, how to handle them)? Do you have a sample?
(2) Also, I am assuming that the effect of all of this is just to force
transactions to wait in line to be processed serially, and that it only
lasts as long as the PL/pgSQL transaction block or the next COMMIT.



Re: [SQL] How to Force Transactions to Process Serially on A Table

2005-12-19 Thread Achilleus Mantzios
Lane Van Ingen wrote on Dec 19, 2005:

> Thanks, that helped.
> 
> Please answer 2 other related questions, if you would:
> (1) What must I do to 'be prepared for serialization failures' (how to
> detect them, how to handle them)? Do you have a sample?

Look at
http://www.postgresql.org/docs/7.4/interactive/transaction-iso.html

> (2) Also, I am assuming that the effect of all of this is just to force
> transactions to wait in line to be processed serially, and that it only
> lasts as long as the PL/pgSQL transaction block or the next COMMIT.
> 

Then transaction isolation level SERIALIZABLE is not for this task.

What you would do instead is use the
SELECT ... FOR UPDATE
construct.

Normally you would not care about locking the whole table,
but only the rows of interest.
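
For example (a sketch; "hour_id" is a made-up key column, adjust to your
schema):

BEGIN;
-- Blocks here until any other transaction holding this row finishes:
SELECT * FROM adns_report_hour_history WHERE hour_id = 42 FOR UPDATE;
UPDATE adns_report_hour_history SET ... WHERE hour_id = 42;
COMMIT;

Concurrent transactions touching the same rows queue up on the FOR UPDATE
lock, which gives you first-come, first-served behavior on those rows
without locking the whole table.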


-- 
-Achilleus




Re: [SQL] How to Force Transactions to Process Serially on A Table

2005-12-19 Thread Andrew Sullivan
On Mon, Dec 19, 2005 at 08:46:39AM -0500, Lane Van Ingen wrote:
> Thanks, that helped.
> 
> Please answer 2 other related questions, if you would:
> (1) What must I do to 'be prepared for serialization failures' (how to
> detect them, how to handle them)? Do you have a sample?

You'll get an error.  You should read this section of the docs:

http://www.postgresql.org/docs/8.1/interactive/transaction-iso.html#XACT-SERIALIZABLE

> (2) Also, I am assuming that the effect of all of this is just to force
> transactions to wait in line to be processed serially, and that it only
> lasts as long as the PL/pgSQL transaction block or the next COMMIT.

No.  The effect is to _emulate_ the case where the set of transactions
is processed serially.  Importantly, on a high-concurrency database, you
tend to get serialization failures.  Moreover, it is not true
mathematical serializability.  See section 12.2.2.1 of the docs for
details; if you need such a guarantee, you're back to explicit
locking.
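
To make "be prepared" concrete, a sketch of the pattern (the retry loop
itself has to live in your client code; error text per the 8.x docs):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE adns_report_hour_history SET ... ;
COMMIT;
-- If a concurrent transaction modified the same rows first, the UPDATE
-- fails with SQLSTATE 40001, "could not serialize access due to
-- concurrent update".  The client must then ROLLBACK and re-run the
-- whole transaction, typically in a loop with a retry limit.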

A


-- 
Andrew Sullivan  | [EMAIL PROTECTED]
When my information changes, I alter my conclusions.  What do you do sir?
--attr. John Maynard Keynes



Re: [SQL] Does VACUUM reorder tables on clustered indices

2005-12-19 Thread Tom Lane
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Sun, Dec 18, 2005 at 10:08:22PM -0500, Tom Lane wrote:
>> Just for the record, that behavior is seriously broken: it violates
>> MVCC if any of the deleted tuples are still visible to anyone else.

> Does it remove tuples that VACUUM FULL wouldn't?

Yes.  CLUSTER works on SnapshotNow, so it will remove committed-dead
tuples even if there are still open transactions that could see them.
Of course, said transactions couldn't be actively using the table
while the CLUSTER runs, because it takes an exclusive table lock.
But they *could* look at it afterwards.  Offhand I think you'd only
be likely to notice the difference if the open transactions were
SERIALIZABLE --- in READ COMMITTED mode, by the time they could look
at the clustered table, they'd likely be using a snapshot that postdates
the DELETE.

[ experiments a bit... ]  Hmm.  Actually, it's far worse than I
thought.  It looks like CLUSTER puts the tuples into the new table with
its own xid, which means that concurrent serializable transactions will
see the new table as completely empty!

<< session 1 >>

regression=# select * from int4_tbl;
     f1
-------------
           0
      123456
     -123456
  2147483647
 -2147483647
(5 rows)

regression=# create index fooi on int4_tbl(f1);
CREATE INDEX
regression=# begin isolation level serializable;
BEGIN
regression=# select 2+2; -- establish transaction snapshot
 ?column?
----------
        4
(1 row)

<< session 2 >>

regression=# delete from int4_tbl where f1 = -123456;
DELETE 1
regression=# cluster fooi on int4_tbl;
CLUSTER

<< back to session 1 >>

regression=# select * from int4_tbl;
 f1
----
(0 rows)

regression=# commit;
COMMIT
regression=# select * from int4_tbl;
     f1
-------------
 -2147483647
           0
      123456
  2147483647
(4 rows)


regards, tom lane



Re: [SQL] Rule causes baffling error

2005-12-19 Thread Ken Winter
Richard ~

Thanks for your response.

Can a trigger be written on a *view*?  I can't find anything in the
PostgreSQL docs that answers this question.

I originally wrote these actions (described in my original message) as a
trigger on my base table, but then realized I was getting in deeper and
deeper trouble because (a) I was getting into cascading triggers that I
didn't want and (b) I need to enable some queries to access the base table
without triggering these actions.  That's why I set up the view, and then I
assumed that the only way I could implement these actions was as rules.  

~ Ken




[SQL] Question on indexes

2005-12-19 Thread Emil Rachovsky

Hi,
Can anyone show me a simple way of creating an index
in PostgreSQL like this:
create index indName on someTable(someIntColumn DESC)
?

Thanks In Advance,
Emil 




[SQL] how to convert relational column to array?

2005-12-19 Thread george young
[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
I'm trying to convert a column from a traditional relational form to an array:

create table old_tab(name text, id int, permits text);

newschm3=# select * from old_tab order by name;
  name   | id  |  permits
---------+-----+------------
 baker   | 581 | operator
 lawless | 509 | operator
 lawless | 509 | originator
 lcalvet | 622 | originator
 loomis  | 514 | operator
 loomis  | 514 | originator
 pig     | 614 | operator
 pig     | 614 | originator
 pig     | 614 | supervisor

create table new_tab(name text, id int, permits text[]);

-- I insert one row per name:
insert into new_tab select distinct name,id,cast('{}' as text[]) from old_tab;

Now I want to fold all the 'permits' values into the new permits arrays.
I can do:

update new_tab set permits=new_tab.permits||ot.permits from old_tab ot where 
ot.name=new_tab.name and ot.permits!=all(new_tab.permits);

but this only gets one permits value per name.  Repeating this many times would
eventually get all of them, but it seems there must be a more reliable way?
[I don't care about the *order* of permits values in the array, since order did 
not exist in old_tab]

Just to be clear, I want to end up with:

newschm3=# select * from new_tab order by name;
  name   | id  |             permits
---------+-----+----------------------------------
 baker   | 581 | {operator}
 lawless | 509 | {operator,originator}
 lcalvet | 622 | {originator}
 loomis  | 514 | {operator,originator}
 pig     | 614 | {operator,originator,supervisor}

-- George Young
-- 
"Are the gods not just?"  "Oh no, child.
What would become of us if they were?" (CSL)



Re: [SQL] Rule causes baffling error

2005-12-19 Thread Richard Huxton

Ken Winter wrote:

> Richard ~
> 
> Thanks for your response.
> 
> Can a trigger be written on a *view*?  I can't find anything in the
> PostgreSQL docs that answers this question.


There's nothing for them to fire against even if you could attach the 
trigger. I suppose you could have a statement-level trigger in more 
recent versions, but for row-level triggers there aren't any rows in the 
view to be affected.



> I originally wrote these actions (described in my original message) as a
> trigger on my base table, but then realized I was getting in deeper and
> deeper trouble because (a) I was getting into cascading triggers that I
> didn't want and (b) I need to enable some queries to access the base table
> without triggering these actions.  That's why I set up the view, and then I
> assumed that the only way I could implement these actions was as rules.


Hmm - the cascading should be straightforward enough to deal with. When 
you are updating, check whether NEW.expiration_date_and_time = now() and, 
if so, exit the trigger function (since there's nothing to do anyway).
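
For instance (a sketch, assuming a row-level BEFORE UPDATE trigger
function on the base table):

CREATE OR REPLACE FUNCTION my_data_upd() RETURNS trigger AS $$
BEGIN
    IF NEW.expiration_date_and_time = now() THEN
        -- This update is our own expiry step: let it through untouched,
        -- so the trigger does not cascade.
        RETURN NEW;
    END IF;
    -- ... normal history-keeping actions go here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;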


The other thing you might want to consider is whether the "live" data 
should be in the same table as the "old" data. That will depend on how 
you want to use it - conceptually, is it all one continuum, or is the 
"old" data just for archive purposes?


Now, having got this feature working, why do you want to bypass it? Will 
it be a specific user, involve specific patterns of values or what?


--
  Richard Huxton
  Archonet Ltd



Re: [SQL] Rule causes baffling error

2005-12-19 Thread Tom Lane
Richard Huxton  writes:
> Ken Winter wrote:
>> Can a trigger be written on a *view*?

> There's nothing for them to fire against even if you could attach the 
> trigger.

Currently we reject CREATE TRIGGER on a view, but it occurred to me the
other day that that could be relaxed, at least for BEFORE triggers.
The system could feed the trigger with the synthetic view row, and the
trigger could update the view's underlying tables and then return NULL
to suppress any actual "operation" on the view proper.

To do this, instead of erroring out in the rewriter if a view has no
DO INSTEAD rule, we would have to error out down in the guts of
the executor if control got as far as trying to actually
insert/update/delete a tuple in a view.

The trickiest part of this is probably generating the "old" row for
UPDATE and DELETE cases.  I think you'd need to adjust the planner
so that it would generate all the "old" view columns, rather than
the current situation in which it generates just the "new" columns
for an UPDATE, or no columns at all (only the CTID) for a DELETE.
I don't see any fundamental reason why this couldn't be made to work
though.
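
For concreteness, usage might look like this (purely hypothetical; as
noted above, CREATE TRIGGER on a view is rejected today):

CREATE FUNCTION my_data_now_upd() RETURNS trigger AS $$
BEGIN
    -- Redirect the synthetic view row into the underlying table ...
    UPDATE my_data SET user_name = NEW.user_name WHERE id = OLD.id;
    -- ... and return NULL so no actual operation is attempted on the view.
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_data_now_upd BEFORE UPDATE ON my_data_now
    FOR EACH ROW EXECUTE PROCEDURE my_data_now_upd();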

Triggers would be better than rules for quite a few view-rewriting
scenarios, mainly because you'd avoid all the gotchas with double
evaluation and so on.  So it seems like it might be worth doing.

regards, tom lane



Re: [SQL] Question on indexes

2005-12-19 Thread Alvaro Herrera
Emil Rachovsky wrote:
> 
> Hi,
> Can anyone show me a simple way of creating an index
> in PostgreSQL like this:
> create index indName on someTable(someIntColumn DESC)
> ?

Not using that particular syntax, but you can do that if you create the
appropriate operator classes.  Note that if you want to use a btree index
on a single column, you don't need a descending index, because
btree indexes can be scanned in either direction.  The opclass trick is
only needed if you want a multicolumn index with mixed sort directions.
(I guess in the other access methods it doesn't make much sense to think
of descending indexes.)
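
For the multicolumn case, a sketch of the opclass trick (untested; the
function and opclass names here are made up for illustration):

CREATE FUNCTION int4_desc_cmp(int4, int4) RETURNS int4
    AS 'SELECT btint4cmp($2, $1)'  -- swapped arguments reverse the order
    LANGUAGE sql IMMUTABLE;

CREATE OPERATOR CLASS int4_desc_ops FOR TYPE int4 USING btree AS
    OPERATOR 1 > ,   -- btree strategy 1 ("less than") maps to >
    OPERATOR 2 >= ,
    OPERATOR 3 = ,
    OPERATOR 4 <= ,
    OPERATOR 5 < ,
    FUNCTION 1 int4_desc_cmp(int4, int4);

CREATE INDEX indName ON someTable (someOtherColumn, someIntColumn int4_desc_ops);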

-- 
Alvaro Herrera                          http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [SQL] how to convert relational column to array?

2005-12-19 Thread Michael Fuhr
On Mon, Dec 19, 2005 at 11:06:12AM -0500, george young wrote:
> create table new_tab(name text, id int, permits text[]);
> 
> -- I insert one row per name:
> insert into new_tab select distinct name,id,cast('{}' as text[]) from old_tab;
> 
> Now I want to fold all the 'permits' values into the new permits arrays.

In PostgreSQL 7.4 and later you can build an array from a select,
so I think the following update should work (it did for me when I
tested it):

UPDATE new_tab SET permits = array(
  SELECT permits
  FROM old_tab
  WHERE old_tab.name = new_tab.name AND old_tab.id = new_tab.id
);

-- 
Michael Fuhr



Re: [SQL] how to convert relational column to array?

2005-12-19 Thread george young
On Mon, 19 Dec 2005 09:54:49 -0700
Michael Fuhr <[EMAIL PROTECTED]> threw this fish to the penguins:

> On Mon, Dec 19, 2005 at 11:06:12AM -0500, george young wrote:
> > create table new_tab(name text, id int, permits text[]);
> > 
> > -- I insert one row per name:
> > insert into new_tab select distinct name,id,cast('{}' as text[]) from 
> > old_tab;
> > 
> > Now I want to fold all the 'permits' values into the new permits arrays.
> 
> In PostgreSQL 7.4 and later you can build an array from a select,
> so I think the following update should work (it did for me when I
> tested it):
> 
> UPDATE new_tab SET permits = array(
>   SELECT permits
>   FROM old_tab
>   WHERE old_tab.name = new_tab.name AND old_tab.id = new_tab.id
> );

That's exactly what I needed.  Works great.

Thanks,
George
-- 
"Are the gods not just?"  "Oh no, child.
What would become of us if they were?" (CSL)



Re: [SQL] Rule causes baffling error

2005-12-19 Thread Ken Winter
Richard ~

Let me zoom out for a moment, for the bigger picture.

As you have inferred, what I'm trying to do is develop a history-preserving
table ("my_data" in the example that started this thread).  *Most* user
programs would see and manipulate this table as if it contained only the
current rows (marked by effective_date_and_time <= 'now' and
expiration_date_and_time = 'infinity').  
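
For reference, the view itself is presumably something like (a sketch
matching the predicates above):

CREATE VIEW my_data_now AS
    SELECT * FROM my_data
    WHERE effective_date_and_time <= now()
    AND expiration_date_and_time = 'infinity';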

When these programs do an INSERT, I need automatic actions that set the
effective and expiration timestamps to 'now' and 'infinity'; when they do
an UPDATE, I need automatic actions that save the old data in a history
record that is expired as of 'now', and put the new data in a record that's
effective 'now' and expires at 'infinity'; when they do a DELETE, I need an
automatic action that expires the target record as of 'now' rather than
actually deleting it.
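
The DELETE case, for example, might be written as (a sketch in the same
style as the UPDATE rules quoted below; untested):

CREATE OR REPLACE RULE del_my_data_now AS
ON DELETE TO my_data_now
DO INSTEAD
    /* Expire the target record instead of deleting it. */
    UPDATE my_data
    SET expiration_date_and_time = CURRENT_TIMESTAMP
    WHERE id = OLD.id
    AND expiration_date_and_time = 'infinity';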

However, I also need certain maintenance programs, designed to enable
certain users to correct inaccurately entered data.  These need to be able
to "rewrite history" by doing actions against "my_data" without these
automatic actions occurring.  It may prove advisable to provide some
automatic actions for these programs too, but they definitely won't be the
actions described above.  If the above actions were implemented as triggers,
all the ways I could think of to conditionally disable them (and possibly
replace them with other actions) seemed architecturally very clunky.  That's
when I decided I needed the "my_data_now" view, and from that I inferred
(apparently correctly) that the actions would have to be implemented as
rewrite rules.

The cascading problem was solvable.  But the solution was a bit hard to
reach because the user-invoked UPDATE action triggered both an INSERT and an
UPDATE on the same table (and a user DELETE triggered an UPDATE), so one
had to take into account that all of these triggered actions would cause
their triggers to fire again.  Not a deal-killer, but the solution felt
brittle.

Yes, I did consider having a "live" table and a separate "history" table.
The killer of that idea was my inability to find a way to implement foreign
keys that could refer to both tables and that could follow a record when it
was moved from "live" to "history".  Much of the history I'm trying to
preserve is not in the "my_data" table; it's in related tables that refer to
it.  I presumably could do this by not declaring the FKs to PostgreSQL, and
implementing the necessary referential integrity with triggers, but - well,
in a word, yuck.

As it happens, I have found a rewrite of my UPDATE rule that works, so my
immediate need is past.  FYI, the old update rule was:

CREATE OR REPLACE RULE upd_my_data_now AS
ON UPDATE TO my_data_now
DO INSTEAD
(
    /* Update current record, and make it effective now. */
    UPDATE my_data
    SET id = NEW.id,
        user_name = NEW.user_name,
        effective_date_and_time = CURRENT_TIMESTAMP
    WHERE effective_date_and_time = CURRENT_TIMESTAMP
    AND id = OLD.id;
    /* Insert a record containing the old values,
       and expire it as of now. */
    INSERT INTO my_data (
        effective_date_and_time,
        expiration_date_and_time,
        id,
        user_name)
    VALUES (
        OLD.effective_date_and_time,
        CURRENT_TIMESTAMP,
        OLD.id,
        OLD.user_name)
);

And the one that works is:

CREATE OR REPLACE RULE upd_my_data_now AS
ON UPDATE TO my_data_now
DO INSTEAD
(
    /* Expire the current record. */
    UPDATE my_data
    SET expiration_date_and_time = CURRENT_TIMESTAMP
    WHERE effective_date_and_time = OLD.effective_date_and_time
    AND id = OLD.id
    AND effective_date_and_time <= CURRENT_TIMESTAMP
    AND expiration_date_and_time >= CURRENT_TIMESTAMP;
    /* Insert a record containing the new values,
       effective as of now. */
    INSERT INTO my_data (
        effective_date_and_time,
        id,
        user_name)
    VALUES (
        CURRENT_TIMESTAMP,
        NEW.id,
        NEW.user_name)
);

The relevant change is that I'm now expiring the record with the old data
and inserting the one with the new data, rather than vice versa.  I still
don't know why the old rule didn't work and this one does, but hey,