I think I'm doing this wrong. Before I go and re-invent the wheel, I
thought I'd check with the list. (I originally got the idea from IRC.)

Table "Master"
--> Table "Child1"
--> Table "Child2"
....
--> Table "ChildN"
Table "Update" --> a table that receives some key columns from the source tables.


The Master table is populated with a huge amount of data every minute,
and based on date ranges the rows are diverted to the different child
tables (classic partitioning).
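Roughly, the setup looks like this (a simplified sketch; the table names, columns, and date range here are illustrative, not the real schema):

```sql
-- Simplified sketch of the partitioning setup; names, columns, and the
-- date range are illustrative.
CREATE TABLE master (
    uniq_id         bigint,
    start_date_time timestamp NOT NULL,
    week_id         integer
);

-- Each child holds one date range, enforced by a CHECK constraint.
CREATE TABLE child1 (
    CHECK (start_date_time >= DATE '2009-01-01'
       AND start_date_time <  DATE '2009-02-01')
) INHERITS (master);

-- A routing trigger on master then diverts each incoming row
-- to the matching child table.
```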

What I'm trying to do here is to consolidate a (subset of the) data
inserted into the individual child tables into a table called "update".

I tried it via a trigger function fired BEFORE INSERT (I also tried
AFTER INSERT).

But what happens is that when the insert into the child table is in
progress, update_table_trigger fires and the "update" table gets the
row, _but_ the child table is left untouched.


How do I go about this? For now, I've turned off the triggers.


CREATE OR REPLACE FUNCTION update_table_trigger()
  RETURNS trigger AS
$BODY$
BEGIN
  INSERT INTO update
  SELECT NEW.uniq_id,
         NEW.start_date_time,
         NEW.week_id
  WHERE NOT EXISTS (SELECT 1 FROM update
                    WHERE uniq_id = NEW.uniq_id
                      AND start_date_time = NEW.start_date_time
                      AND week_id = NEW.week_id);
  RETURN NULL;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION update_table_trigger() OWNER TO "operator";
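For what it's worth, my reading of the PostgreSQL docs is that a row-level BEFORE trigger returning NULL tells the executor to skip the operation for that row, which would explain the untouched child table, while returning NEW lets the insert proceed. So I suspect a variant like this (same logic, only the return value changed) is what I actually want:

```sql
-- Same trigger, but RETURN NEW so the row is still inserted into the
-- child table; a BEFORE ROW trigger that returns NULL cancels the
-- insert of that row.
CREATE OR REPLACE FUNCTION update_table_trigger()
  RETURNS trigger AS
$BODY$
BEGIN
  INSERT INTO update
  SELECT NEW.uniq_id, NEW.start_date_time, NEW.week_id
  WHERE NOT EXISTS (SELECT 1 FROM update
                    WHERE uniq_id = NEW.uniq_id
                      AND start_date_time = NEW.start_date_time
                      AND week_id = NEW.week_id);
  RETURN NEW;  -- was RETURN NULL, which suppresses the row
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;
```

(For an AFTER trigger the return value is ignored, so RETURN NULL would be fine there.)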

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general