[SQL] Archival of Live database to Historical database
Hello everyone, I have hit a limit in my knowledge and I am looking for some guidance. Currently I have two separate databases, one for live data, the other for historical data. The only real difference is that the historical database has a SERIAL in it so that the tables can keep more than one 'version history'.

What I would like to do, after my insert transaction to the live database, is have the information also transferred to the historical one. I can do this via Perl (and I have been doing it this way) using two database handles, but it is rather clumsy and I know there must be a 'better' or more 'elegant' solution. So I stumbled onto triggers and functions. All well and good. I create the trigger to fire the procedure after a successful insert into the table (and yes, I do have triggers on the 30 or so tables I use). The problem arises in the procedure: I don't know the syntax to reference another database. I assume there must be some way to simply say (in a function) "copy the inserted data into this other database as well". If it helps any, the tables have the same names and all the same fields (apart from the SERIAL in the historical version, but since that auto-increments I wouldn't have to worry about it).

I am interested in ideas, code and pointers as to whether this is a good idea or not.

Thank you.

Regards,
Steff
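[Editor's note: for reference, a minimal sketch of the per-table trigger approach described above, confined to a single database and written in the syntax of a current PostgreSQL rather than 7.x. The table names live_client and history_client, and their columns, are hypothetical. It copies each newly inserted row into a history table; it does not solve the cross-database part, which is the open question in this thread.]

CREATE FUNCTION copy_client_to_history() RETURNS trigger AS $$
BEGIN
    -- history_client has the same columns plus its own SERIAL key,
    -- which fills itself in and so is not listed here
    INSERT INTO history_client (order_id, client_id, active)
    VALUES (NEW.order_id, NEW.client_id, NEW.active);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER client_history_copy
    AFTER INSERT ON live_client
    FOR EACH ROW EXECUTE PROCEDURE copy_client_to_history();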
Re: [SQL] Archival of Live database to Historical database
Richard wrote:
> > Hello everyone, I have hit a limit in my knowledge and I am looking for
> > some guidance. Currently I have two separate databases, one for
> > live data, the other for historical data. The only difference really
> > being that the historical database has a SERIAL in it so that the tables
> > can keep more than one 'version history'.
> >
> > What I would like to do, after my insert transaction to the
> > live database, is have the information also transferred to the
> > historical one. I can do this via Perl (and I have been doing it
> > this way) using two database handles. This is rather clumsy and
> > I know there must be a 'better' or more 'elegant' solution.
>
> Not really (AFAIK) - this crops up fairly regularly but there's no way
> to do a cross-database query.

After going through the mailing list archive, I can see that yes, this is asked a lot, and that no, there is no real solution to it at present. A shame, to be sure.

> You could use rules/triggers to set a "dirty" flag for each record
> that needs copying - but it sounds like you're already doing that.
>
> If you wanted things to be more "real-time" you could look at
> LISTEN/NOTIFY

What I would ideally like to do is have the live database set up with an after-insert trigger so that the data is also copied across using a function. However, if cross-database functions or triggers are not possible, then I can't do this and will have to stick with the current scheme (two database handles). It's not pretty, but it works, which is the main thing.

Can I ask the PostgreSQL powers that be: how hard would it be to add the ability to reference different databases on the same machine? I know it might make sense to have the two on separate machines, but that would require hostname resolution and other silly things. All that is really needed is the ability to reference another database on the SAME machine. Of course, I can see this is a loaded gun. It would be very easy to do some very nasty things, and more than a few race conditions spring to mind.

Anyway, I look forward to getting screamed at for such a silly, preposterous idea ;)

Regards,
Steff
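[Editor's note: a rough sketch of the "dirty flag" plus LISTEN/NOTIFY idea Richard mentions, again in current PostgreSQL syntax with made-up table and channel names. A trigger signals a listener; a separate copier process (the existing Perl script with two database handles would do) wakes up, copies the flagged rows into the historical database through its own connection, and clears the flag.]

-- in the live database: flag inserted rows and signal a listener
ALTER TABLE live_client ADD COLUMN needs_copy boolean DEFAULT true;

CREATE FUNCTION flag_for_copy() RETURNS trigger AS $$
BEGIN
    NOTIFY client_changed;   -- a waiting copier process wakes up on this
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER client_flag_copy
    AFTER INSERT ON live_client
    FOR EACH ROW EXECUTE PROCEDURE flag_for_copy();

-- the copier process, on its own connections, would then run roughly:
--   LISTEN client_changed;
--   SELECT * FROM live_client WHERE needs_copy;
--   ... insert those rows into the historical database ...
--   UPDATE live_client SET needs_copy = false WHERE needs_copy;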
[SQL] Trigger Function and Html Output
Hello, I find I must turn once again to the list for help with what is probably another silly request. I have a view (made of 4 tables) and now find that users need the view to differ based on some criteria from the database. It's rather icky to go into, but I can figure that part out myself.

The long and short of it is this: I would like the view to return an HTML table. I understand that I can do this via a function and a trigger on select, massage the data, and then construct each line to simply be returned to the Perl program. The only problem is that, as far as I am aware, a trigger returns and is processed on a per-row basis for a select, so how would I get the view's column titles output for the table header? I ideally want to move as much of the Perl construction of the table as possible into the database: the head of the table, each row, and then of course closing the table off. I know this sounds like a strange way to do things, but I think this is the best way to go.

Many thanks,
stefs
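[Editor's note: as an illustration of the per-row formatting being asked about, a plain SQL sketch in which the view name client_view and its columns are hypothetical. It makes the database emit one <tr> per row; the table header would still have to come from somewhere else, which is exactly the problem raised above.]

SELECT '<tr><td>' || client_id || '</td><td>' || cli_business_name || '</td></tr>'
FROM client_view;
-- NULL columns would need COALESCE(...) in practice, or the whole string goes NULL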
Re: [SQL] Trigger Function and Html Output
Richard Huxton wrote:
> That's what the list is for, and it's only silly if you already know the
> answer :-)

Well, I do agree that the only stupid question is the one not asked, but there are some questions that are universally stupid, and I just hoped this wasn't one of them ;>

> > The long and short of it is this: I would like the view to
> > return an HTML table. I understand that I can do this via a function
> > and a trigger on select, massage the data, and then construct
> > each line to simply be returned to the Perl program. The only
> > problem is that, as far as I am aware, a trigger returns and
> > is processed on a per-row basis for a select, so how would I
> > get the view's column titles output for the table header?
>
> For this sort of stuff, I like to make as much use of Perl's DBI and
> HTML templating modules as I can. For the templating, it's trivial to
> write your own stuff for this sort of problem, but have a look at the
> various modules on CPAN - there'll be something there for you.

Actually, I have been using the Perl DBI and have written my own templating software. Maybe I should try to explain more. I have a Perl CGI which, until it is run, doesn't know the query. Nothing new there, as most queries are dynamic by nature, but this query comes from the user preferences, which are stored in the database. Each select works on a view. Rather than hardcode the view into the Perl CGI, I would rather have the table header/column titles returned as the first item as text/html (I know about the func procedure to get the table attributes) and then have all the formatting thereafter done by the database for each row. Maybe I am naive in thinking this way, but surely a database function formatting the returned string must be quicker than Perl (speaking generically, of course; I concede that there are times when the reverse is true).

I -am- a Perl convert, I truly am. I would just prefer to take the massaging of data and put it into a trigger function in the database. After all, I would rather use the database than just treat it as a large flat file ;)

> I've got to admit, I try to avoid using SELECT * if I can - maybe it's
> just me, but I've always felt if the database changes the code needs to
> break. I'd rather get errors than unexpected results. IMHO of course.

I don't like SELECT * either, but I do see that there are some justified uses for it. 'Never say never' in my book ;)

> > I ideally want to make as much of the Perl construction
> > of the table from the database, this includes the head of the table,
> > all and each row, and then of course closing the table 'off'. I know
> > this sounds like a strange way to do things, but I think this is the
> > best way to go.
>
> Definitely look at some of the general-purpose templating modules.
> They'll all handle tables.

Thank you for the input, and if I were just starting out I would agree with you. I can't really explain it any better than I have previously, but hopefully you will see that I want to use the database to do this. Hopefully that isn't that strange a request ;)

Many thanks,
stef
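[Editor's note: for the column-titles part, one way to pull a view's column names out of the system catalogues; the view name client_view is again hypothetical, and this is only a sketch of the idea, not what the thread itself settles on. The first text/html line returned to the CGI could be built from these names, with every data row formatted as in the earlier snippet.]

SELECT a.attname
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relname = 'client_view'
  AND a.attnum > 0          -- skip system columns
ORDER BY a.attnum;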
[SQL] breakage in schema with foreign keys between 7.0.3 and 7.1
Hello everyone, me again (apologies in advance :). I have been running a database under 7.0.3 for some months now, and it was all fine. The tables all loaded and it was working flawlessly. Then 7.1 came out and I noticed it had outer joins (which are a big win in one of the main views I use). So I started loading the schema into 7.1, but it seems to break.

Now, I have included the 3 tables below, but first I would like to explain some of the design criteria behind this.

1) I need to have order_id as a primary key across the system (system key?) so that I can pull rows out based on an order_id. The same goes for history_id in client.
2) I also need to have client_id as a secondary key across the system, as another application frontend references on client_id. It's icky, but it works.
3) I have taken out some of the non-important fields, so please don't tell me that I have over-normalised my data ;p

For some reason though, under 7.1 when trying to create the tables I get this error:

    UNIQUE constraint matching given keys for referenced table "client" not found

I know what it is saying, but I don't quite understand what has changed between 7.0.3 and 7.1.

CREATE TABLE action (
    ORDER_ID      integer   PRIMARY KEY,
    ORDERTYPE     integer   NOT NULL,
    client_id     char(16)  NOT NULL,
    priority      integer   DEFAULT 5 NOT NULL,
    creation_id   name      DEFAULT USER,
    creation_date datetime  DEFAULT NOW(),
    close_id      name      NULL,
    close_date    datetime  NULL,
    lock_id       name      NULL,
    lock_date     datetime  NULL
) \g

CREATE TABLE client (
    ORDER_ID    integer  REFERENCES action (ORDER_ID) ON UPDATE CASCADE INITIALLY DEFERRED,
    history_id  SERIAL,
    active      boolean,
    client_id   char(16) NOT NULL,
    change_id   name     DEFAULT USER,
    change_date datetime DEFAULT NOW(),
    PRIMARY KEY (ORDER_ID, history_id)
) \g

CREATE TABLE client_dates (
    ORDER_ID    integer  REFERENCES action (ORDER_ID) ON UPDATE CASCADE INITIALLY DEFERRED,
    LOCATION_ID integer  NOT NULL,
    history_id  integer  REFERENCES client (history_id) ON UPDATE CASCADE INITIALLY DEFERRED,
    active      boolean,
    client_id   char(16) REFERENCES client (client_id) ON UPDATE CASCADE INITIALLY DEFERRED,
    dte_action  integer  NULL,
    change_id   name     DEFAULT USER,
    change_date datetime DEFAULT NOW(),
    PRIMARY KEY (ORDER_ID, LOCATION_ID, history_id)
) \g

Thank you. I know it's something almost smackingly obvious, but I can't seem to understand why it was working and now isn't. I even went through the changelog!

Regards,
Stef
Re: [SQL] breakage in schema with foreign keys between 7.0.3 and 7.1
Stephan Szabo wrote:
> Hmm, don't know why it's not in changelog, but the spec requires that
> the target fields of a foreign key constraint are themselves constrained
> by a unique or primary key constraint.

Maybe it's time for me to go and re-read the changelog with a fine-tooth comb (I have been known to be blind to the obvious before, and if this is the case then I more than apologise :)

> 7.0 didn't actually check this,
> but 7.1 does. The reason for this is because while 7.0 would let you
> specify such a constraint, it wouldn't really work entirely properly
> if the field wasn't actually unique. You'll need a unique constraint
> on client.client_id.

Hrrm. The only problem with -that- is that client_id by itself is not unique, but in conjunction with order_id it is. order_id is wholly unique. Maybe I should just drop the foreign key on client_id then, although I did want referential integrity on client_id on an insert. Although, now I think about it, the criteria for having changes on client_id cascade are totally gone, and I could (read: will) just use a plain 'references' column.

In short, thank you, I have just figured out what an idiot I have been (again, I hear you all say ;)

Many thanks, and good work on PostgreSQL 7.1. It seems to be quite a bit quicker (and praise the lord for outer joins =)

stefs
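[Editor's note: since order_id plus client_id is stated above to be unique, one way to keep the referential integrity rather than dropping it, sketched against the tables in the previous message, is to declare that pair unique on client and have client_dates reference the pair. This is only an illustration of what the constraint would look like, assuming the pair really is unique across history versions; it is not what the thread itself concludes with.]

CREATE TABLE client (
    order_id    integer  REFERENCES action (order_id) ON UPDATE CASCADE INITIALLY DEFERRED,
    history_id  SERIAL,
    active      boolean,
    client_id   char(16) NOT NULL,
    change_id   name     DEFAULT USER,
    change_date datetime DEFAULT NOW(),
    PRIMARY KEY (order_id, history_id),
    UNIQUE (order_id, client_id)          -- gives the foreign key below a valid target
) \g

-- and in client_dates, instead of the single-column reference on client_id:
--   FOREIGN KEY (order_id, client_id)
--       REFERENCES client (order_id, client_id)
--       ON UPDATE CASCADE INITIALLY DEFERRED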
[SQL] Referential Integrity Question (Delete/Insert during Transaction)
Hello again everyone. I seem to have hit what I -think- may be a bug (but I am not crying wolf just yet ;). I have three tables: action, client and order_details. action has a primary key order_id, client references action (along with adding client_id as part of its primary key), and order_details references client (both parts of the primary key there). All foreign keys (order_id in client, and order_id + client_id in order_details) are set to INITIALLY DEFERRED. So far so good, I hope.

Now, I have a trigger that fires on insert, so I delete from the live database and then insert the changes rather than doing an update. Not great, but it shouldn't be a problem. The problem comes when I do this:

mms_post=# BEGIN;
BEGIN
mms_post=# DELETE FROM client WHERE order_id = 22;
DELETE 1
mms_post=# INSERT INTO client
mms_post-#     (cli_business_name, cli_chain_id, cli_business_type, cli_short_name, cli_sic,
mms_post-#      order_id, client_id, cli_agent_bank_id, cli_operating_name, creation_id,
mms_post-#      cli_web_page, cli_tcc, creation_date)
mms_post-# VALUES ('STEFS', '100-0333', 1, 'FHASDLKJH HFAKSDJ HKALSDJ', 2534, 22, '100-555',
mms_post-#         230, 'FHASDLKJH HFAKSDJ HKALSDJFH', 'jack', '[EMAIL PROTECTED]', 'R',
mms_post-#         '2001-06-18 13:46:45-04');
INSERT 24211 1
mms_post=# COMMIT;
ERROR:  referential integrity violation - key in client still referenced from order_details

Now, the way I understand it, shouldn't the integrity of any foreign keys be checked at the -end- of the transaction, after all the commands have been processed? It seems that the DELETE is being processed and rejected, even though the foreign key would be okay after the following INSERT. I have tried SET CONSTRAINTS as well, with no difference :\

Does this make any sense, or am I completely mad? (more than likely)

Regards,
Stefs.
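[Editor's note: for what it's worth, a minimal, self-contained illustration of the behaviour being expected here, using made-up parent/child tables. On a current PostgreSQL, deleting and re-inserting a referenced key inside one transaction does pass when the foreign key is declared DEFERRABLE INITIALLY DEFERRED, because the check only runs at COMMIT. Whether 7.1 behaved identically for the multi-column, SERIAL-keyed case above is not something this sketch settles.]

CREATE TABLE parent (
    id integer PRIMARY KEY
);

CREATE TABLE child (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES parent (id) DEFERRABLE INITIALLY DEFERRED
);

INSERT INTO parent VALUES (1);
INSERT INTO child  VALUES (10, 1);

BEGIN;
DELETE FROM parent WHERE id = 1;   -- child still points at 1, but the check is deferred
INSERT INTO parent VALUES (1);     -- the key exists again by the time the check runs
COMMIT;                            -- succeeds: the constraint is verified only here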