Re: [SQL] transaction and triggers
2008/1/18, Gerardo Herzig <[EMAIL PROTECTED]>:
> Hi all. I'm puzzled again. Just thinking:
>
> As I'm having fun trying to make my own replication system, I'm stuck in
> this situation:
> Consider a simple table with a unique index on the `id' field, and a
> function that will fail, such as
>
> insert into test (id) values (1);
> insert into test (id) values (1);
>
> This will fail and the transaction will be rolled back, but since the basis
> of my replication system is row-level triggers, the first time the insert
> is called the trigger will be executed. I would like to be able to stack
> the triggers in some way, so that they fire only after a successful
> execution of the whole function.

If the transaction is rolled back, the changes made by your trigger to the
local database will also be canceled. Unless you perform manipulations on
remote databases, you have no problem.

Any changes made to remote databases, for example if you call some dblink
functions, are not transactional and will not be rolled back. In that case
you have to rethink your design, as there is no "ON COMMIT" trigger (yet?).

--
Filip Rembiałkowski
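[Editorial note: Filip's point can be checked with a small, self-contained sketch. The test_log table and trigger names below are made up for illustration and stand in for whatever bookkeeping the replication trigger does. Because the trigger's INSERT into test_log runs in the same transaction, a failed unique check wipes it out along with everything else.]

-- Illustrative setup: a table with a unique constraint and a row-level
-- trigger that logs every insert into a bookkeeping table.
CREATE TABLE test (
    id integer UNIQUE
);

CREATE TABLE test_log (
    id      serial PRIMARY KEY,
    test_id integer,
    action  text
);

CREATE OR REPLACE FUNCTION log_test_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO test_log (test_id, action) VALUES (NEW.id, TG_OP);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_replication
    AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE log_test_change();

BEGIN;
insert into test (id) values (1);  -- trigger fires, a row goes into test_log
insert into test (id) values (1);  -- unique violation aborts the transaction
COMMIT;                            -- actually rolls back: test AND test_log stay empty

SELECT count(*) FROM test_log;     -- 0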
Re: [SQL] transaction and triggers
Gerardo Herzig wrote:
> Right. But today, that trigger does some other work, which includes writing
> some files to disk, so there is my problem. Crap, I guess I will have to
> review the main logic.

It's probably better to move the actual file writing to an external listener
process -- the transaction only does a NOTIFY, which is guaranteed to be
delivered only when the transaction commits. So if it aborts, no spurious
write occurs.

--
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
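[Editorial note: a minimal sketch of the split Alvaro suggests. The channel and table names are made up, and since NOTIFY in the 8.x releases carries no payload, the details of what to write go into a work table that the listener reads.]

-- Work table read by the external listener process (illustrative schema).
CREATE TABLE pending_writes (
    id      bigserial PRIMARY KEY,
    payload text NOT NULL
);

-- The trigger only records what should be written and sends a notification;
-- the notification is delivered only if the transaction commits.
CREATE OR REPLACE FUNCTION queue_file_write() RETURNS trigger AS $$
BEGIN
    INSERT INTO pending_writes (payload) VALUES (NEW.id::text);
    NOTIFY file_write_queue;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_queue_write
    AFTER INSERT ON test
    FOR EACH ROW EXECUTE PROCEDURE queue_file_write();

-- The external process keeps a connection open, runs
--     LISTEN file_write_queue;
-- and, each time it is notified, reads (and clears) pending_writes and does
-- the actual file writing outside the transaction.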
Re: [SQL] transaction and triggers
Filip Rembiałkowski wrote:
> 2008/1/18, Gerardo Herzig <[EMAIL PROTECTED]>:
>> Hi all. I'm puzzled again. Just thinking:
>>
>> As I'm having fun trying to make my own replication system, I'm stuck in
>> this situation:
>> Consider a simple table with a unique index on the `id' field, and a
>> function that will fail, such as
>>
>> insert into test (id) values (1);
>> insert into test (id) values (1);
>>
>> This will fail and the transaction will be rolled back, but since the basis
>> of my replication system is row-level triggers, the first time the insert
>> is called the trigger will be executed. I would like to be able to stack
>> the triggers in some way, so that they fire only after a successful
>> execution of the whole function.
>
> If the transaction is rolled back, the changes made by your trigger to the
> local database will also be canceled. Unless you perform manipulations on
> remote databases, you have no problem.
>
> Any changes made to remote databases, for example if you call some dblink
> functions, are not transactional and will not be rolled back. In that case
> you have to rethink your design, as there is no "ON COMMIT" trigger (yet?).

Right. But today that trigger does some other work, which includes writing
some files to disk, so there is my problem. Crap, I guess I will have to
review the main logic.

Thanks!
Gerardo
[SQL] transaction and triggers
Hi all. I'm puzzled again. Just thinking:

As I'm having fun trying to make my own replication system, I'm stuck in
this situation:
Consider a simple table with a unique index on the `id' field, and a
function that will fail, such as

insert into test (id) values (1);
insert into test (id) values (1);

This will fail and the transaction will be rolled back, but since the basis
of my replication system is row-level triggers, the first time the insert is
called the trigger will be executed. I would like to be able to stack the
triggers in some way, so that they fire only after a successful execution of
the whole function.

I'm also reading about the NOTIFY/LISTEN mechanism and the rule system as a
workaround for this, but the fact is that there is a lot of client code, and
it would take a big amount of time to change it.

Any suggestions?

Thanks!
Gerardo
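[Editorial note: for reference, the rule-system alternative mentioned above could look something like the sketch below; all names are illustrative. Note that a rule's extra action runs inside the same transaction as the triggering statement, so it is rolled back on failure just like the row-level trigger.]

-- Hypothetical change-log table and a rewrite rule that also inserts into it
-- on every INSERT to test. Same rollback behaviour as the trigger approach.
CREATE TABLE test_changes (
    id      serial PRIMARY KEY,
    test_id integer,
    action  text
);

CREATE RULE test_log_insert AS
    ON INSERT TO test
    DO ALSO
    INSERT INTO test_changes (test_id, action) VALUES (NEW.id, 'INSERT');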
Re: [SQL] transaction and triggers
Alvaro Herrera wrote:
> Gerardo Herzig wrote:
>> Right. But today, that trigger does some other work, which includes
>> writing some files to disk, so there is my problem. Crap, I guess I will
>> have to review the main logic.
>
> It's probably better to move the actual file writing to an external
> listener process -- the transaction only does a NOTIFY, which is
> guaranteed to be delivered only when the transaction commits. So if it
> aborts, no spurious write occurs.

Mmmhmm, sounds good... I will give it a try on Monday. Now it's beer time :)

Thanks all.
Gerardo
Re: [SQL] transaction and triggers
On Fri, 18 Jan 2008 12:16:04 -0300
Gerardo Herzig <[EMAIL PROTECTED]> wrote:
> Right. But today, that trigger does some other work, which includes
> writing some files to disk, so there is my problem. Crap, I guess I will
> have to review the main logic.

I built a replication system that syncs up dozens of systems in a
multi-master environment spanning multiple continents in near real time, and
it works flawlessly, so don't give up hope. It is doable. I can't give you
the code because it was written under contract and was based heavily on our
specific business requirements, but I can give you a few pointers.

You have discovered the basic problem of trying to replicate in full real
time. You'll probably have to give up on that. Instead, focus on making
updates to the local database. Create a replication table or tables that you
update with triggers. Basically this needs to be a log of every change to
the database, in a structured way.

Once you have the replication table(s) you can create external programs that
connect to the master and update the slave. In the slave you can track the
last ID that completed. Do the insert/update/delete in a transaction so that
you have a guarantee that your database is up to date to a very specific
point. Note that you can have multiple slaves in this scenario and, in fact,
the slaves can have slaves using the exact same scheme, giving you a
hierarchy.

If you need multi-master, you just need another process to feed your local
changes up to the master. This is not just a matter of making the master a
slave, though; if you do that you get into a feedback loop. Also, if you
need multi-master, you have to think about your sequencing. If you need
unique IDs on some tables you will have to set up ranges of sequences per
server, or have a central sequence server. We used a combination of both, as
well as specifying that certain tables could only be inserted into on one
system. Of course, this system doesn't need to be the same as the top of the
hierarchy and, in fact, different tables can have different generator
systems.

Hope this gets you started. There are still lots of gotchas on the way.

--
D'Arcy J.M. Cain <[EMAIL PROTECTED]>       |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.
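[Editorial note: a minimal sketch of the log-table approach D'Arcy describes. All names are illustrative, and only the key of the changed row is logged here; a real system would also capture the changed column data.]

-- Master side: a change log populated by row-level triggers on each
-- replicated table (here only the test table).
CREATE TABLE replication_log (
    id         bigserial PRIMARY KEY,
    table_name text        NOT NULL,
    operation  text        NOT NULL,   -- 'INSERT', 'UPDATE' or 'DELETE'
    key_value  text        NOT NULL,   -- primary key of the affected row
    logged_at  timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO replication_log (table_name, operation, key_value)
        VALUES (TG_TABLE_NAME, TG_OP, OLD.id::text);
        RETURN OLD;
    ELSE
        INSERT INTO replication_log (table_name, operation, key_value)
        VALUES (TG_TABLE_NAME, TG_OP, NEW.id::text);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_log_change
    AFTER INSERT OR UPDATE OR DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE log_change();

-- Slave side: remember the last log entry that was applied, and apply new
-- entries inside a transaction so the slave is consistent to a known point.
CREATE TABLE replication_progress (
    master_name  text PRIMARY KEY,
    last_applied bigint NOT NULL DEFAULT 0
);

-- The external sync program then does, per batch (driven from outside):
--   BEGIN;
--   SELECT last_applied FROM replication_progress WHERE master_name = 'master1';
--   -- fetch rows from the master's replication_log with id > last_applied,
--   -- replay each insert/update/delete locally,
--   UPDATE replication_progress SET last_applied = <highest id replayed>
--       WHERE master_name = 'master1';
--   COMMIT;

-- Multi-master sequencing (one of the options D'Arcy mentions): give each
-- server its own non-overlapping range for generated IDs, for example
--   server A: CREATE SEQUENCE test_id_seq MINVALUE 1          MAXVALUE 999999999;
--   server B: CREATE SEQUENCE test_id_seq MINVALUE 1000000000 MAXVALUE 1999999999;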