Thank you for the feedback. I am considering just centralizing pk generation at HQ. Basically, every so often, each branch will connect back to HQ to get a block of pks for each table. This way, the performance impact is limited and maintenance is simple.
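
For the curious, here is a minimal sketch of what the HQ-side allocator might look like. The pk_blocks table, the allocate_block function, and the block size are hypothetical names for illustration, not anything agreed on yet:

    -- HQ side: one row per table, tracking the next unissued key.
    CREATE TABLE pk_blocks (
        table_name text PRIMARY KEY,
        next_start bigint NOT NULL DEFAULT 1
    );

    -- Hand a branch a contiguous block of p_size keys starting at the
    -- returned value. SELECT ... FOR UPDATE serializes concurrent branches.
    CREATE FUNCTION allocate_block(p_table text, p_size integer)
    RETURNS bigint AS $$
    DECLARE
        v_start bigint;
    BEGIN
        SELECT next_start INTO v_start
          FROM pk_blocks
         WHERE table_name = p_table
           FOR UPDATE;
        UPDATE pk_blocks
           SET next_start = v_start + p_size
         WHERE table_name = p_table;
        RETURN v_start;  -- branch uses v_start .. v_start + p_size - 1
    END;
    $$ LANGUAGE plpgsql;

Each branch would then call, say, SELECT allocate_block('orders', 1000); and assign keys locally from that range until it runs out.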


thanks.


From: Joe Conway <[EMAIL PROTECTED]>
To: anon permutation <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Merging Data from Multiple DB
Date: Mon, 03 Jan 2005 07:20:54 -0800

anon permutation wrote:
For performance reasons, each branch must have its own database, and a centralized transactional system is not an option.

I was considering just centralizing primary key generation, but that seems very slow too.

Segmenting primary keys among the branches is doable, but it is too much of a maintenance nightmare.

What do you suggest?

We have a similar application. What we did is this:

1. Each database instance is assigned a unique identifier, stored in a 1 row, 1 column table (with a trigger to ensure it stays that way).

2. Write a function that can take two integers, convert them to text, and concatenate them. In our case we convert to hex and concatenate with a delimiter character.

3. Write another function, called something like 'nextrowid', that takes a sequence name as its argument. Use the sequence name to get the next value from the sequence, look up the local unique identifier from the table defined in #1, and pass them both to the function defined in #2.

4. Use nextrowid('seq_name') to generate your primary keys, as sketched below.
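
A rough sketch of what steps 1-4 might look like (table and function names are illustrative, not necessarily what we used; the one-row trigger from step 1 is omitted for brevity):

    -- Step 1: one-row, one-column table holding this instance's id.
    CREATE TABLE instance_id (id integer NOT NULL);
    INSERT INTO instance_id VALUES (1);  -- a different value per database

    -- Step 2: convert two integers to hex and join with a delimiter.
    CREATE FUNCTION make_rowid(integer, bigint)
    RETURNS text AS $$
        SELECT to_hex($1) || '-' || to_hex($2);
    $$ LANGUAGE sql;

    -- Step 3: combine the local instance id with the next sequence value.
    CREATE FUNCTION nextrowid(text)
    RETURNS text AS $$
        SELECT make_rowid((SELECT id FROM instance_id), nextval($1));
    $$ LANGUAGE sql;

    -- Step 4: use it for primary keys, e.g.:
    -- CREATE SEQUENCE foo_seq;
    -- CREATE TABLE foo (id text PRIMARY KEY DEFAULT nextrowid('foo_seq'));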

HTH,

Joe


