Hi all;

I have a 3 node cluster as follows:

Slony 2.0.7 on all three nodes, built from source on each node

nodes 1 & 2 = postgres 9.1.0

node3 = postgres 9.0.4

I did this:


1) I used pgbench -i to initialize the master database before I set up & 
started replication
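
i.e. something like this (database name taken from the runs below; scale 
factor 1 per the pgbench output):

  pgbench -i rep_bench_node1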

2) I created a new column as a primary key on the pgbench_history table, 
like so:

alter table pgbench_history add column pgb_hist_id serial;
alter table pgbench_history add primary key (pgb_hist_id);

3) I set up and started replication


4) then I ran pgbench like this:
  pgbench -t 800 rep_bench_node1


5) after replication was started both slaves sync'd up fine

at this point all 3 nodes have 800 rows in the pgbench_history table

6) then I ran pgbench again, reducing the number of transactions and thus 
the overall number of rows in the pgbench_history table:

$ pgbench -t 300 rep_bench_node1
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 300
number of transactions actually processed: 300/300
tps = 265.106193 (including connections establishing)
tps = 285.114730 (excluding connections establishing)


7) the master (node1) now has 300 rows in the pgbench_history table, 
    however both slaves now have 1100 rows (it didn't do the delete on 
the slaves)

    at this point each subsequent run of "pgbench -t 300 
rep_bench_node1" replaces the rows on node1 but appends another 300 rows 
to each slave.
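
A quick sanity check run on each node shows the divergence:

  select count(*) from pgbench_history;

node1 returns 300; nodes 2 and 3 return 1100, and they keep growing with 
each run.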




Thoughts?

Am I doing something wrong?


Thanks in advance


-- 
---------------------------------------------
Kevin Kempter       -       Consistent State
A PostgreSQL Professional Services Company
           www.consistentstate.com
---------------------------------------------

_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
