Hi,

I'm trying to initialize a new cluster using the altperl scripts provided in the install. Here are the specs:

MASTER: Linux installed in a VM (8.0.4 build-744019; the VM runs on a Windows 7 host on an HP EliteBook laptop)
  uname -a: Linux localhost.localdomain 2.6.18-308.13.1.el5 #1 SMP Tue Aug 21 17:10:18 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
  total shared memory available: 2 GB
  PostgreSQL installed: 8.3.21 (with enable_thread_safety)
  Slony installed: 2.1.1

SLAVE: Linux installed on a Dell desktop (the primary OS on this machine is Linux)
  uname -a: Linux msteben-centos.autorevenue.com 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
  total shared memory available: 2 GB
  PostgreSQL installed: 8.3.21 (with enable_thread_safety)
  Slony installed: 2.1.1

PROBLEM: When I run the altperl script ./slonik_init_cluster from the slave, the connection gets through to the master, but all it does on the master is print the replication schema (all the sl_ tables) over and over again in the postgres logs. It never gets to creating the _reptest schema and defining the sl_ tables. When I query pg_stat_activity on the master, the one query actually running is the CREATE TABLE sl_nodes statement, and it never finishes. I finally have to kill the connection, and nothing gets done.

I've attached the postgresql.conf from both servers as well as the slon_tools.conf from the slave. The admin user referenced in slon_tools.conf is "slony" and is defined as a superuser.

Any help appreciated. If you need any other info, please let me know.

Mark Steben
DBA at AutoRevenue
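A CREATE TABLE that sits in pg_stat_activity forever is almost always blocked waiting for a lock held by some other session, e.g. an earlier slonik or slon process that is still connected, or a client sitting "idle in transaction". Before killing the connection, it is worth identifying the blocker on the master. A sketch of such a check (assuming superuser access; the procpid/current_query column names are the pre-9.2 ones, matching 8.3):

```sql
-- On the master: show each backend waiting on a lock, together with
-- the backend(s) holding a conflicting granted lock on the same object.
-- This is a rough diagnostic, not an exhaustive lock-conflict matrix.
SELECT w.pid            AS waiting_pid,
       wq.current_query AS waiting_query,
       h.pid            AS holding_pid,
       h.mode           AS held_mode,
       hq.current_query AS holding_query
FROM pg_locks w
JOIN pg_locks h
  ON  h.locktype = w.locktype
  AND h.database      IS NOT DISTINCT FROM w.database
  AND h.relation      IS NOT DISTINCT FROM w.relation
  AND h.transactionid IS NOT DISTINCT FROM w.transactionid
  AND h.granted
  AND h.pid <> w.pid
JOIN pg_stat_activity wq ON wq.procpid = w.pid
JOIN pg_stat_activity hq ON hq.procpid = h.pid
WHERE NOT w.granted;
```

If the holder turns out to be an "idle in transaction" session or a leftover slon daemon, disconnecting it (pg_cancel_backend() on the master, or killing the client process) should let slonik_init_cluster complete.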
#
# Author: Christopher Browne
# Copyright 2004-2009 Afilias Canada
# Revised extensively by Steve Simms
# Keeping the following three lines for backwards compatibility in
# case this gets incorporated into a 1.0.6 release.
#
# TODO: The scripts should check for an environment variable
# containing the location of a configuration file. That would
# simplify this configuration file and allow Slony-I tools to still work
# in situations where it doesn't exist.
#
if ($ENV{"SLONYNODES"}) {
    require $ENV{"SLONYNODES"};
} else {

    # The name of the replication cluster.  This will be used to
    # create a schema named _$CLUSTER_NAME in the database which will
    # contain Slony-related data.
    $CLUSTER_NAME = 'reptest';

    # The directory where Slony should record log messages.  This
    # directory will need to be writable by the user that invokes
    # Slony.
    # $LOGDIR = '/var/log/slony1';
    $LOGDIR = '/home/postgres/slony1';

    # (Optional) If you would like to use Apache's rotatelogs tool to
    # manage log output, uncomment the following line and ensure that
    # it points to the executable.
    #
    # $APACHE_ROTATOR = '/usr/local/apache/bin/rotatelogs';

    # Log line suffix for the Slony-I log.  For options, see the date(1)
    # man page.
    #
    # $LOG_NAME_SUFFIX = '%a';

    # SYNC check interval (slon -s option)
    # $SYNC_CHECK_INTERVAL = 1000;

    # Which node is the default master for all sets?
    $MASTERNODE = 1;

    # Which debugging level to use? [0-4]
    $DEBUGLEVEL = 0;

    # Include add_node lines for each node in the cluster.  Be sure to
    # use host names that will resolve properly on all nodes
    # (i.e. only use 'localhost' if all nodes are on the same host).
    # Also, note that the user must be a superuser account.

    add_node(node     => 1,
             host     => '10.10.6.136',
             dbname   => 'slonytst',
             port     => 5432,
             user     => 'slony',
             password => 'elephant1234');

    add_node(node     => 2,
             host     => '10.10.4.34',
             dbname   => 'slonytst',
             port     => 5432,
             user     => 'slony',
             password => 'elephant1234');

    # add_node(node     => 3,
    #          host     => 'server3',
    #          dbname   => 'database',
    #          port     => 5432,
    #          user     => 'postgres',
    #          password => '');

    # If the node should only receive event notifications from a
    # single node (e.g. if it can't access the other nodes), you can
    # specify a single parent.  The downside to this approach is that
    # if the parent goes down, your node becomes stranded.

    # add_node(node     => 4,
    #          parent   => 3,
    #          host     => 'server4',
    #          dbname   => 'database',
    #          port     => 5432,
    #          user     => 'postgres',
    #          password => '');
}
# The $SLONY_SETS variable contains information about all of the sets
# in your cluster.
$SLONY_SETS = {
    "seta" => {
        "set_id"       => 1,
        "table_id"     => 1,
        "sequence_id"  => 1,
        "pkeyedtables" => [
            "public.clients",
            "public.queues",
            "public.email_history",
            "public.mailer_queue",
            "public.queuenodes",
            "public.states",
            "public.emailrcpt_mileage_history",
        ],
        "sequences" => [
            "mmseq",
            "mailpollseq",
            "mailer_queue_mailer_queue_pk_seq",
            "email_history_id_seq",
            "emailrcpt_mileage_history_pk_id_seq",
        ],
    },

    # "set2" => {
    #     "set_id"       => 2,
    #     "table_id"     => 389,
    #     "sequence_id"  => 204,
    #     "pkeyedtables" => ["public.queuenodes"],
    #     "keyedtables"  => {},
    #     "sequences"    => [],
    # },
};
# Keeping the following three lines for backwards compatibility in
# case this gets incorporated into a 1.0.6 release.
#
# TODO: The scripts should check for an environment variable
# containing the location of a configuration file. That would
# simplify this configuration file and allow Slony tools to still work
# in situations where it doesn't exist.
#
if ($ENV{"SLONYSET"}) {
    require $ENV{"SLONYSET"};
}
# Please do not add or change anything below this point.
1;
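For debugging, it can also help to bypass the wrapper. With the node definitions above, slonik_init_cluster generates a slonik script roughly along these lines (a hand-written approximation from the config, not the literal output); you can capture the real script with `slonik_init_cluster > init.slonik`, inspect it, and feed it to slonik by hand to see exactly which statement hangs:

```
cluster name = reptest;

node 1 admin conninfo = 'host=10.10.6.136 dbname=slonytst user=slony port=5432 password=elephant1234';
node 2 admin conninfo = 'host=10.10.4.34 dbname=slonytst user=slony port=5432 password=elephant1234';

init cluster (id = 1, comment = 'Node 1');
store node (id = 2, event node = 1, comment = 'Node 2');

store path (server = 1, client = 2, conninfo = 'host=10.10.6.136 dbname=slonytst user=slony port=5432 password=elephant1234');
store path (server = 2, client = 1, conninfo = 'host=10.10.4.34 dbname=slonytst user=slony port=5432 password=elephant1234');
```

If `init cluster` alone hangs in the same way, the problem is on the master database side (locks, leftover _reptest schema) rather than in the altperl tooling.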
[Attachment: postgresql.conf.slave — binary data]
[Attachment: postgresql.conf.master — binary data]
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general
