OK, in the spirit of giving something back: first, I'm going to thank everyone who has been helping me on the lists, and second, this may not be great and I'm currently using root for everything, but it seems to work at first glance.
a) I was wondering if someone with more advanced eyes than mine would take a look and let me know if they see any problems.
b) I was thinking this should really make it into the documentation as a guide to basic clustering, maybe with a few tweaks if anyone has time.

Regards
ManicD

===========
CENTOS 5.7 basic install, no extras!!

ON BOTH SERVERS RUN:

rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
rpm -Uvh http://repo.baruwa.org/el5/i386/baruwa-release-5-0.noarch.rpm
yum install nano -y
nano /etc/sysconfig/selinux
==========Change the SELINUX value. If you don't want to check the SELinux log, set it to "disabled".
yum update -y
==========(DO NOT REMOVE THE FIREWALL ON PRODUCTION SERVERS)==========
====================yum erase iptables -y ============================
======================================================================
yum install puppet -y
yum install git -y
cd /usr/local/src
git clone git://github.com/akissa/baruwa-puppet.git
mv baruwa-puppet/modules /etc/puppet/
mv baruwa-puppet/manifests /etc/puppet/
nano /etc/puppet/manifests/toasters/baruwa/init.pp
==============================EDIT PUPPET FIRST=================================
puppet -v /etc/puppet/manifests/toasters/baruwa/init.pp
baruwa-admin changepassword <username that was set in init.pp>
=================================================================================
=================================================================================
==========================Clustering: Start RabbitMQ=============================
=================================================================================
=================================================================================
==========Clustering
==========First we have to cluster the message queue broker (RabbitMQ).
==========Both servers must share the same Erlang 'cookie': pick one server and copy its cookie to the other via scp.
==========On Server1 (back up the existing cookie first):
cp /var/lib/rabbitmq/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie.old
==========On Server2 (this copies Server2's cookie over to Server1):
scp /var/lib/rabbitmq/.erlang.cookie [email protected]:/var/lib/rabbitmq/.erlang.cookie
==========Say yes to continue connecting.
==========Enter server1's password (to allow the connection).
==========Then on both servers set ownership of the file (i.e. run this on each):
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
==========After you have run the above, run this on each:
/etc/init.d/rabbitmq-server restart
==========On server1:
nano /etc/rabbitmq/rabbitmq.config
==========Put this in. Looking at some online literature, you need single quotes around the node names if using an FQDN (server.domain.com):
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]},
    {cluster_nodes, [
      '[email protected]',
      '[email protected]'
    ]}
  ]}
].
==========You then need to restart RabbitMQ with:
/etc/init.d/rabbitmq-server restart
==========If it fails, have a look at the logs (cat /var/log/rabbitmq/startup_*) and if you see something
==========like "Bad cookie in table definition rabbit_user_permission: rabbit@<ServerName>" you need to rename two directories as below:
mv /var/lib/rabbitmq/mnesia/rabbit\@<ServerName> /var/lib/rabbitmq/mnesia/rabbit.old
mv /var/lib/rabbitmq/mnesia/rabbit\@<ServerName>-plugins-expand /var/lib/rabbitmq/mnesia/rabbitplugins.old
==========It doesn't really matter what you rename them to, as long as you change the name. You also need to include the '\' so that
==========the '@' after it registers properly!
==========You should then see it come up with:
==========Restarting rabbitmq-server: SUCCESS
==========rabbitmq-server.
==========After this, return to the previous point and continue from there (you only have to add the host once! It will appear on the
==========other server).
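==========A quick way to confirm the broker cluster actually formed is to look at `rabbitmqctl cluster_status` on either node: both node names should be listed. Below is a minimal sketch of that check; the node names match the hostnames used above, but the status text in the script is a hypothetical stand-in for the real command output.

```shell
# Sketch: verify that both nodes appear in `rabbitmqctl cluster_status` output.
# On a live node you would capture the real output instead:
#   status=$(rabbitmqctl cluster_status)
# The sample below is a hypothetical stand-in for that output.
status="[{nodes,[{disc,['[email protected]','[email protected]']}]},
 {running_nodes,['[email protected]','[email protected]']}]"

missing=""
for node in [email protected] [email protected]; do
    # record the node in $missing if it is absent from the status text
    case "$status" in
        *"$node"*) ;;                      # found, nothing to do
        *) missing="$missing $node" ;;     # not found
    esac
done

if [ -z "$missing" ]; then
    result="cluster OK"
else
    result="missing:$missing"
fi
echo "$result"
```

==========If a node is missing here, go back over the cookie copy and the rabbitmq.config on both servers before continuing.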
=================================================================================
=================================================================================
=============================Now we move onto MySQL==============================
=================================================================================
=================================================================================
=========SERVER 1
=========First configure master node 1:
nano /etc/my.cnf
=========Edit the file and change the lines to the following:
[mysqld]
server-id = 1
log_bin = /var/lib/mysql/ib_logfile
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = baruwa
binlog_ignore_db = mysql
binlog_ignore_db = sa_bayes
=========Then restart MySQL:
service mysqld restart
=========Log in to MySQL (note: no space after -p, or it gets read as a database name):
mysql -u root -pygRBZiyiauh83Zry
=========Create the baruwa database and grant replication to the user the OTHER server connects as (the slave on server2 connects to this master):
create database baruwa;
grant replication slave on *.* to 'root'@'server2.domain.com' identified by 'ygRBZiyiauh83Zry';
FLUSH PRIVILEGES;
show master status;
quit
=========Note the output of show master status as we will need it in a moment.
=========SERVER 2
=========Again we need to edit the config to set up replication:
nano /etc/my.cnf
=========Edit the file and change the lines to the following:
[mysqld]
server-id = 2
log_bin = /var/lib/mysql/ib_logfile
expire_logs_days = 10
max_binlog_size = 100M
binlog_do_db = baruwa
binlog_ignore_db = mysql
binlog_ignore_db = sa_bayes
=========Then restart MySQL:
service mysqld restart
=========And log in with:
mysql -u root -pygRBZiyiauh83Zry
create database baruwa;
=========Set the following using the output of show master status from server 1 (your log file name and position will differ):
CHANGE MASTER TO MASTER_HOST='server1.domain.com', MASTER_USER='root', MASTER_PASSWORD='ygRBZiyiauh83Zry', MASTER_LOG_FILE='ib_logfile.000003', MASTER_LOG_POS=60540;
start slave;
show slave status\G
=========The output you're looking for here is:
=========Slave_IO_Running: Yes
=========Slave_SQL_Running: Yes
=========If this fails, wait 5 mins and run the command again; if it fails a second time, reboot, wait 5 mins and run the command again.
quit
=========SERVER 1
=========Log in to MySQL:
mysql -u root -pygRBZiyiauh83Zry
=========Set the database to baruwa:
use baruwa;
=========SERVER 2
=========Log in to MySQL:
mysql -u root -pygRBZiyiauh83Zry
=========View the tables to see if the domain35 table has replicated. If it has not, reboot both servers and wait 5 mins
==========before trying to view the tables again.
show tables;
=========Set up a replication user on server 2 (this grant is for the slave on server1, so it names the host server1 connects from):
grant replication slave on *.* to 'root'@'server1.domain.com' identified by 'ygRBZiyiauh83Zry';
show master status;
=========Again note the output of show master status as we will need it in a moment.
=========SERVER 1
=========Set the following using the output of show master status from server 2 (again, your log file name and position will differ):
CHANGE MASTER TO MASTER_HOST='server2.domain.com', MASTER_USER='root', MASTER_PASSWORD='ygRBZiyiauh83Zry', MASTER_LOG_FILE='ib_logfile.000003', MASTER_LOG_POS=1759;
start slave;
show slave status\G
=========Again, the output you're looking for here is:
=========Slave_IO_Running: Yes
=========Slave_SQL_Running: Yes
=========If this fails, wait 5 mins and run the command again; if it fails a second time, reboot, wait 5 mins and run the command again.
=================================================================================
=================================================================================
================================End Clustering===================================
=================================================================================
=================================================================================

--
View this message in context: http://baruwa-users-list.963389.n3.nabble.com/Brauwa-Clustering-on-CENTOS-tp3957508.html
Sent from the Baruwa users list mailing list archive at Nabble.com.
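P.S. As a sanity check for the MySQL replication section above: both replication threads have to report Yes in `show slave status\G` on each server. A small sketch that counts those lines; the sample text in the script is a hypothetical stand-in for the real output, which on a live server you would pipe in from mysql itself.

```shell
# Sketch: confirm both replication threads are running by counting the
# "Slave_*_Running: Yes" lines in `show slave status\G` output.
# Live usage would look something like:
#   mysql -u root -pygRBZiyiauh83Zry -e 'show slave status\G' \
#     | grep -c -E 'Slave_(IO|SQL)_Running: Yes'
# The sample below is a hypothetical stand-in for that output.
sample="Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0"

running=$(printf '%s\n' "$sample" | grep -c -E 'Slave_(IO|SQL)_Running: Yes')

if [ "$running" -eq 2 ]; then
    echo "replication OK"
else
    echo "replication broken ($running of 2 threads running)"
fi
```

If the count comes back under 2, `Last_Error` in the same status output is the place to look next.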
_______________________________________________ Keep Baruwa FREE - http://pledgie.com/campaigns/12056

