hello junko-san! i think i might have spotted an error in your patch. please correct me if i'm wrong!
consider this code, using
* node01 = master
* node02 = slave
* replication_hostname_suffix = "-mysqlrep"

> set_master() {
>     local new_master_host master_log_file master_log_pos
>     local master_params
>
>     new_master_host=$1

here, new_master_host=node01.

>     # Keep replication position
>     get_slave_info

now master_host=node01-mysqlrep (parsed in get_slave_info from
"SHOW SLAVE STATUS\G").

>     if [ "$master_log_file" -a "$new_master_host" = "$master_host" ]; then

this branch should leave the slave alone if the master did not change
since the last sync. consider:

    crm node standby node02; crm node online node02

the slave should pick up where it left off, using mysql's own way of
saving the last replication information to master.info. but with the
suffix, "$new_master_host" (node01) never equals "$master_host"
(node01-mysqlrep), so this comparison must somehow be adapted too, right?

>         # master_params=", MASTER_LOG_FILE='$master_log_file', \
>         #     MASTER_LOG_POS=$master_log_pos"
>         ocf_log info "Kept master pos for $master_host : $master_log_file:$master_log_pos"
>         rm -f $tmpfile
>         return
>     else
>         master_log_file=`$CRM_ATTR -n $new_master_host-log-file-${INSTANCE_ATTR_NAME} -q -G`
>         master_log_pos=`$CRM_ATTR -n $new_master_host-log-pos-${INSTANCE_ATTR_NAME} -q -G`
>
>         if [ -n "$master_log_file" -a -n "$master_log_pos" ]; then
>             master_params=", MASTER_LOG_FILE='$master_log_file', \
>                 MASTER_LOG_POS=$master_log_pos"
>             ocf_log info "Restored master pos for $new_master_host : $master_log_file:$master_log_pos"

here the ra tries to restore the master information from the cib. (this
information is put there via unset_master(), see below.) depending on how
we store the master information in the cib (see below), these variables
may refer to either node01 or node01-mysqlrep.

>         fi
>     fi
>
>     # Informs the MySQL server of the master to replicate
>     # from. Accepts one mandatory argument which must contain the host
>     # name of the new master host. The master must either be unchanged
>     # from the last master the slave replicated from, or freshly
>     # reset with RESET MASTER.
>     master_host="${new_master_host}${OCF_RESKEY_replication_hostname_suffix}"

i think this part has to be moved up so that the outlined issues can be
handled. moreover, setting the variable master_host might influence how
the script works outside of this function (master_host is a global
variable, if i'm not mistaken).

>     ocf_run $MYSQL $MYSQL_OPTIONS_LOCAL $MYSQL_OPTIONS_REPL \
>         -e "CHANGE MASTER TO MASTER_HOST='$master_host', \
>             MASTER_USER='$OCF_RESKEY_replication_user', \
>             MASTER_PASSWORD='$OCF_RESKEY_replication_passwd' $master_params"
>
>     rm -f $tmpfile
> }

now consider this code:

> unset_master() {
>     ...
>     # Save current state
>     get_slave_info
>     $CRM_ATTR -n $master_host-log-file-${INSTANCE_ATTR_NAME} -v $master_log_file
>     $CRM_ATTR -n $master_host-log-pos-${INSTANCE_ATTR_NAME} -v $master_log_pos
>     rm -f $tmpfile

in our case, this would save the position under node01-mysqlrep in the
cib, while set_master() tries to read it back under the plain node name
(node01), right?

this is the first shot at trying out the proposed patches. maybe it is
sufficient to a) restructure set_master() a little bit and b) modify
get_slave_info() to strip the replication suffix from the running config?

feedback appreciated!

cheers,
raoul

--
____________________________________________________________________
DI (FH) Raoul Bhatia M.Sc.          email.  r.bha...@ipax.at
Technischer Leiter
IPAX - Aloy Bhatia Hava OG          web.    http://www.ipax.at
Barawitzkagasse 10/2/2/11           email.  off...@ipax.at
1190 Wien                           tel.    +43 1 3670030
FN 277995t HG Wien                  fax.    +43 1 3670030 15
____________________________________________________________________
_______________________________________________________
Linux-HA-Dev: Linux-HA-Dev@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha-dev
Home Page: http://linux-ha.org/
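ps: option (b) could be sketched roughly like this. untested, and
strip_replication_suffix is just an illustrative helper name, not part of
the RA:

```shell
#!/bin/sh
# Rough sketch of option (b): normalize the host parsed from
# "SHOW SLAVE STATUS\G" by stripping the replication hostname suffix,
# so it compares equal to the cluster node name again.
# strip_replication_suffix is an illustrative name, not part of the RA.

OCF_RESKEY_replication_hostname_suffix="-mysqlrep"

strip_replication_suffix() {
    # ${var%pattern} removes a trailing match; if the suffix is not
    # present (or is empty), the value is returned unchanged.
    echo "${1%"$OCF_RESKEY_replication_hostname_suffix"}"
}

# e.g. at the end of get_slave_info:
master_host=$(strip_replication_suffix "node01-mysqlrep")

# the comparison in set_master() would then work as intended:
new_master_host=node01
if [ -n "$master_host" ] && [ "$new_master_host" = "$master_host" ]; then
    echo "master unchanged, keeping slave position"
fi
```

since ${var%pattern} is plain POSIX parameter expansion, this would not
add any new dependencies to the agent.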