On Thu, Jan 26, 2023 at 7:39 AM Thomas CAS <t...@ikoula.com> wrote:

> Hello,
>
> I'm having trouble with a MariaDB cluster (2 nodes, master-slave) on
> Debian 11, and I've run out of ideas.
>
> *Environment:*
>
> Node1:
>   OS: Debian 11
>   Kernel: 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21)
>   Versions: resource-agents (4.7.0-1), pacemaker (2.0.5-2), corosync
>     (3.1.2-2), mariadb (10.5.18-0+deb11u1)
>
> Node2:
>   OS: Debian 11
>   Kernel: 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21)
>   Versions: resource-agents (4.7.0-1), pacemaker (2.0.5-2), corosync
>     (3.1.2-2), mariadb (10.5.18-0+deb11u1)
>
> The output of crm configure show is attached.
>
> *Problem:*
>
> When I restart Node2 (which is a slave), it comes back up correctly in
> the cluster:
>
> $ crm status
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: Node1 (version 2.0.5-ba59be7122) - partition with quorum
>   * Last updated: Thu Jan 26 12:04:57 2023
>   * Last change:  Thu Jan 26 11:39:58 2023 by root via cibadmin on Node2
>   * 2 nodes configured
>   * 3 resource instances configured
>
> Node List:
>   * Online: [ Node1 Node2 ]
>
> Full List of Resources:
>   * VIP (ocf::heartbeat:IPaddr2):        Started Node1
>   * Clone Set: MYSQLREPLICATOR [MYSQL] (promotable):
>     * Masters: [ Node1 ]
>     * Slaves: [ Node2 ]
>
> But it does not restore its replication configuration (SHOW SLAVE STATUS
> returns nothing). In the Node2 logs, I can see these messages, which
> explain why replication is not taking place:
>
> Jan 25 16:29:38  mysql(MYSQL)[22862]:    INFO: No MySQL master present - clearing replication state
> Jan 25 16:29:39  mysql(MYSQL)[22862]:    WARNING: MySQL Slave IO threads currently not running.
> Jan 25 16:29:39  mysql(MYSQL)[22862]:    ERROR: MySQL Slave SQL threads currently not running.
> Jan 25 16:29:39  mysql(MYSQL)[22862]:    ERROR: See  for details
> Jan 25 16:29:39  mysql(MYSQL)[22862]:    ERROR: ERROR 1200 (HY000) at line 1: Misconfigured slave: MASTER_HOST was not set; Fix in config file or with CHANGE MASTER TO
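>
> For reference, manually re-pointing the slave the way the error message
> suggests would look roughly like this; the user and password are
> placeholders rather than our real values, and MASTER_USE_GTID assumes
> GTID replication is in use (otherwise MASTER_LOG_FILE/MASTER_LOG_POS
> would be needed):
>
>     mysql -u root -e "
>       STOP SLAVE;
>       CHANGE MASTER TO
>         MASTER_HOST='Node1',          -- the current master per crm status
>         MASTER_USER='repl_user',      -- placeholder replication account
>         MASTER_PASSWORD='repl_pass',  -- placeholder
>         MASTER_USE_GTID=slave_pos;    -- assumes GTID-based replication
>       START SLAVE;
>     "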
>
> From what I see in the following file, Node2 does not find the master
> name, so it clears its replication state:
>
> /usr/lib/ocf/resource.d/heartbeat/mysql
>
>         # Master's node name, taken from the clone's notify environment
>         master_host=`echo $OCF_RESKEY_CRM_meta_notify_master_uname|tr -d " "`
>
>         if [ "$master_host" -a "$master_host" != ${NODENAME} ]; then
>             ocf_log info "Changing MySQL configuration to replicate from $master_host."
>             set_master
>             start_slave
>             if [ $? -ne 0 ]; then
>                 ocf_exit_reason "Failed to start slave"
>                 return $OCF_ERR_GENERIC
>             fi
>         else
>             # No master name found: clear any stored replication state
>             ocf_log info "No MySQL master present - clearing replication state"
>             unset_master
>         fi
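>
> One way to confirm what environment the agent actually sees would be to
> trace its operations (crmsh syntax; trace files are written under
> /var/lib/heartbeat/trace_ra/ by default) and then restart Node2:
>
>     # Capture the agent's full environment on each operation, then look
>     # for the notify master name in the traces written on Node2.
>     crm resource trace MYSQL
>     grep -r OCF_RESKEY_CRM_meta_notify_master_uname /var/lib/heartbeat/trace_ra/
>     crm resource untrace MYSQL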
>
> As it is a production environment, I performed a bare-metal restore of
> these machines onto 2 lab machines, and there I have no problem…
>
> In production there is a lot of write activity, but the servers are far
> from saturated.
>
> Thank you in advance for all the help you can give me.
>
> Best regards,
>

I'm sorry you've encountered this.

I don't understand why the resource agent checks
$OCF_RESKEY_CRM_meta_notify_master_uname during the start operation. That
value gets set only during a notify operation. That looks like a bug in the
resource agent.

I've filed an issue against it here:
https://github.com/ClusterLabs/resource-agents/issues/1839
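
In the meantime, one quick way to see where the promoted instance
actually lives, independent of the agent's notify environment (using the
clone name from your attached configuration), is:

    # Prints the node(s) running the promotable clone and their role
    crm_resource --resource MYSQLREPLICATOR --locate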



>
> Thomas Cas  |  Managed Services Support Technician
>
> PHONE: +33 3 51 25 23 26       WEB: www.ikoula.com/en
>
> IKOULA Data Center 34 rue Pont Assy - 51100 Reims - FRANCE
>
> Before printing this letter, think about the impact on the environment!
>


-- 
Regards,

Reid Wahl (He/Him)
Senior Software Engineer, Red Hat
RHEL High Availability - Pacemaker
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
