This article explains how to set up HA for MySQL Router, as announced in a previous article about where the router should stand.
For this setup, I will use Pacemaker (part of the Red Hat High Availability Add-on and available on RHEL, CentOS, Oracle Linux, …).
Of course we need a MySQL InnoDB Cluster but we won’t really use it for the HA setup of the MySQL Router.
Installing Pacemaker
The first step is to install pacemaker on all the machines we will use for our “MySQL Router Cluster”:
# yum install pacemaker pcs resource-agents
Now we need to start the pcsd service and enable it at boot (on all machines):
# systemctl start pcsd.service
# systemctl enable pcsd.service
It's now time to set up authentication. This operation is again executed on all machines that are part of the MySQL Router Cluster:
# echo MyStrongPassw0rd | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully.
And on one of the nodes, we can now execute the following command:
[root@mysql1 ~]# pcs cluster auth mysql1 mysql2 mysql3 -u hacluster -p MyStrongPassw0rd --force
mysql1: Authorized
mysql2: Authorized
mysql3: Authorized
Note that mysql1, mysql2 and mysql3 are the 3 nodes that will be used for the MySQL Router Cluster.
We will now create the cluster (the name cannot exceed 15 chars) by launching the following command on one node:
[root@mysql1 ~]# pcs cluster setup --force --name routercluster mysql1 mysql2 mysql3
The output of the command should look like this:
Destroying cluster on nodes: mysql1, mysql2, mysql3...
mysql3: Stopping Cluster (pacemaker)...
mysql1: Stopping Cluster (pacemaker)...
mysql2: Stopping Cluster (pacemaker)...
mysql1: Successfully destroyed cluster
mysql2: Successfully destroyed cluster
mysql3: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'mysql1', 'mysql2', 'mysql3'
mysql1: successful distribution of the file 'pacemaker_remote authkey'
mysql3: successful distribution of the file 'pacemaker_remote authkey'
mysql2: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
mysql1: Succeeded
mysql2: Succeeded
mysql3: Succeeded

Synchronizing pcsd certificates on nodes mysql1, mysql2, mysql3...
mysql1: Success
mysql2: Success
mysql3: Success

Restarting pcsd on the nodes in order to reload the certificates...
mysql3: Success
mysql1: Success
mysql2: Success
Starting the cluster
We can now start the cluster (running the following command on one node):
[root@mysql1 ~]# pcs cluster start --all
mysql3: Starting Cluster...
mysql1: Starting Cluster...
mysql2: Starting Cluster...
And we can already check its status:
[root@mysql1 ~]# crm_mon -1
Stack: corosync
Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 20 10:00:36 2018
Last change: Fri Jul 20 10:00:08 2018 by hacluster via crmd on mysql1

3 nodes configured
0 resources configured

Online: [ mysql1 mysql2 mysql3 ]

No active resources
Properties
Now we can set some cluster properties and resource defaults: we disable fencing (STONITH), we tell the cluster to keep resources running even when quorum is lost, and with migration-threshold=1 a resource is moved to another node after its first failure:
[root@mysql1 ~]# pcs property set stonith-enabled=false
[root@mysql1 ~]# pcs property set no-quorum-policy=ignore
[root@mysql1 ~]# pcs resource defaults migration-threshold=1
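If you want to double check what we just configured, pcs can display the current cluster properties and resource defaults (the exact output depends on your pcs version, so it is omitted here):

[root@mysql1 ~]# pcs property list
[root@mysql1 ~]# pcs resource defaults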
Adding resources
VIP
The first resource we will use is the VIP (Virtual IP) that the application will use to connect to MySQL (to the router):
[root@mysql1 ~]# pcs resource create Router_VIP ocf:heartbeat:IPaddr2 \
      ip=192.168.87.5 cidr_netmask=24 op monitor interval=5s
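If needed, you can also review the parameters of the resource we just created (pcs 0.9 syntax, as shipped with EL7):

[root@mysql1 ~]# pcs resource show Router_VIP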
We can now check that our resource is running in the cluster:
[root@mysql1 ~]# crm_mon -1
Stack: corosync
Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 20 10:20:43 2018
Last change: Fri Jul 20 10:19:44 2018 by root via cibadmin on mysql1

3 nodes configured
1 resource configured

Online: [ mysql1 mysql2 mysql3 ]

Active resources:

Router_VIP     (ocf::heartbeat:IPaddr2):       Started mysql1
This is perfect!
MySQL Router
Before adding MySQL Router as a resource, we need to be sure that the router is installed and bootstrapped on each node. As a reminder, this is how to proceed:
[root@mysql1 ~]# mysqlrouter --bootstrap clusteradmin@mysql1:3306 --user=mysqlrouter --force
Please enter MySQL password for clusteradmin:

Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'mycluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'mycluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'mycluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
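As an optional sanity check (not part of the original procedure), once the mysqlrouter service is running on a node you can connect through it locally, for example with the clusteradmin account used for the bootstrap, and see which cluster member answers on the read/write port:

[root@mysql1 ~]# mysql -u clusteradmin -p -h 127.0.0.1 -P 6446 -e "SELECT @@hostname"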
Now we can add the resource that will manage MySQL Router in our Pacemaker cluster. We will use systemd.
You can verify that a systemd service for mysqlrouter is available to Pacemaker using the following command:
[root@mysql1 ~]# pcs resource list | grep router
service:mysqlrouter - systemd unit file for mysqlrouter
systemd:mysqlrouter - systemd unit file for mysqlrouter
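Alternatively, you can check that the systemd unit itself is present on the system (the MySQL Router packages ship a mysqlrouter.service unit file):

[root@mysql1 ~]# systemctl list-unit-files | grep mysqlrouter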
OK, so let's add the resource and let it run on all nodes (as a clone); this way there is no need to start mysqlrouter on another node only when the running one has an issue:
[root@mysql1 ~]# pcs resource create mysqlrouter systemd:mysqlrouter clone
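If you want a router failure to be detected faster than the default monitor interval (typically 60 seconds for a systemd resource), a possible tweak, not part of the original setup, is to lower the interval on the primitive:

[root@mysql1 ~]# pcs resource update mysqlrouter op monitor interval=5s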
Let's verify that the resource has been added:
[root@mysql1 ~]# crm_mon -1
Stack: corosync
Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 20 12:12:39 2018
Last change: Fri Jul 20 12:12:31 2018 by root via cibadmin on mysql1

3 nodes configured
4 resources configured

Online: [ mysql1 mysql2 mysql3 ]

Active resources:

Router_VIP     (ocf::heartbeat:IPaddr2):       Started mysql1
 Clone Set: mysqlrouter-clone [mysqlrouter]
     Started: [ mysql1 mysql2 mysql3 ]
Now we need to tell the cluster that the VIP must stand on a node where a router instance is also running:
[root@mysql1 ~]# pcs constraint colocation add Router_VIP \
      with mysqlrouter-clone score=INFINITY
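You can list the constraints to confirm that the colocation rule has been recorded:

[root@mysql1 ~]# pcs constraint show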
Test
Let's verify that the VIP is located on mysql1, as crm_mon shows:
[root@mysql2 ~]# ip add sh eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:f9:23:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.87.3/24 brd 192.168.87.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.87.5/24 brd 192.168.87.255 scope global secondary eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef9:23f1/64 scope link
       valid_lft forever preferred_lft forever
Perfect!
Now we will kill the mysqlrouter process on mysql1 and see what the cluster does.
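For example, one simple way to simulate the failure on mysql1 (any method that stops the process abruptly will do):

[root@mysql1 ~]# pkill -9 mysqlrouter

The cluster's reaction can then be observed with crm_mon: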
[root@mysql2 ~]# crm_mon -1
Stack: corosync
Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 20 12:37:29 2018
Last change: Fri Jul 20 12:36:51 2018 by hacluster via crmd on mysql1

3 nodes configured
4 resources configured

Online: [ mysql1 mysql2 mysql3 ]

Active resources:

Router_VIP     (ocf::heartbeat:IPaddr2):       Started mysql2
 Clone Set: mysqlrouter-clone [mysqlrouter]
     Started: [ mysql1 mysql2 mysql3 ]
Excellent, now we see that the VIP is located on mysql2. Let's verify:
[root@mysql2 ~]# ip add sh eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:f9:23:f1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.87.3/24 brd 192.168.87.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.87.5/24 brd 192.168.87.255 scope global secondary eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fef9:23f1/64 scope link
       valid_lft forever preferred_lft forever
So now that the VIP runs on mysql2, we will shut down the machine and see what happens:
[root@mysql3 ~]# crm_mon -1
Stack: corosync
Current DC: mysql1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum
Last updated: Fri Jul 20 12:38:09 2018
Last change: Fri Jul 20 12:36:51 2018 by hacluster via crmd on mysql1

3 nodes configured
4 resources configured

Online: [ mysql1 mysql3 ]
OFFLINE: [ mysql2 ]

Active resources:

Router_VIP     (ocf::heartbeat:IPaddr2):       Starting mysql1
 Clone Set: mysqlrouter-clone [mysqlrouter]
     Started: [ mysql1 mysql3 ]
We can see that the cluster does its job, and that MySQL Router, even when moved off the application servers, is no longer a single point of failure!
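One practical note if you want to repeat these failover tests (a suggestion, not part of the original walkthrough): Pacemaker keeps a failure history (failcount) for the router instance that was killed; you can reset it, for example, with:

[root@mysql1 ~]# pcs resource cleanup mysqlrouter-clone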
Should you not set start/stop and promote/demote serialization?
Ordering Constraints:
stop mysqlrouter then demote Router_VIP (score:INFINITY) (id:order-mysqlrouter-Router_VIP-INFINITY)
promote Router_VIP then start mysqlrouter (score:INFINITY) (id:order-Router_VIP-mysqlrouter-INFINITY)
start Router_VIP then start mysqlrouter (kind:Serialize) (id:order-Router_VIP-mysqlrouter-Serialize)
stop mysqlrouter then stop Router_VIP (kind:Serialize) (id:order-mysqlrouter-Router_VIP-Serialize)
Colocation Constraints:
Router_VIP with mysqlrouter (score:INFINITY) (id:colocation-Router_VIP-mysqlrouter-INFINITY)
I set this up using your article with these added ordering constraints, and it looks like a solid solution not only for the mysqlrouter single point of failure issue (with InnoDB Cluster) but also when used with a single-instance master-master setup to enforce no concurrent writes.
I used the following too in my configuration:
Ordering Constraints:
stop mysqlrouter then demote Router_VIP (score:INFINITY) (id:order-mysqlrouter-Router_VIP-INFINITY)
promote Router_VIP then start mysqlrouter (score:INFINITY) (id:order-Router_VIP-mysqlrouter-INFINITY)
start Router_VIP then start mysqlrouter (kind:Serialize) (id:order-Router_VIP-mysqlrouter-Serialize)
stop mysqlrouter then stop Router_VIP (kind:Serialize) (id:order-mysqlrouter-Router_VIP-Serialize)
Colocation Constraints:
Router_VIP with mysqlrouter (score:INFINITY) (id:colocation-Router_VIP-mysqlrouter-INFINITY)
I have been using this setup over the past month and am considering using it outside of InnoDB Cluster too, for handling simple geo DR scenarios between master-master setups, since the VIP will enforce a single write target at all times.
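For readers who want to experiment with something similar on the setup from this article, a rough, untested sketch of the two Serialize ordering constraints with pcs could look like this (note that the clone created earlier is named mysqlrouter-clone, and that the promote/demote constraints only make sense if your VIP is a promotable resource, so they are left out here):

[root@mysql1 ~]# pcs constraint order start Router_VIP then start mysqlrouter-clone kind=Serialize
[root@mysql1 ~]# pcs constraint order stop mysqlrouter-clone then stop Router_VIP kind=Serialize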
Hi,
Very useful information and this is what I have been searching for.
I have installed InnoDB cluster (node 1, node 2, node 3), and installed MySQL Router HA with Pacemaker (node 4, node 5) as explained in this article.
I am not finding information on how the application connects to the above remote router. We have 300+ applications that we are planning to connect through the above router (node 4, node 5), and I am not sure how to set up each application to look for the router. Could you please help me?
thanks
Raj
You need to use the virtual IP managed by Pacemaker.
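For example, with the VIP used in this article (192.168.87.5) and the router's default read/write port, an application or the mysql client would connect like this (app_user is just a placeholder for your application account):

mysql -u app_user -p -h 192.168.87.5 -P 6446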
However, it’s recommended to see MySQL Router as an extension of the connector and therefore use a router on each application server.
Cheers