MySQL Group Replication as HA solution

MySQL Group Replication is just one component of the upcoming MySQL HA solution, as Matt Lord explained in his GR quick start guide. So while mysql-router is back to school to become smarter and learn how to take the right decisions, we can already use existing third-party solutions. To be honest, people who are already using a load balancer/proxy with MySQL asynchronous, semi-synchronous or another virtually synchronous replication solution, and who want to evaluate and migrate to MySQL’s native Group Replication, might not want to migrate the load balancer at the same time as they move to GR. A step-by-step migration, one component at a time, is always recommended in a complex architecture.

So in this post, I want to cover HAProxy with a dedicated health check, applied to what might be the most common architecture. Matt already explained in his quick start guide how to use GR with HAProxy. He also provided a health check script and an HAProxy configuration where only one member receives traffic (recommended for WRITEs) while the other ones stay as backup.

What, then, is that most common architecture?

Due to the current GR limitations (DDL, foreign keys, …), it is recommended to send the WRITEs to only one node (unless you really know your workload and what you are doing). For the READs, we can use any of the group members (unless they still have a large queue of transactions to process).
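To see whether a member still has a large queue, you can look at Performance Schema; for example (assuming MySQL credentials are available via ~/.my.cnf):

mysql -N -e "SELECT COUNT_TRANSACTIONS_IN_QUEUE FROM performance_schema.replication_group_member_stats"

This counter is the natural input for the lag threshold used by the health check script described below.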

Usually people then use two different ports: one to send the WRITEs and one to send the READs. Of course the application should be able to determine which type of query it needs to run and use the right connection (db handler).
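For example, using the mysql client against the proxy (host, user and schema below are hypothetical):

# writes always land on the single active member
mysql -h haproxy.example.com -P 3307 -u app -p -e "INSERT INTO db1.t1 VALUES (1)"
# reads are balanced across all ONLINE members
mysql -h haproxy.example.com -P 3308 -u app -p -e "SELECT COUNT(*) FROM db1.t1"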

This is an illustration of the architecture:

[Figure: InnoDB Cluster architecture with HAProxy routing writes and reads]

So HAProxy is listening on :3307 to handle all writes and sends them to only one dedicated member. Failover for writes will happen only if MySQL has a problem or becomes read-only. Reads are sent to port :3308 and HAProxy will balance those reads across all members of the group.

This is the HAProxy configuration:

frontend mysql-gr-front_write
    bind *:3307
    mode tcp
    default_backend mysql-gr-back_write

backend mysql-gr-back_write
    mode tcp
    balance leastconn
    # the xinetd health check replies over HTTP
    option httpchk
    # only mysql1 receives writes; when it comes back up after a failure,
    # sessions opened on the backup servers are shut down
    server mysql1 192.168.90.2:3306 check port 6446 inter 1000 rise 1 fall 2 on-marked-up shutdown-backup-sessions
    server mysql2 192.168.90.3:3306 check port 6446 inter 1000 rise 1 fall 2 backup
    server mysql3 192.168.90.4:3306 check port 6446 inter 1000 rise 1 fall 2 backup

frontend mysql-gr-front_read
    bind *:3308
    mode tcp
    default_backend mysql-gr-back_read

backend mysql-gr-back_read
    mode tcp
    balance leastconn
    option httpchk
    # all members can serve reads; the check on port 6447 removes members
    # that are not ONLINE or that are lagging too far behind
    server mysql1 192.168.90.2:3306 check port 6447 inter 1000 rise 1 fall 2
    server mysql2 192.168.90.3:3306 check port 6447 inter 1000 rise 1 fall 2
    server mysql3 192.168.90.4:3306 check port 6447 inter 1000 rise 1 fall 2
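Once this is in place, the configuration can be validated and reloaded (standard HAProxy usage, assuming a systemd-based system):

haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy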

We will then set up two different xinetd services to check the status of the MySQL Group Replication member and decide if the node can be used for reads and/or writes.

To perform such a check, I’m using Matt Lord’s script, which I modified to accept two arguments:

  1. the maximum number of transactions allowed in a member’s applier queue before it is considered lagging (default 100)
  2. the role: WRITE or READ

This script is available on GitHub: https://github.com/lefred/mysql_gr_routing_check
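For reference, this is a minimal sketch of the logic such a health check implements; it is not the actual script (see the GitHub link above for the real one), and it assumes MySQL credentials are available via ~/.my.cnf:

#!/bin/bash
# Hypothetical sketch of a GR routing health check.
# Usage: mysql_gr_routing_check.sh <max_queue> <write|read>
MAX_QUEUE=${1:-100}   # max transactions allowed in the applier queue
ROLE=${2:-read}       # write or read

# member state in the group (should be ONLINE)
STATE=$(mysql -N -B -e "SELECT MEMBER_STATE FROM performance_schema.replication_group_members WHERE MEMBER_ID = @@server_uuid")

# transactions waiting to be applied on this member
QUEUE=$(mysql -N -B -e "SELECT COUNT_TRANSACTIONS_IN_QUEUE FROM performance_schema.replication_group_member_stats")

# a write candidate must not be read-only
RO=$(mysql -N -B -e "SELECT @@global.read_only")

if [ "$STATE" = "ONLINE" ] && [ "$QUEUE" -lt "$MAX_QUEUE" ] && { [ "$ROLE" != "write" ] || [ "$RO" = "0" ]; }
then
    printf "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\n"
    printf "Group Replication member is a viable routing candidate for %s.\r\n" "$ROLE"
else
    printf "HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\n"
fi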

We then have:

/etc/xinetd.d/mysql_gr_routing_check_write

# default: on
# description: check to see if the node is a viable routing candidate
service mysql_gr_routing_check_write
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 6446
        wait            = no
        user            = mysql
        server          = /usr/local/bin/mysql_gr_routing_check.sh
        server_args     = 100 write
        log_on_failure  += USERID
        only_from       = localhost 192.168.90.0/24
        per_source      = UNLIMITED
}

/etc/xinetd.d/mysql_gr_routing_check_read

# default: on
# description: check to see if the node is a viable routing candidate
service mysql_gr_routing_check_read
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 6447
        wait            = no
        user            = mysql
        server          = /usr/local/bin/mysql_gr_routing_check.sh
        server_args     = 100 read
        log_on_failure  += USERID
        only_from       = localhost 192.168.90.0/24
        per_source      = UNLIMITED
}

Don’t forget to update /etc/services with:

mysql_gr_routing_check_write  6446/tcp  # MySQL Group Replication health check (write)
mysql_gr_routing_check_read   6447/tcp  # MySQL Group Replication health check (read)
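After that, restart xinetd so it picks up the two new services, and verify that both ports are listening (assuming a systemd-based system):

systemctl restart xinetd
ss -ltn | grep -E ':(6446|6447)'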

So now we can test the health check on both ports:

[root@mysql1 bin]# telnet 192.168.90.2 6446
Trying 192.168.90.2...
Connected to 192.168.90.2.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Group Replication member is a viable routing candidate for write.
Connection closed by foreign host.


[root@mysql1 bin]# telnet 192.168.90.2 6447
Trying 192.168.90.2...
Connected to 192.168.90.2.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40

Group Replication member is a viable routing candidate for read.
Connection closed by foreign host.

Perfect 😉
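Since the check replies with plain HTTP, curl should work too if you prefer it over telnet:

curl -i http://192.168.90.2:6446
curl -i http://192.168.90.2:6447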

This is what this setup looks like in the HAProxy web interface:

[Screenshot: HAProxy stats page showing the mysql-gr-back_write and mysql-gr-back_read backends]

Let’s check this in action:

[Video demo]

