ProxySQL is the leading proxy and load balancing solution for MySQL. It offers great features like query caching, multiplexing, mirroring, read/write splitting, routing, etc… The latest enhancement in ProxySQL is native support for MySQL Group Replication: there is no longer any need to use an external script within the scheduler, as I explained in this previous post.
This implementation supports Groups in Single-Primary and in Multi-Primary mode. It is even possible to set up a Multi-Primary Group but dedicate writes to only one member.
René, the main developer of ProxySQL, went even further. For example, in a cluster of 7 nodes (a Group of 7 members) where all nodes are writers (Multi-Primary mode), it’s possible to decide to have only 2 writers, 3 readers and 2 backup-writers. This means that ProxySQL will see all the nodes as possible writers but will only route writes to 2 of them (adding them to the writer hostgroup, because in this example we decided to limit it to 2 writers). It will then add 2 others to the backup-writers hostgroup, which holds the remaining writer candidates, and finally add the 3 remaining ones to the readers hostgroup.
It’s also possible to limit access to a member that is slower at applying the replicated transactions (when its apply queue reaches a threshold).
It is time to have a look at this new ProxySQL version. The version supporting MySQL Group Replication is 1.4.0 and is currently only available on GitHub (but stay tuned for a new release soon).
So let’s have a look at what is new for users. When you connect to the admin interface of ProxySQL, you can see a new table: mysql_group_replication_hostgroups
ProxySQL> show tables;
+-------------------------------------+
| tables                              |
+-------------------------------------+
| global_variables                    |
| mysql_collations                    |
| mysql_group_replication_hostgroups  |
| mysql_query_rules                   |
| mysql_replication_hostgroups        |
| mysql_servers                       |
| mysql_users                         |
...
| scheduler                           |
+-------------------------------------+
15 rows in set (0.00 sec)
This is the table we will use to define to which hostgroup a node will belong.
To illustrate how ProxySQL supports MySQL Group Replication, I will use a cluster of 3 nodes:
| name   | ip           |
|--------|--------------|
| mysql1 | 192.168.90.2 |
| mysql2 | 192.168.90.3 |
| mysql3 | 192.168.90.4 |
So first, as usual, we need to add our 3 members to the mysql_servers table:
mysql> insert into mysql_servers (hostgroup_id,hostname,port) values (2,'192.168.90.2',3306);
Query OK, 1 row affected (0.00 sec)

mysql> insert into mysql_servers (hostgroup_id,hostname,port) values (2,'192.168.90.3',3306);
Query OK, 1 row affected (0.00 sec)

mysql> insert into mysql_servers (hostgroup_id,hostname,port) values (2,'192.168.90.4',3306);
Query OK, 1 row affected (0.00 sec)

mysql> select * from mysql_servers;
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname     | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 2            | 192.168.90.2 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.90.3 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 2            | 192.168.90.4 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+--------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
Now we can set up ProxySQL’s behavior with our Group Replication cluster, but before that, let’s check the definition of the new mysql_group_replication_hostgroups table:
ProxySQL> show create table mysql_group_replication_hostgroups\G
*************************** 1. row ***************************
       table: mysql_group_replication_hostgroups
Create Table: CREATE TABLE mysql_group_replication_hostgroups (
    writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
    backup_writer_hostgroup INT CHECK (backup_writer_hostgroup>=0 AND backup_writer_hostgroup<>writer_hostgroup) NOT NULL,
    reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND backup_writer_hostgroup<>reader_hostgroup AND reader_hostgroup>0),
    offline_hostgroup INT NOT NULL CHECK (offline_hostgroup<>writer_hostgroup AND offline_hostgroup<>reader_hostgroup AND backup_writer_hostgroup<>offline_hostgroup AND offline_hostgroup>=0),
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    max_writers INT NOT NULL CHECK (max_writers >= 0) DEFAULT 1,
    writer_is_also_reader INT CHECK (writer_is_also_reader IN (0,1)) NOT NULL DEFAULT 0,
    max_transactions_behind INT CHECK (max_transactions_behind>=0) NOT NULL DEFAULT 0,
    comment VARCHAR,
    UNIQUE (reader_hostgroup),
    UNIQUE (offline_hostgroup),
    UNIQUE (backup_writer_hostgroup))
There are many new columns, let’s have a look at their meaning:
| Column Name | Description |
|---|---|
| writer_hostgroup | the id of the hostgroup that will contain all the members that are writers |
| backup_writer_hostgroup | if the group is running in multi-primary mode, there are multiple writers (read_only=0); if their number exceeds max_writers, the extra nodes are placed in this backup writer hostgroup, which holds the other writer candidates |
| reader_hostgroup | the id of the hostgroup that will contain all the members in read_only |
| offline_hostgroup | the id of the hostgroup that will contain the hosts that are not online or not part of the Group |
| active | when enabled, ProxySQL monitors the Group and moves the servers to the appropriate hostgroups accordingly |
| max_writers | limits the number of nodes in the writer hostgroup when the group is in multi-primary mode |
| writer_is_also_reader | boolean value (0 or 1); when enabled, a node in the writer hostgroup also belongs to the reader hostgroup |
| max_transactions_behind | if the value is greater than 0, it defines how far a node can lag in applying the transactions from the Group before being excluded; see this post for more info |
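To make this more concrete, here is a sketch (not taken from this post's setup) of the row you could insert for the 7-member Multi-Primary scenario described earlier, where only 2 members should actively receive writes and the extra writer candidates should be parked in the backup-writers hostgroup; the hostgroup ids (2, 4, 3, 1) simply reuse the ones from this post:

insert into mysql_group_replication_hostgroups
       (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
        active, max_writers, writer_is_also_reader, max_transactions_behind)
values (2, 4, 3, 1, 1, 2, 0, 100);

With such a row, when more than 2 members are writable, the extra ones are moved to hostgroup 4 (the backup-writers), exactly as described for backup_writer_hostgroup and max_writers above.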
Now that we are (or should be) more familiar with that table, let’s set it up for our 3-node cluster by adding this:
ProxySQL> insert into mysql_group_replication_hostgroups
          (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
           active, max_writers, writer_is_also_reader, max_transactions_behind)
          values (2,4,3,1,1,1,0,100);
We should not forget to save our mysql servers to disk and load them to runtime:
ProxySQL> save mysql servers to disk;
Query OK, 0 rows affected (0.01 sec)

ProxySQL> load mysql servers to runtime;
Query OK, 0 rows affected (0.00 sec)
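A quick select on the configuration table shows the row we just inserted and loaded (the values below are simply the ones from the insert above):

ProxySQL> select * from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 2                | 4                       | 3                | 1                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+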
With the current version of MySQL Group Replication, it’s also important to add a view and its dependencies to the sys schema: addition_to_sys.sql:
# mysql -p < addition_to_sys.sql
So now, from every member of the group, we can run the following statement. ProxySQL bases its internal monitoring on this same view:
mysql> select * from gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       | 40                  | 0                    |
+------------------+-----------+---------------------+----------------------+
We must also not forget to create, in our cluster, the monitor user needed by ProxySQL:
mysql> GRANT SELECT on sys.* to 'monitor'@'%' identified by 'monitor';
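Note that ProxySQL connects to the backends for these checks with the credentials stored in its mysql-monitor_username and mysql-monitor_password variables, which both default to 'monitor'. If you prefer another monitoring account, a sketch of how to align ProxySQL with it from the admin interface would be (the values below simply restate the defaults):

ProxySQL> UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
ProxySQL> UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_password';
ProxySQL> LOAD MYSQL VARIABLES TO RUNTIME;
ProxySQL> SAVE MYSQL VARIABLES TO DISK;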
We can immediately check how ProxySQL has distributed the servers in the hostgroups:
ProxySQL> select hostgroup_id, hostname, status from runtime_mysql_servers;
+--------------+--------------+--------+
| hostgroup_id | hostname     | status |
+--------------+--------------+--------+
| 2            | 192.168.90.2 | ONLINE |
| 3            | 192.168.90.3 | ONLINE |
| 3            | 192.168.90.4 | ONLINE |
+--------------+--------------+--------+
The Writer (Primary-Master) is mysql1 (192.168.90.2, in hostgroup 2) and the others are in the reader hostgroup (id=3).
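If you want to cross-check which member the Group itself considers the Primary, you can run something like this on any member (this uses the group_replication_primary_member status variable available in MySQL 5.7; on 8.0 the MEMBER_ROLE column of replication_group_members gives the same information directly):

mysql> SELECT MEMBER_HOST, MEMBER_PORT
         FROM performance_schema.replication_group_members
        WHERE MEMBER_ID = (SELECT VARIABLE_VALUE
                             FROM performance_schema.global_status
                            WHERE VARIABLE_NAME = 'group_replication_primary_member');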
As you can see, there is no more need to create a scheduler calling an external script with complex rules to move the servers into the right hostgroups.
Now, to use the proxy, it’s exactly as usual: you need to create users associated with a default hostgroup and/or add routing rules, as sketched below.
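As a minimal sketch, assuming a hypothetical application account app_user and the hostgroup ids used in this post (2 for writes, 3 for reads), this could look like:

ProxySQL> INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('app_user', 'app_password', 2);
ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (100, 1, '^SELECT.*FOR UPDATE', 2, 1);
ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (200, 1, '^SELECT', 3, 1);
ProxySQL> LOAD MYSQL USERS TO RUNTIME;
ProxySQL> SAVE MYSQL USERS TO DISK;
ProxySQL> LOAD MYSQL QUERY RULES TO RUNTIME;
ProxySQL> SAVE MYSQL QUERY RULES TO DISK;

SELECTs (except SELECT ... FOR UPDATE) are routed to the readers, everything else goes to the user's default hostgroup (the writers): the usual read/write split pattern with ProxySQL.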
An extra table has also been added for monitoring:
ProxySQL> SHOW TABLES FROM monitor;
+------------------------------------+
| tables                             |
+------------------------------------+
| mysql_server_connect               |
| mysql_server_connect_log           |
| mysql_server_group_replication_log |
| mysql_server_ping                  |
| mysql_server_ping_log              |
| mysql_server_read_only_log         |
| mysql_server_replication_lag_log   |
+------------------------------------+
7 rows in set (0.00 sec)

ProxySQL> select * from mysql_server_group_replication_log order by time_start_us desc limit 5;
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
| hostname     | port | time_start_us    | success_time_us | viable_candidate | read_only | transactions_behind | error |
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
| 192.168.90.4 | 3306 | 1490187314429511 | 1887            | YES              | NO        | 0                   | NULL  |
| 192.168.90.3 | 3306 | 1490187314429141 | 1378            | YES              | YES       | 0                   | NULL  |
| 192.168.90.2 | 3306 | 1490187314428743 | 1478            | NO               | NO        | 0                   | NULL  |
| 192.168.90.4 | 3306 | 1490187309406886 | 3639            | YES              | NO        | 0                   | NULL  |
| 192.168.90.3 | 3306 | 1490187309406486 | 2444            | YES              | YES       | 0                   | NULL  |
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
Enjoy MySQL Group Replication & ProxySQL!
[…] ProxySQL is generally replication agnostic, therefore is suitable for any sort of replication implementation: asynchronous replication, NDB Cluster, Galera / XtraDB Cluster, Group Replication. For NDB Cluster no special support is required: HA is provided by MySQL itself. ProxySQL is able to perform some basic checks to support asynchronous replication, like checking Seconds_Behind_Master: this implementation has a lot of limitations, especially in cases of parallel replication or multi masters, but it generally works for a lot of scenarios. For all other cases, ProxySQL support for clustering solutions can be extended with the use of ProxySQL’s Scheduler: a feature that allows calling custom scripts at regular intervals and performing actions, for example monitoring the backend and reacting to events. proxysql-admin is a very good example of how the Scheduler can be used to perform extensive checks in XtraDB Cluster and reconfigure ProxySQL. On this topic, I strongly recommend attending the session on Percona XtraDB Cluster 5.7 with ProxySQL. Another use case was the use of the Scheduler to support MySQL Group Replication. In version 1.4.0, support for MySQL Group Replication no longer requires the use of the Scheduler, and it is now a built-in feature. […]
Just to be clear: for group replication we do not need query rules anymore, because they are implemented by default?
Hi Wolf,
You still need to have some rules if your Primary-Master node is not part of the default hostgroup assigned to the user.
Cheers,
Hi, and thanks for the answer!
The question is whether the connections are routed in a read/write split way by default, or if it’s just a primary-only connection. 🙂
Kind regards,
Tobias
[…] if you are using ProxySQL with its native MySQL Group Replication support, you have to use the following addition to sys: […]
mysql> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: ac41a42c-9e03-11e7-b155-00163e0038ad
MEMBER_HOST: hadoop-master-ppe
MEMBER_PORT: 3307
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: bb872c4c-9e03-11e7-b31b-00163e0038ad
MEMBER_HOST: hadoop-master-ppe
MEMBER_PORT: 3308
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: d7a278c0-9dd6-11e7-bdeb-00163e0038ad
MEMBER_HOST: hadoop-master-ppe
MEMBER_PORT: 3309
MEMBER_STATE: ONLINE
3 rows in set (0.00 sec)
mysql> select * from mysql_group_replication_hostgroups;
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| writer_hostgroup | backup_writer_hostgroup | reader_hostgroup | offline_hostgroup | active | max_writers | writer_is_also_reader | max_transactions_behind | comment |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
| 2                | 4                       | 3                | 1                 | 1      | 1           | 0                     | 100                     | NULL    |
+------------------+-------------------------+------------------+-------------------+--------+-------------+-----------------------+-------------------------+---------+
1 row in set (0.00 sec)
mysql> select hostgroup_id, hostname, port,status from runtime_mysql_servers;
+--------------+-----------+------+--------+
| hostgroup_id | hostname  | port | status |
+--------------+-----------+------+--------+
| 2            | 127.0.0.1 | 3307 | ONLINE |
| 4            | 127.0.0.1 | 3309 | ONLINE |
| 4            | 127.0.0.1 | 3308 | ONLINE |
+--------------+-----------+------+--------+
mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        | 0                   | 0                    |
+------------------+-----------+---------------------+----------------------+
1 row in set (0.01 sec)
Why did I just get the backup_writer hostgroup?
Hi,
Like that I can’t really tell you why… please check ProxySQL’s log file.
ok, Thanks.
The read_only variable on the MGR nodes is OFF; do I need to adjust it?
And for the following steps, do I need to execute them on every node of the MGR?
mysql -p < addition_to_sys.sql
GRANT SELECT on sys.* to 'monitor'@'%' identified by 'monitor';
For now I did it on every node. I wonder if I made a mistake.
Hi, Lefred:
I found that if I modify read_only on the MGR nodes, the runtime_mysql_servers hostgroup list changes accordingly.
mysql> select hostgroup_id, hostname, port, status from runtime_mysql_servers;
+--------------+-------------------+------+--------+
| hostgroup_id | hostname          | port | status |
+--------------+-------------------+------+--------+
| 2            | hadoop-master-ppe | 3309 | ONLINE |
| 3            | hadoop-master-ppe | 3308 | ONLINE |
| 3            | hadoop-master-ppe | 3307 | ONLINE |
+--------------+-------------------+------+--------+
3 rows in set (0.00 sec)
I think it’s automatic. There is no need to modify it.
[…] you are using ProxySQL as router for your MySQL InnoDB Cluster, as soon as the threshold of transactions_behind is […]
[…] Define and insert ProxySQL HostGroup functionality, as defined at the Lefred’s blog entry: […]
Hi, Lefred:
We use ProxySQL version 1.4.9-percona-1.1 and MySQL 5.7.22.
When we try to execute STOP GROUP_REPLICATION, we find that all the queries issued by ProxySQL to check the MGR status hang and cannot be killed (I can’t upload the picture). We have encountered this situation more than once. ProxySQL does not seem to support setting a timeout for the check SQL.
So the current situation is that we are afraid to use ProxySQL.
[…] ProxySQL uses to see if a node is online, partitioned or not, and if it’s lagging or not (see link1 and […]
[…] Native support for MySQL Group Replication in ProxySQL […]
[…] Native support for MySQL Group Replication in ProxySQL | ProxySQL proxying MySQL Group Replication […]
Can you add this config to your GitHub?
Which config ?
Thanks lefred!
You helped me so much. I was stuck on the external script for so long, and it couldn’t even run successfully.
Hi lefred,
is the creation of the special view in the sys schema with addition_to_sys.sql still needed with ProxySQL v2.5.5 and MySQL 8.0.33?
Hello,
It’s been a while since I’ve used ProxySQL. I haven’t tested the latest version yet, so I won’t be able to answer.
I’m sure you can find people able to answer you on the mysqlcommunity slack.
Cheers.
I am currently testing ProxySQL with MySQL GR and stumbled upon this blog. Based on what I have experienced, there is no need to add the additional views and functions, although it appears necessary to grant SELECT to the monitor user on `performance_schema`.`replication_group_member_stats` and `performance_schema`.`replication_group_members`.