MySQL InnoDB Cluster is the official High Availability solution for and from MySQL.
MySQL InnoDB Cluster is composed of MySQL Server, MySQL Group Replication, MySQL Router and MySQL Shell.
Before InnoDB Cluster, there was no standard solution for MySQL High Availability, and many custom solutions were used (some better than others). But there was a good solution built on principles similar to those of MySQL Group Replication: Galera.
Now that MySQL InnoDB Cluster is mature and easier to orchestrate than Galera, I receive a lot of requests on how to migrate from Galera (or Percona XtraDB Cluster) to MySQL InnoDB Cluster.
Some time ago, I already wrote an article on this process: how to migrate from Galera to MySQL Group Replication.
In this article we will see how we can migrate from Percona XtraDB Cluster 8.0 to MySQL InnoDB Cluster 8.0 in 10 steps!
Initial Situation
The initial situation is illustrated in the picture below: 3 nodes, a load balancer (like HAProxy) and applications connecting to it.
First, some checks to see which version we are using and that all nodes are online and in sync:
mysql> show status like 'wsrep%provider_v%';
+------------------------+-----------------------------------+
| Variable_name          | Value                             |
+------------------------+-----------------------------------+
| wsrep_provider_vendor  | Codership Oy <info@codership.com> |
| wsrep_provider_version | 4.7(r752664d)                     |
+------------------------+-----------------------------------+

mysql> select @@version, @@version_comment\G
*************************** 1. row ***************************
        @@version: 8.0.23-14.1
@@version_comment: Percona XtraDB Cluster (GPL), Release rel14, Revision d3b9a1d, WSREP version 26.4.3

mysql> select host_name, status from performance_schema.pxc_cluster_view;
+-----------+--------+
| host_name | status |
+-----------+--------+
| pxc2      | SYNCED |
| pxc3      | SYNCED |
| pxc1      | SYNCED |
+-----------+--------+
We also need to check that GTIDs are enabled; if they are not, it's highly recommended to enable them:
mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| ON          |
+-------------+
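If gtid_mode is still OFF, GTIDs can be enabled online by walking the server through the intermediate modes. A minimal sketch of the standard MySQL procedure (on a running Galera/PXC cluster, check the vendor documentation first, as wsrep replication adds its own constraints):

mysql> SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = WARN;
mysql> -- check the error log for warnings, then:
mysql> SET @@GLOBAL.ENFORCE_GTID_CONSISTENCY = ON;
mysql> SET @@GLOBAL.GTID_MODE = OFF_PERMISSIVE;
mysql> SET @@GLOBAL.GTID_MODE = ON_PERMISSIVE;
mysql> -- wait until this counter reaches 0 on every server:
mysql> SHOW STATUS LIKE 'Ongoing_anonymous_transaction_count';
mysql> SET @@GLOBAL.GTID_MODE = ON;

Don't forget to also persist gtid_mode=ON and enforce_gtid_consistency=ON in my.cnf.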
The 10 steps to migrate from Galera PXC to MySQL InnoDB Cluster are:
1. Remove one node from the cluster and install MySQL on it
2. Create an InnoDB Cluster from that single node
3. Set up Asynchronous Replication
4. Remove a second node from the cluster and install MySQL on it
5. Join it to the MySQL InnoDB Cluster
6. Install MySQL Router & bootstrap it
7. Point the application to MySQL Router
8. Stop Asynchronous Replication
9. Stop the last PXC node and install MySQL on it
10. Join the last node to the MySQL InnoDB Cluster
Step 1: Remove one node
We will remove pxc3 from the cluster (we always remove a Secondary node) and install MySQL on it. First we stop the service:
[root@pxc3 ~]# systemctl stop mysql
We can see that pxc3 is not part of the cluster anymore, from another node:
mysql> select host_name, status from performance_schema.pxc_cluster_view;
+-----------+--------+
| host_name | status |
+-----------+--------+
| pxc2      | SYNCED |
| pxc1      | SYNCED |
+-----------+--------+
Before installing MySQL on it, we need to remove all the Galera-dedicated configuration settings from my.cnf.
In my case, I commented out all the lines starting with wsrep_ and the lines starting with pxc_, as illustrated below.
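For illustration only, this is the kind of block that ends up commented out (the exact variables and values will of course differ on your system):

# wsrep_provider=/usr/lib64/galera4/libgalera_smm.so
# wsrep_cluster_address=gcomm://pxc1,pxc2,pxc3
# wsrep_cluster_name=pxc-cluster
# wsrep_node_name=pxc3
# wsrep_sst_method=xtrabackup-v2
# pxc_strict_mode=ENFORCING
# pxc_encrypt_cluster_traffic=ON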
Now we can install MySQL on it:
[root@pxc3 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
[root@pxc3 ~]# yum swap percona-xtradb-cluster-* mysql-community-server
...
================================================================================
 Package                               Arch   Version         Repository   Size
================================================================================
Installing:
 mysql-community-server                x86_64 8.0.27-1.el7    mysql80-community      448 M
Removing:
 percona-xtradb-cluster                x86_64 8.0.23-14.1.el7 @pxc-80-release-x86_64 0.0
 percona-xtradb-cluster-client         x86_64 8.0.23-14.1.el7 @pxc-80-release-x86_64 62 M
 percona-xtradb-cluster-server         x86_64 8.0.23-14.1.el7 @pxc-80-release-x86_64 421 M
 percona-xtradb-cluster-shared         x86_64 8.0.23-14.1.el7 @pxc-80-release-x86_64 7.1 M
 percona-xtradb-cluster-shared-compat  x86_64 8.0.23-14.1.el7 @pxc-80-release-x86_64 6.7 M
Installing for dependencies:
 mysql-community-client                x86_64 8.0.27-1.el7    mysql80-community      53 M
 mysql-community-client-plugins        x86_64 8.0.27-1.el7    mysql80-community      5.7 M
 mysql-community-libs                  x86_64 8.0.27-1.el7    mysql80-community      4.7 M
Updating for dependencies:
 mysql-community-common                x86_64 8.0.27-1.el7    mysql80-community      627 k
Removing for dependencies:
 perl-DBD-MySQL                        x86_64 4.023-6.0.1.el7 @ol7_latest            323 k

Transaction Summary
================================================================================
Install  1 Package  (+3 Dependent packages)
Upgrade             ( 1 Dependent package)
Remove   5 Packages (+1 Dependent package)

Total download size: 512 M
Is this ok [y/d/N]:
Once installed, we can start MySQL:
[root@pxc3 ~]# systemctl start mysqld
[root@pxc3 ~]# systemctl status mysqld
● mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-10-19 15:39:58 GMT; 5s ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 9446 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 9474 (mysqld)
   Status: "Server is operational"
   Memory: 422.9M
   CGroup: /system.slice/mysqld.service
           └─9474 /usr/sbin/mysqld

Oct 19 15:39:56 pxc3 systemd[1]: Starting MySQL Server...
Oct 19 15:39:58 pxc3 systemd[1]: Started MySQL Server.
This is what we have at the end of step 1:
For readability, I changed the hostname of the nodes: pxc3 became mysql1.
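If you want to do the same, on systemd based systems the hostname can be changed like this (don't forget to also update /etc/hosts and/or DNS so that all nodes keep resolving each other):

[root@pxc3 ~]# hostnamectl set-hostname mysql1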
Step 2: MySQL InnoDB Cluster Creation
To start this part, we will first install MySQL Shell:
[root@mysql1 ~]# yum install mysql-shell
As you can see above, the hostname has changed. If you have never used MySQL Shell, I recommend reading this article: Reminder When Using MySQL Shell.
We will now connect to our MySQL instance (using localhost) and configure it to be ready for InnoDB Cluster:
[root@mysql1 ~]# mysqlsh root@localhost
This is how it looks:
JS > dba.configureInstance()
MySQL Shell might complain about the user used to connect (in my case root, which can only connect from localhost). I highly recommend creating a dedicated user to manage the cluster (option 2):
I created a user called clusteradmin and accepted all the changes proposed by MySQL Shell.
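The same can also be done non-interactively; a sketch using the clusterAdmin options of dba.configureInstance() (the password is of course just an example):

JS > dba.configureInstance('root@localhost',
       {clusterAdmin: "clusteradmin", clusterAdminPassword: "MyP4ssw0rd!"})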
Then I connected back to my instance using the newly created user:
JS > \c clusteradmin@mysql1
Creating a session to 'clusteradmin@mysql1'
Please provide the password for 'clusteradmin@mysql1': *********
Save password for 'clusteradmin@mysql1'? [Y]es/[N]o/Ne[v]er (default No): yes
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 13 (X protocol)
Server version: 8.0.27 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
And we can create our new MySQL InnoDB Cluster:
JS > cluster=dba.createCluster("MyCluster")
A new InnoDB cluster will be created on instance 'mysql1:3306'.
Validating instance configuration at mysql1:3306...
This instance reports its own address as mysql1:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'mysql1:33061'. Use the localAddress option to override.
Creating InnoDB cluster 'MyCluster' on 'mysql1:3306'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to one server failure.
<Cluster:MyCluster>
And we can check our (current) single node InnoDB Cluster:
JS > cluster.status()
{
"clusterName": "MyCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql1:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"mysql1:3306": {
"address": "mysql1:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.27"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "mysql1:3306"
}
This is the situation at the end of step 2:
Step 3: Set up Asynchronous Replication
Since we removed the server from the Galera cluster, our applications have kept writing to PXC… it's time to catch up!
To achieve this, we will set up asynchronous replication from the Primary node of PXC to the Primary node (for the moment, the only one) of our InnoDB Cluster.
On pxc1 (the Primary), we will create a dedicated user for this replication task:
pxc1-mysql> CREATE USER repl@'%' IDENTIFIED BY 'Rpl001#!' REQUIRE SSL;
pxc1-mysql> GRANT REPLICATION SLAVE on *.* to repl@'%';
On pxc1 we can check the value of GTID_EXECUTED and compare it to the one on mysql1:
pxc1-mysql> select @@gtid_executed\G
*************************** 1. row ***************************
@@gtid_executed: a18338f8-30cc-11ec-85df-53e2b3eb859d:1-888175

mysql1> select @@gtid_executed\G
*************************** 1. row ***************************
@@gtid_executed: a18338f8-30cc-11ec-85df-53e2b3eb859d:1-555107,
fc4f6b42-30f5-11ec-b31e-02001708e20d:1-59,
fc4f6f48-30f5-11ec-b31e-02001708e20d:1
We can see that mysql1 is missing several transactions from the cluster currently in production (PXC), whose UUID is a18338f8-30cc-11ec-85df-53e2b3eb859d.
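To list exactly which transactions mysql1 is still missing, we can feed the GTID_EXECUTED value of pxc1 to the GTID_SUBTRACT() function:

mysql1> SELECT GTID_SUBTRACT('a18338f8-30cc-11ec-85df-53e2b3eb859d:1-888175',
                             @@gtid_executed) AS missing\G
*************************** 1. row ***************************
missing: a18338f8-30cc-11ec-85df-53e2b3eb859d:555108-888175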
It’s time to finish the replication setup and start it. On mysql1, we do:
mysql1> CHANGE REPLICATION SOURCE TO SOURCE_HOST="pxc1", SOURCE_USER="repl",
        SOURCE_PASSWORD="Rpl001#!", SOURCE_AUTO_POSITION=1, SOURCE_SSL=1
        FOR CHANNEL "async_from_pxc";
mysql1> START REPLICA FOR CHANNEL "async_from_pxc";
Now mysql1 will start to catch up…
mysql1> show replica status for channel "async_from_pxc"\G
*************************** 1. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: pxc1
                  Source_User: repl
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: binlog.000002
          Read_Source_Log_Pos: 569358564
               Relay_Log_File: mysql1-relay-bin-async_from_pxc.000002
                Relay_Log_Pos: 47544213
        Relay_Source_Log_File: binlog.000002
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 392505871
              Relay_Log_Space: 224397131
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: Yes
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 9020
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 1
                  Source_UUID: 9ddb7553-30cc-11ec-bae3-00001701a2bb
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Waiting for dependent transaction to commit
           Source_Retry_Count: 86400
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set: a18338f8-30cc-11ec-85df-53e2b3eb859d:555108-1018737
            Executed_Gtid_Set: a18338f8-30cc-11ec-85df-53e2b3eb859d:1-653378,
fc4f6b42-30f5-11ec-b31e-02001708e20d:1-59,
fc4f6f48-30f5-11ec-b31e-02001708e20d:1
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name: async_from_pxc
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:
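Seconds_Behind_Source is still 9020, so we wait. A convenient way to follow the progress is to compare what the channel has received with what has already been applied; when this query returns an empty GTID set, mysql1 has applied everything it has received so far:

mysql1> SELECT GTID_SUBTRACT(received_transaction_set, @@gtid_executed) AS remaining
          FROM performance_schema.replication_connection_status
         WHERE channel_name = 'async_from_pxc'\G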
This is the situation at the end of step 3:
Step 4: Remove a second node
We will now stop mysqld on pxc2 and install MySQL on it, as we did in Step 1.
[root@pxc2 ~]# systemctl stop mysql
[root@pxc2 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
[root@pxc2 ~]# yum swap percona-xtradb-cluster-* mysql-community-server
[root@pxc2 ~]# yum install mysql-shell -y
And here is the situation:
Step 5: Add a new instance to MySQL InnoDB Cluster
We will configure mysql2 (previously called pxc2) and add it to the cluster.
We first need to remove all the Galera and PXC related entries from my.cnf. When done, we can start mysqld:
[root@pxc2 ~]# vi /etc/my.cnf
[root@pxc2 ~]# systemctl start mysqld
We use MySQL Shell to configure the instance (as we did in Step 2) and we create the same user (clusteradmin) with the same password.
When done, from mysql1, we can add the new instance to MySQL InnoDB Cluster like this:
JS > cluster.addInstance("mysql2")
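MySQL Shell will ask which recovery method to use to provision mysql2; as the instance is empty, Clone is the obvious choice. The prompt can also be skipped by passing the option explicitly:

JS > cluster.addInstance("mysql2", {recoveryMethod: "clone"})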
If we check the status of the cluster we will notice an error:
JS > cluster.status()
{
"clusterName": "MyCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql1:3306",
"ssl": "REQUIRED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"mysql1:3306": {
"address": "mysql1:3306",
"instanceErrors": [
"ERROR: Unrecognized replication channel 'async_from_pxc' found. Unmanaged replication channels are not supported."
],
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.27"
},
"mysql2:3306": {
"address": "mysql2:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.27"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "mysql1:3306"
}
We can ignore this error for the moment: it refers to the asynchronous replication channel we created in Step 3, and it will disappear once we remove that channel in Step 8.
This is the situation at the end of Step 5:
Step 6: MySQL Router
We need to install MySQL Router. The recommended location is on the application server(s): MySQL Router should be seen as an extension of the MySQL Connector. I only use one of them in this diagram.
On the app server we do:
[root@app1 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
[root@app1 ~]# yum install -y mysql-router
We can bootstrap MySQL Router:
[root@app1 ~]# mysqlrouter --bootstrap clusteradmin@mysql1 \
               --user mysqlrouter --conf-use-gr-notifications
Please enter MySQL password for clusteradmin:
# Bootstrapping system MySQL Router instance...

- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /etc/mysqlrouter/mysqlrouter.conf
Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'

# MySQL Router configured for the InnoDB Cluster 'MyCluster'

After this MySQL Router has been started with the generated configuration

    $ /etc/init.d/mysqlrouter restart
or
    $ systemctl start mysqlrouter
or
    $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

InnoDB Cluster 'MyCluster' can be reached by connecting to:

# MySQL Classic protocol
- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

# MySQL X protocol
- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449
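For the curious, the routing sections written by the bootstrap to /etc/mysqlrouter/mysqlrouter.conf look roughly like this (trimmed; the exact section names and defaults depend on the Router version):

[routing:MyCluster_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://MyCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:MyCluster_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://MyCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic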
We can start MySQL Router:
[root@app1 ~]# systemctl start mysqlrouter
[root@app1 ~]# systemctl status mysqlrouter
● mysqlrouter.service - MySQL Router
   Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2021-10-19 18:15:57 GMT; 4s ago
 Main PID: 18352 (mysqlrouter)
   Status: "running"
   Memory: 13.7M
   CGroup: /system.slice/mysqlrouter.service
           └─18352 /usr/bin/mysqlrouter

Oct 19 18:15:57 pxc1 systemd[1]: Starting MySQL Router...
Oct 19 18:15:57 pxc1 mysqlrouter[18352]: logging facility initialized, switching l...on
Oct 19 18:15:57 pxc1 systemd[1]: Started MySQL Router.
Hint: Some lines were ellipsized, use -l to show in full.
This is the situation at the end of Step 6:
Step 7: Switch application connections
We point the application to MySQL Router (localhost:6446 for Read/Write traffic) and we can see that transactions with GTIDs using the MySQL InnoDB Cluster's UUID are now being processed:
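A quick sanity check from the app server shows that the Read/Write port indeed lands on the InnoDB Cluster Primary:

[root@app1 ~]# mysql -h 127.0.0.1 -P 6446 -u clusteradmin -p -e "SELECT @@hostname, @@port"
+------------+--------+
| @@hostname | @@port |
+------------+--------+
| mysql1     |   3306 |
+------------+--------+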
This is the current situation:
Step 8: Stop Asynchronous Replication
We can now stop the asynchronous replication and forget about it:
mysql1> stop replica for channel "async_from_pxc";
mysql1> reset replica all for channel "async_from_pxc";
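After RESET REPLICA ALL, the channel and its configuration are completely removed; SHOW REPLICA STATUS on mysql1 should now return an empty set:

mysql1> show replica status\G
Empty set (0.00 sec)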
Step 9: Bye Bye Galera and PXC
Now we can stop PXC on pxc1 and install MySQL Server, as we did in Step 1:
[root@pxc1 ~]# systemctl stop mysql
[root@pxc1 ~]# rpm -ivh https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
[root@pxc1 ~]# yum swap percona-xtradb-cluster-* mysql-community-server
And we are ready for the last step!
Step 10: MySQL InnoDB Cluster with 3 nodes
We need to edit my.cnf and configure the instance as in Step 5. I also changed the hostname from pxc1 to mysql3:
[root@mysql3 ~]# vi /etc/my.cnf
[root@mysql3 ~]# systemctl start mysqld
We can configure the instance:
[root@mysql3 ~]# mysqlsh root@localhost
JS > dba.configureInstance()
Now, on mysql1 (or connecting to it), we can add the new instance to the cluster:
[root@mysql1 ~]# mysqlsh clusteradmin@mysql1
JS > cluster=dba.getCluster()
JS > cluster.addInstance("mysql3")
Finally we can check the status of our new MySQL InnoDB Cluster:
JS > cluster.status()
{
"clusterName": "MyCluster",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql1:3306",
"ssl": "REQUIRED",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"mysql1:3306": {
"address": "mysql1:3306",
"memberRole": "PRIMARY",
"mode": "R/W",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.27"
},
"mysql2:3306": {
"address": "mysql2:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.27"
},
"mysql3:3306": {
"address": "mysql3:3306",
"memberRole": "SECONDARY",
"mode": "R/O",
"readReplicas": {},
"replicationLag": null,
"role": "HA",
"status": "ONLINE",
"version": "8.0.27"
}
},
"topologyMode": "Single-Primary"
},
"groupInformationSourceMember": "mysql1:3306"
}
Conclusion
As you can see it’s very easy to migrate from Galera to MySQL InnoDB Cluster.
Additionally, creating and managing the Cluster is much easier with MySQL Shell. It’s even possible to link multiple InnoDB Clusters and manage everything easily. Check MySQL InnoDB ClusterSet.
Enjoy MySQL, MySQL Shell and MySQL InnoDB Cluster !
Comment: I think this works because Percona is very compatible with MySQL. What should I do if it is a MariaDB Galera Cluster?
Reply: Then you also need to mix in the procedure to migrate from MariaDB to MySQL 8.0; see https://lefred.be/content/how-to-migrate-from-mariadb-to-mysql-8-0/