Migration from MySQL Source-Replica pair to MySQL InnoDB Cluster: howto

MySQL InnoDB Cluster (or just Group Replication) is becoming more and more popular. This solution doesn’t attract only experts anymore. On social media, in forums and other discussions, people are asking me what is the best way to migrate a running environment using traditional asynchronous replication [Source -> Replica(s)] to InnoDB Cluster.

The following procedure is what I currently recommend. The objective of these steps is to reduce the downtime of the database service to a minimum.

We can divide the procedure into 9 steps:

  1. the current situation
  2. preparing the future cluster
  3. data transfer
  4. replication from current system
  5. creation of the cluster with a single instance
  6. adding instances to the cluster
  7. configure the router
  8. test phase
  9. pointing the application to the new solution

1. the current situation

Our application connects to mysql1, which also acts as the replication source for mysql2. mysql3 and mysql4 are spare servers that will be used for the new MySQL InnoDB Cluster.

The final architecture will be a MySQL InnoDB Cluster group of 3 machines: mysql2, mysql3 and mysql4.

2. preparing the future cluster

The current Source-Replica setup must use GTIDs (so at least MySQL 5.6).
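
You can quickly verify this on mysql1, for example:

mysql1 mysql> SELECT @@gtid_mode, @@enforce_gtid_consistency;

Both variables should report ON before going further.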

MySQL >= 5.7.17 must be used for the InnoDB Cluster group members.

Read the Group Replication’s requirements & limitations.

So, on mysql3 and mysql4, we only need to install MySQL >=5.7.17.

There are two different approaches to create such a cluster:

  1. create it manually and then use MySQL Shell to create the metadata needed by MySQL Router
  2. let’s do everything using MySQL Shell

We will of course use the second option.

3. data transfer

As of now, provisioning a new member is, just like provisioning a new replica in any other type of MySQL Replication, a manual operation. Use a backup!

Group Replication is “just” another type of MySQL Replication, therefore we need to use the same concepts. Of course we understand that everybody would benefit from an automatic provisioning process, but we don’t have such a solution yet.

The backup must be consistent and must provide the GTID of the last transaction included in the backup.

You can use any option you want: logical backup with mysqldump, physical backup with MEB or Xtrabackup, etc.
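
If you prefer a logical backup, for example, a consistent dump that also records the GTID information could be taken like this (a sketch; adapt the credentials and options to your environment):

[mysql1 ~]# mysqldump --host=127.0.0.1 --user=root -p --all-databases \
                      --single-transaction --triggers --routines --events \
                      --set-gtid-purged=ON > /tmp/backup.sql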

I will use MEB to illustrate the different operations.

backup:

Let’s take a backup on mysql1:

[mysql1 ~]# mysqlbackup --host=127.0.0.1 --backup-dir=/tmp/backup \
                        --user=root --password=X backup-and-apply-log

Of course we could have taken the backup from mysql2 too.

transfer:

We need to copy the backup from mysql1 to mysql3:

[mysql1 ~]# scp -r /tmp/backup mysql3:/tmp

restore:

Be sure that mysqld is not running on mysql3:

[mysql3 ~]# systemctl stop mysqld

Now, it’s time to restore the backup on mysql3; this consists of a simple copy-back and ownership change:

[mysql3 ~]# mysqlbackup --backup-dir=/tmp/backup --force copy-back
[mysql3 ~]# rm /var/lib/mysql/mysql*-bin*  # just some cleanup
[mysql3 ~]# chown -R mysql. /var/lib/mysql

4. replication from current system

At the end of this section, we will have just a normal asynchronous replica.

We need to verify MySQL’s configuration to be sure that my.cnf is configured properly to act as a replica:

[mysqld]
...
server_id=3
enforce_gtid_consistency = on
gtid_mode = on
log_bin
log_slave_updates

Let’s start mysqld:

[mysql3 ~]# systemctl start mysqld

Now, we need to find the latest GTIDs purged from the backup and set them. Then we will have to set up asynchronous replication and start it. We will then have live data from production replicated to this new replica.

Where to find this information depends on your backup solution.
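
For example, with a mysqldump taken using --set-gtid-purged=ON, the corresponding SET @@GLOBAL.GTID_PURGED statement is written near the top of the dump file itself (assuming the dump sketched earlier):

[mysql3 ~]# grep GTID_PURGED /tmp/backup.sql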

Using MEB, the latest purged GTIDs are found in the file called backup_gtid_executed.sql:

[mysql3 ~]# cat /tmp/backup/meta/backup_gtid_executed.sql
# On a new slave, issue the following command if GTIDs are enabled:
  SET @@GLOBAL.GTID_PURGED='33351000-3fe8-11e7-80b3-08002718d305:1-1002';

# Use the following command if you want to use the GTID handshake protocol:
# CHANGE MASTER TO MASTER_AUTO_POSITION=1;

Let’s connect to mysql on mysql3 and set up replication:

mysql> CHANGE MASTER TO MASTER_HOST="mysql1",
       MASTER_USER="repl_async", MASTER_PASSWORD='Xslave',
       MASTER_AUTO_POSITION=1;
mysql> RESET MASTER;
mysql> SET global gtid_purged="33351000-3fe8-11e7-80b3-08002718d305:1-1002";
mysql> START SLAVE;

The credentials used for replication must already exist on mysql1.
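
If they don’t exist yet, you can create them on mysql1 first, for example (a sketch; choose your own credentials and host restrictions):

mysql1 mysql> CREATE USER 'repl_async'@'%' IDENTIFIED BY 'Xslave';
mysql1 mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl_async'@'%';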

Using SHOW SLAVE STATUS; you should be able to see that the new replica is replicating.

5. creation of the cluster with a single instance

It’s finally time to use MySQL Shell! 😉

You can run THE shell using mysqlsh:

[mysql3 ~]# mysqlsh

Now we can verify if our server is ready to become a member of a new cluster:

mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Please provide the password for 'root@mysql3:3306': 
Validating instance...

The instance 'mysql3:3306' is not valid for Cluster usage.

The following issues were encountered:

 - Some configuration options need to be fixed.

+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable or restart the server |
| master_info_repository           | FILE          | TABLE          | Restart the server                               |
| relay_log_info_repository        | FILE          | TABLE          | Restart the server                               |
| transaction_write_set_extraction | OFF           | XXHASH64       | Restart the server                               |
+----------------------------------+---------------+----------------+--------------------------------------------------+


Please fix these issues, restart the server and try again.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "restart_required": true, 
    "status": "error"
}

During this process, the configuration is parsed to see if all required settings are present.

By default, some settings are missing or need to be changed; we can ask the shell to perform the changes for us:

mysql-js> dba.configureLocalInstance()
Please provide the password for 'root@localhost:3306': 

Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf

Do you want to modify this file? [Y|n]: y
Validating instance...

The configuration has been updated but it is required to restart the server.

{
    "config_errors": [
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "restart_required": true, 
    "status": "error"
}

This command works only if you plan to modify the configuration of the local instance (as the function’s name implies). So when you need to configure multiple members of a cluster, you need to connect to each node independently and run the command locally on it.

As the command output indicated, we now need to restart mysqld to enable the new configuration settings:

[mysql3 ~]# systemctl restart mysqld

We can now connect again with the shell, verify the configuration again and finally create the cluster:

mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password: 
Your MySQL connection id is 6
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Please provide the password for 'root@mysql3:3306': 
Validating instance...

The instance 'mysql3:3306' is valid for Cluster usage
{
    "status": "ok"
}
mysql-js> cluster = dba.createCluster('MyInnoDBCluster')
A new InnoDB cluster will be created on instance 'root@mysql3:3306'.

Creating InnoDB cluster 'MyInnoDBCluster' on 'root@mysql3:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

<Cluster:MyInnoDBCluster>

We can verify the status of our single-node cluster, once again using the shell:

mysql-js> cluster.status()
{
    "clusterName": "MyInnoDBCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql3:3306", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

We now have a running cluster, but it is of course not yet tolerant to any failure.

This is how our architecture looks now:

6. adding instances to the cluster

The goal is to have a Cluster of 3 nodes (or a Group of 3 Members). Now we will add mysql4, using the same backup and the same procedure we used for mysql3.

transfer:

[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp

restore:

[mysql4 ~]# systemctl stop mysqld
[mysql4 ~]# mysqlbackup --backup-dir=/tmp/backup --force copy-back
[mysql4 ~]# rm /var/lib/mysql/mysql*-bin*  # just some cleanup
[mysql4 ~]# chown -R mysql. /var/lib/mysql

This time, there is no need to modify the configuration manually; we will use the shell for that later. So we can simply start mysqld:

[mysql4 ~]# systemctl start mysqld

Let’s use the shell to join the Group:

mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password: 
Your MySQL connection id is 27
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Validating instance...
The instance 'mysql4:3306' is not valid for Cluster usage.
The following issues were encountered:
- Some configuration options need to be fixed.
+----------------------------------+---------------+----------------+--------------------------------------------------+
| Variable                         | Current Value | Required Value | Note                                             |
+----------------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum                  | CRC32         | NONE           | Update the server variable or restart the server |
| enforce_gtid_consistency         | OFF           | ON             | Restart the server                               |
| gtid_mode                        | OFF           | ON             | Restart the server                               |
| log_bin                          | 0             | 1              | Restart the server                               |
| log_slave_updates                | 0             | ON             | Restart the server                               |
| master_info_repository           | FILE          | TABLE          | Restart the server                               |
| relay_log_info_repository        | FILE          | TABLE          | Restart the server                               |
| transaction_write_set_extraction | OFF           | XXHASH64       | Restart the server                               |
+----------------------------------+---------------+----------------+--------------------------------------------------+

Please fix these issues, restart the server and try again.
{
    "config_errors": [
        {
            "action": "server_update",
            "current": "CRC32",
            "option": "binlog_checksum",
            "required": "NONE"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "enforce_gtid_consistency",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "gtid_mode",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "0",
            "option": "log_bin",
            "required": "1"
        },
        {
            "action": "restart",
            "current": "0",
            "option": "log_slave_updates",
            "required": "ON"
        },
        {
            "action": "restart",
            "current": "FILE",
            "option": "master_info_repository",
            "required": "TABLE"
        },
        {
            "action": "restart",
            "current": "FILE",
            "option": "relay_log_info_repository",
            "required": "TABLE"
        },
        {
            "action": "restart",
            "current": "OFF",
            "option": "transaction_write_set_extraction",
            "required": "XXHASH64"
        }
    ],
    "errors": [],
    "restart_required": true,
    "status": "error"
}

We can let the shell configure it:

mysql-js> dba.configureLocalInstance()
Please provide the password for 'root@localhost:3306': 

Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf

Do you want to modify this file? [Y|n]: y
Validating instance...

The configuration has been updated but it is required to restart the server.

{
    "config_errors": [
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "log_bin", 
            "required": "1"
        },
        {
            "action": "restart", 
            "current": "0", 
            "option": "log_slave_updates", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "master_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "FILE", 
            "option": "relay_log_info_repository", 
            "required": "TABLE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "transaction_write_set_extraction", 
            "required": "XXHASH64"
        }
    ], 
    "errors": [], 
    "restart_required": true, 
    "status": "error"
}

Restart the service to enable the changes:

[mysql4 ~]# systemctl restart mysqld

We will again use the same purged GTIDs as before (remember, they are in /tmp/backup/meta/backup_gtid_executed.sql):

mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET global gtid_purged="33351000-3fe8-11e7-80b3-08002718d305:1-1002";

We used the shell here to illustrate how its SQL mode can be used too.

We are ready to add mysql4 to the Group:

mysql-sql> \js
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Validating instance...

The instance 'mysql4:3306' is valid for Cluster usage
{
    "status": "ok"
}

Now we need to connect to a node that is already a member of the group to load the cluster object (i.e. get the metadata of the cluster):

mysql-js> \c root@mysql3:3306
Creating a Session to 'root@mysql3:3306'
Enter password:
Your MySQL connection id is 29
Server version: 5.7.18-log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
mysql-js> cluster = dba.getCluster()
<Cluster:MyInnoDBCluster>

Now we can check if the node that we want to add is consistent with the transactions that have been applied (verify the GTIDs):

mysql-js> cluster.checkInstanceState('root@mysql4:3306')
Please provide the password for 'root@mysql4:3306':
Analyzing the instance replication state...

The instance 'mysql4:3306' is valid for the cluster.
The instance is fully recoverable.

{
    "reason": "recoverable",
    "state": "ok"
}

This is perfect, we can add the new member (mysql4):

mysql-js> cluster.addInstance("root@mysql4:3306")
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@mysql4:3306': 
Adding instance to the cluster ...

The instance 'root@mysql4:3306' was successfully added to the cluster.

Let’s verify this:

mysql-js> cluster.status()
{
    "clusterName": "MyInnoDBCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql3:3306", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

Great!

Now let’s add mysql2, where we don’t need any backup as the data is already present.

The first step on mysql2 is to stop the running replica threads (IO and SQL) and then completely forget about this asynchronous replication:

mysql2 mysql> stop slave;
mysql2 mysql> reset slave all;

Now is the right moment to add it to the cluster using MySQL Shell; first we need to check the configuration:

[mysql2 ~]# mysqlsh
mysql-js> \c root@mysql2:3306
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> dba.configureLocalInstance()
[mysql2 ~]# systemctl restart mysqld

and now we can add mysql2 to the cluster:

[mysql2 ~]# mysqlsh
mysql-js> \c root@mysql3:3306
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.addInstance("root@mysql2:3306")
mysql-js> cluster.status()
{
    "clusterName": "MyInnoDBCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql3:3306", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }
}

It’s possible to run dba.configureLocalInstance() on a running node to add the group replication settings to my.cnf; for example, connecting locally on mysql3:
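
[mysql3 ~]# mysqlsh
mysql-js> \c root@mysql3:3306
mysql-js> dba.configureLocalInstance()

Repeat the same command locally on mysql2 and mysql4 so that their my.cnf files also get the group replication settings persisted.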

7. configure the router

At this point the architecture looks like this:

Let’s configure MySQL Router on mysql1. In fact, the router has the capability to bootstrap itself using the cluster’s metadata; it only needs access to one of the members:

[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root: 
WARNING: The MySQL server does not have SSL configured and metadata used by the router may be transmitted unencrypted.

Bootstrapping system MySQL Router instance...
MySQL Router  has now been configured for the InnoDB cluster 'MyInnoDBCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'MyInnoDBCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447

X protocol connections to cluster 'MyInnoDBCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

The configuration is done and the router will listen on 4 ports after we start it:

[mysql1 ~]# systemctl start mysqlrouter
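
To quickly verify the routing, you can connect through the R/W port; it should reach the current primary, mysql3 in our setup (a quick sanity check):

[mysql1 ~]# mysql -h 127.0.0.1 -P 6446 -u root -p -e "SELECT @@hostname"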

8. test phase

Now you can check your cluster, how fast it can process the replication, test some read queries, etc.
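
For example, you can verify on any member that all three nodes are ONLINE:

mysql> SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;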

As soon as you are happy with your tests, you just need to point the application to the router and it’s done! 😉

9. pointing the application to the new solution

Pointing the application to the router is the only downtime in our story, and it is very fast.
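
At cutover time, once the application has stopped writing to mysql1 and the cluster’s primary has caught up, the asynchronous replication channel can be stopped and removed on mysql3, mirroring what we did on mysql2 (a sketch):

mysql3 mysql> STOP SLAVE;
mysql3 mysql> RESET SLAVE ALL;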

This is the final architecture:

Enjoy MySQL InnoDB Cluster!


25 Comments

  1. Hi, lefred:

    I have a MySQL master + slave cluster, and an MGR cluster.

    I want to make the two clusters work as Master + Slave + Slave(MGR). Is that ok?

    But I didn’t manage to configure it successfully. Can you tell me how to configure the cluster?

    • Hi,
      Do you mean you want to have the following:
      M --asynchronous--> S --asynchronous--> GROUP

      If that’s the case, you need to take a backup from the current Async Slave to provision the 3 nodes in the cluster, and then create the group.

      The Primary member of the group then needs to be configured as a Slave of the Asynchronous Slave (or you can do it in the same order as explained in the blog post).

      So what is needed or mandatory is:
      – the current slave MUST have log-slave-updates enabled
      – only the Primary member of the group can be an asynchronous slave of the current async slave.

      Let me know if you have more questions.

      Regards,

  2. Thanks, lefred.

    I just have a single MySQL node online, so I want to upgrade it to an MGR cluster in a smooth way.

    According to your answer, here are my steps; please help me confirm them:

    1. create a replication account on the slave node;
    2. dump from the slave, and restore on the 3 nodes.
    3. create the MGR group (Multi-Primary)
    4. do the change master operation on a node of the MGR cluster

    but I get an error when doing the change master operation, as follows:

    slave ip is 10.168.22.140

    mgr_mysql1> CHANGE MASTER TO MASTER_HOST='10.168.22.140', MASTER_USER='slave', MASTER_PORT=3306, MASTER_PASSWORD='123456', MASTER_AUTO_POSITION=1;
    ERROR 3081 (HY000): This operation cannot be performed with running replication threads; run STOP SLAVE FOR CHANNEL '' first

    • Hi,
      I will need to test that scenario; usually I set up the asynchronous replication first and then the Group.
      I’ll be traveling, so my test will be delayed, but I will try your scenario and come back with the solution.

  3. Hi lefred:

    I have another question about data synchronization delay: after a write, will each node of the cluster immediately be able to read the data that was just written? If not, how do we solve that problem?

  4. Hi lefred
    The given scenario works perfectly when we have gtid_mode=ON on the MySQL 5.6 node.
    How do we migrate the same setup when we have gtid_mode=OFF (the default) on MySQL 5.6?
    How do we set up an InnoDB cluster that acts as a slave node to a MySQL 5.6 node with gtid_mode=off?
    Can you please help with this scenario?

      • Hi Fred,
        I have tried the same approach, having an intermediate node which acts as a master to the InnoDB cluster.
        The issue I am facing here is that MySQL 5.6 runs with gtid_mode=off, so the transactions that ran on MySQL 5.6 are not generating GTIDs. The intermediate node should be either OFF_PERMISSIVE/ON_PERMISSIVE for the master-slave replication to work.
        In either case, as the transactions from 5.6 have no GTIDs, the intermediate node treats them as anonymous transactions and they are replicated in the same way to the cluster.
        The cluster which is acting as a slave is unable to read the anonymous transactions, as it must have GTIDs on.
        Also, I was wondering how we could achieve backward compatibility, i.e. from a cluster to MySQL 5.6 with GTID off.

        Thanks
        Vinay

        • Hi Fred,
          To keep you posted, we were able to achieve this scenario with the help of streaming binary logs.
          From the Master (which has GTID=off), we had set up an asynchronous 5.7 slave (with GTID=ON_PERMISSIVE).
          From the 5.7 asynchronous slave, we started streaming the binary logs to the CLUSTER instance. As this streaming inserts data into the cluster node as a client, we didn’t face any issue with the process.
          We used this intermediate node for streaming the binary logs, and also to identify whether any statements were violating GTIDs or the 5.7 upgrade.

    • Hi Amit,
      By default it’s not, and even if you can enable certification of tables without a PK, this is absolutely not recommended. It can lead to some minor issues like a different order of the records on the different nodes, and delete operations might fail.

  5. Hi Fred,
    that was ok to set up my InnoDB Cluster starting from a single MySQL source installation.
    Now, with my InnoDB Cluster running, I need to add more databases to it, coming from other MySQL sources.
    What’s the correct way to proceed?
    Could mysqldump to export from the sources and then import into the InnoDB Cluster do it?
    Or mysqlbackup on the sources using --tts and --include-tables, and restore to the InnoDB Cluster?

  6. Hi Fred,

    I am facing the issue below while trying to add the second node (node2) to the cluster; below is the error from the error.log on the primary node1.

    Please let me know if you have come across this issue and what the fix is.
    Mysql version mysql Ver 14.14 Distrib 5.7.28, for Linux (x86_64) using EditLine wrapper
    mysqlsh version mysqlsh Ver 8.0.12 for Linux on x86_64 - for MySQL 8.0.12 (MySQL Community Server (GPL))

    Success message while running the add instance command:
    cluster.addInstance("admin1@10.148.X.X:3306")
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.
    Adding instance to the cluster …
    Validating instance at 10.148.X.X:3306…
    This instance reports its own address as 10.148.X.X
    Instance configuration is suitable.
    WARNING: On instance '10.148.X.X:3306' membership change cannot be persisted since MySQL version 5.7.28 does not support the SET PERSIST command (MySQL version >= 8.0.11 required). Please use the .configureLocalInstance command locally to persist the changes.
    WARNING: On instance '10.148.0.8:3306' membership change cannot be persisted since MySQL version 5.7.28 does not support the SET PERSIST command (MySQL version >= 8.0.11 required). Please use the .configureLocalInstance command locally to persist the changes.
    The instance 'admin1@10.148.0.12:3306' was successfully added to the cluster.

    Error in primary node1 error.log file
    2019-11-30T05:33:54.639103Z 0 [Note] Plugin group_replication reported: 'Members joined the group: 10.148.0.12:3306'
    2019-11-30T05:33:54.639280Z 0 [Note] Plugin group_replication reported: 'Group membership changed to 10.148.0.12:3306, master-rep-1:3306 on view 15750838418789926:18.'
    2019-11-30T05:33:54.706030Z 306 [Note] Start binlog_dump to master_thread_id(306) slave_server(7), pos(, 4)
    2019-11-30T05:34:54.706875Z 306 [Note] Aborted connection 306 to db: 'unconnected' user: 'mysql_innodb_cluster_r1723103588' host: 'master-rep-4.c.aliz-tiket-sandbox.internal' (failed on flush_net())
    2019-11-30T05:34:54.886743Z 308 [Note] Start binlog_dump to master_thread_id(308) slave_server(7), pos(, 4)
    2019-11-30T05:35:24.887365Z 308 [Note] Aborted connection 308 to db: 'unconnected' user: 'mysql_innodb_cluster_r1723103588' host: 'master-rep-4.c.aliz-tiket-sandbox.internal' (failed on flush_net())
    2019-11-30T05:35:32.925308Z 310 [Note] Access denied for user 'admin1'@'localhost' (using password: YES)
    2019-11-30T05:35:55.059077Z 312 [Note] Start binlog_dump to master_thread_id(312) slave_server(7), pos(, 4)
    2019-11-30T05:36:25.059622Z 312 [Note] Aborted connection 312 to db: 'unconnected' user: 'mysql_innodb_cluster_r1723103588' host: 'master-rep-4.c.aliz-tiket-sandbox.internal' (failed on flush_net())
    2019-11-30T05:36:55.229646Z 314 [Note] Start binlog_dump to master_thread_id(314) slave_server(7), pos(, 4)
    2019-11-30T05:37:50.005140Z 314 [Note] Aborted connection 314 to db: 'unconnected' user: 'mysql_innodb_cluster_r1723103588' host: 'master-rep-4.c.aliz-tiket-sandbox.internal' (failed on flush_net())
    2019-11-30T05:37:55.371359Z 315 [Note] Start binlog_dump to master_thread_id(315) slave_server(7), pos(, 4)

    • Hi, please try to use the latest shell version (8.0.18) and I encourage you to also use the latest version of MySQL. InnoDB Cluster is much better in MySQL 8.0 than in 5.7. In case you want to stay on 5.7 anyway, how do you set up your instances?

      • Hi Fred,

        Thank you for the response.
        I have managed to set up MySQL Group Replication with 3 nodes on MySQL 5.7.
        But when I try to reboot one of the nodes, it does not join the group automatically, even though the parameter to join on boot is set to ON in my .cnf file.
        As advised, I will give setting up GR on the latest MySQL version a try.

        Cheers
        Yogesh

  7. I’m hoping you can help me. I’ve reached out to Oracle and they don’t seem to be as familiar with InnoDB cluster as you are. I have two environments; I’ll call them currentprod and newprod.

    currentprod is being used and is running the following replication setup:
    Svr1 (Master) – Svr2 (setup as a slave to Svr1 and a Master to Svr3) – Svr3 (setup as a slave to Svr2 and a Master to Svr4) – Svr4 (setup as a slave to Svr3 and serves as a backup server so it is also writing to binary logs). GTID_MODE is set to OFF in currentprod on all servers.

    newprod I set up as an InnoDB cluster. It is running great!… except that I need to keep updating it with prod data until we switch over, which led me to your article.

    I’m running into error 1782 when I try to replay a binary log on the R/W node of the cluster, indicating that gtid_next cannot be anonymous… so I’m assuming the differences in the binary logs are the problem.

    I’m wondering if you have suggestions on how I can recover from this without starting from scratch?

    Do you think the following could work?
    I was thinking I could remove the cluster, then process all the currentprod binary logs on the node I’ll use to clone the dbs to catch it up…. Then, on the currentprod backup server, stop replication, change gtid_mode to ON, and start replication on the backup server again. Then follow the guidance in your article for setting up asynchronous replication to the cluster… set up the cluster again and clone the dbs.

    Sorry for the long comment, but I’m at a loss on what I can do.

    • Oracle MySQL support is very well informed and trained on MySQL InnoDB Cluster… I don’t know who your contact was.
      However, InnoDB Cluster needs GTIDs. If you are not able to enable GTIDs on your source, you currently don’t have any solution… however, the next release of MySQL will be able to handle this… be patient, MySQL 8.0.23 is around the corner 😉

