MySQL InnoDB Cluster with 2 Data Centers for Disaster Recovery: howto


As you know, MySQL InnoDB Cluster is a High Availability solution for MySQL. However, more and more people are trying to use it as a Disaster Recovery solution spanning 2 data centers. Natively, this is not yet supported, but such a setup is already possible if we accept the following points:

  • human interaction is required in case of Disaster Recovery which, in my own experience, is often acceptable
  • human interaction is required if the Primary-Master acting as asynchronous slave leaves its group (crash, network problem, …) or becomes a secondary

These are not big constraints and it’s relatively easy to deal with them.

The Architecture

The situation is as follows:

  • 2 data centers (one active, one inactive, only used for disaster recovery)
  • 2 MySQL InnoDB Clusters (one in each DC)
  • 3 members in each cluster (to deal with local failure)
  • 1 router in each DC used for asynchronous replication
  • application server(s) with local router in each DC (only those in the active DC should be available, once again, this is a disaster recovery solution)

Setup

The first step is to take a backup of the current production server and restore it on the cluster (if you need a live migration with minimal downtime, please check this post and/or this tutorial).

Example of backup using MySQL Enterprise Backup, aka meb:

/opt/meb/bin/mysqlbackup \
  --host 127.0.0.1 \
  --backup-dir /vagrant/backup/ \
    backup-and-apply-log

This machine had the following set of GTIDs executed:

mysql> show master status\G
*************************** 1. row ***************************
             File: binlog.000003
         Position: 2019696
     Binlog_Do_DB: 
 Binlog_Ignore_DB: 
Executed_Gtid_Set: aa410ab6-edb1-11e8-9e34-08002718d305:1-30

Now we will restore the backup on all the servers/nodes in DC1 (mysql1, mysql2 and mysql3) and DC2 (mysql4, mysql5 and mysql6):

# systemctl stop mysqld
# rm -rf /var/lib/mysql/*
# /opt/meb/bin/mysqlbackup --backup-dir=/vagrant/backup/ copy-back
# chown -R mysql. /var/lib/mysql
# systemctl start mysqld

Let’s start MySQL Shell (it can run remotely or on one of the servers, it doesn’t matter):

# mysqlsh
MySQL>JS> dba.configureInstance('clusteradmin@mysql1')
MySQL>JS> dba.configureInstance('clusteradmin@mysql2')
MySQL>JS> dba.configureInstance('clusteradmin@mysql3')
MySQL>JS> dba.configureInstance('clusteradmin@mysql4')
MySQL>JS> dba.configureInstance('clusteradmin@mysql5')
MySQL>JS> dba.configureInstance('clusteradmin@mysql6')

clusteradmin is a user that was already created on the production server from which we took the backup. The user was created with the following statements:

mysql> create user clusteradmin identified by 'MySQL8isGreat';
mysql> grant all privileges on *.* to 'clusteradmin'@'%' with grant option;

The dba.configureInstance() method will ask you to modify 4 variables on a freshly installed MySQL 8.0.13; please confirm and let the Shell restart mysqld.

The 4 variables are:

+--------------------------+---------------+----------------+
| Variable                 | Current Value | Required Value |
+--------------------------+---------------+----------------+
| binlog_checksum          | CRC32         | NONE           |
| enforce_gtid_consistency | OFF           | ON             |
| gtid_mode                | OFF           | ON             |
| server_id                | 1             |                |
+--------------------------+---------------+----------------+
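If you prefer to prepare the instances yourself rather than letting the Shell do it, the same changes can be applied manually. This is only a sketch of roughly what dba.configureInstance() persists; the server_id value below is an arbitrary example and must be unique per instance:

```shell
# Persist the settings dba.configureInstance() would change on a fresh
# 8.0.13 install. SET PERSIST_ONLY writes to mysqld-auto.cnf without
# touching the running values; they take effect at the next restart
# (gtid_mode cannot jump from OFF to ON on a running server anyway).
mysql -h mysql1 -u clusteradmin -p <<'EOF'
SET PERSIST_ONLY binlog_checksum = NONE;
SET PERSIST_ONLY enforce_gtid_consistency = ON;
SET PERSIST_ONLY gtid_mode = ON;
SET PERSIST_ONLY server_id = 11; -- example value, unique per instance
RESTART;
EOF
```

Repeat on every node (mysql1 … mysql6) with a distinct server_id.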

Cluster Creation

When the restart operation is performed, it’s time to connect to one of the nodes (I use mysql1) and create the first cluster:

MySQL>JS> \c clusteradmin@mysql1
Creating a session to 'clusteradmin@mysql1'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 8 (X protocol)
Server version: 8.0.13 MySQL Community Server - GPL
No default schema selected; type \use  to set one.

MySQL>JS> cluster=dba.createCluster('clusterDC1')
A new InnoDB cluster will be created on instance 'clusteradmin@mysql1:3306'.

Validating instance at mysql1:3306...

This instance reports its own address as mysql1

Instance configuration is suitable.
Creating InnoDB cluster 'clusterDC1' on 'clusteradmin@mysql1:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

Let’s verify:

JS> cluster.status()
{
    "clusterName": "clusterDC1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

Adding Instances

We can now add the other members to our cluster:

JS> cluster.addInstance('clusteradmin@mysql2')
JS> cluster.addInstance('clusteradmin@mysql3')

Now we have our first cluster ready (the one in DC1):

JS> cluster.status()
{
    "clusterName": "clusterDC1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql2:3306": {
                "address": "mysql2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

Adding members of DC2

Now comes the little trick: we must also add the future nodes of DC2 to the current cluster in DC1. This is only temporary, but it’s needed to avoid playing with replication filters for the InnoDB Cluster metadata (which wouldn’t work completely anyway).
So let’s add mysql4, mysql5 and mysql6 too:

JS> cluster.addInstance('clusteradmin@mysql4')
JS> cluster.addInstance('clusteradmin@mysql5')
JS> cluster.addInstance('clusteradmin@mysql6')

And we can verify the status:

JS> cluster.status()
{
    "clusterName": "clusterDC1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to 2 failures.", 
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql2:3306": {
                "address": "mysql2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql5:3306": {
                "address": "mysql5:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql6:3306": {
                "address": "mysql6:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

Prepare the replication user

On the Primary-Master, in our case mysql1, we need to create a user that will be used for the asynchronous replication between the clusters:

mysql> create user 'repl'@'%' identified by 'replication' REQUIRE SSL;
mysql> grant replication slave on *.* to 'repl'@'%';
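Because the account was created with REQUIRE SSL, it’s worth checking that it can actually connect over TLS from a DC2 node before configuring replication. A quick sanity check (hostnames and credentials as used above):

```shell
# From mysql4: confirm the repl user can reach mysql1 over TLS.
# --ssl-mode=REQUIRED makes the client fail instead of silently
# falling back to an unencrypted connection.
mysql -h mysql1 -P 3306 -u repl -preplication --ssl-mode=REQUIRED \
      -e "SHOW STATUS LIKE 'Ssl_cipher';"
```

A non-empty Ssl_cipher value means the TLS handshake worked.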

MySQL Router Bootstrap

As MySQL Router doesn’t yet allow bootstrapping when more than one cluster is present in the InnoDB Cluster metadata, this is the right moment to perform this operation (don’t try to perform it later, as it will fail):

# mysqlrouter --bootstrap clusteradmin@mysql1 --user mysqlrouter

Note: I've entered a Feature Request #93302.

Second InnoDB Cluster Creation

Before creating the second cluster, we need to remove the 3 DC2 nodes from the current cluster (clusterDC1):

JS> cluster.removeInstance('clusteradmin@mysql4')
JS> cluster.removeInstance('clusteradmin@mysql5')
JS> cluster.removeInstance('clusteradmin@mysql6')

Now clusterDC1 is back to the desired state:

JS> cluster.status()
{
    "clusterName": "clusterDC1", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql2:3306": {
                "address": "mysql2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql3:3306": {
                "address": "mysql3:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

But on the nodes of DC2, we still have the InnoDB Cluster metadata information:

SQL> select * from mysql_innodb_cluster_metadata.clusters\G
*************************** 1. row ***************************
         cluster_id: 1
       cluster_name: clusterDC1
 default_replicaset: 1
        description: Default Cluster
mysql_user_accounts: NULL
            options: null
         attributes: {"default": true}

It’s time to connect via MySQL Shell to mysql4 and create the second cluster (clusterDC2):

JS> \c clusteradmin@mysql4
Creating a session to 'clusteradmin@mysql4'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 34 (X protocol)
Server version: 8.0.13 MySQL Community Server - GPL
No default schema selected; type \use  to set one.

MySQL>JS> cluster2=dba.createCluster('clusterDC2')
A new InnoDB cluster will be created on instance 'clusteradmin@mysql4:3306'.

The MySQL instance at 'mysql4:3306' currently has the super_read_only 
system variable set to protect it from inadvertent updates from applications. 
You must first unset it to be able to perform any changes to this instance. 
For more information see: https://dev.mysql.com/doc/refman/en/server-system-variables.html#sysvar_super_read_only.

Note: there are open sessions to 'mysql4:3306'.
You may want to kill these sessions to prevent them from performing unexpected updates: 

1 open session(s) of 'root@localhost'. 

Do you want to disable super_read_only and continue? [y/N]: y

Validating instance at mysql4:3306...

This instance reports its own address as mysql4

Instance configuration is suitable.
Creating InnoDB cluster 'clusterDC2' on 'clusteradmin@mysql4:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

As you can see, mysql4 was read-only!

We have our second cluster in the cluster2 object:

JS> cluster2.status()
{
    "clusterName": "clusterDC2", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql4:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql4:3306"
}

We can verify in the InnoDB Cluster Metadata that we have 2 clusters now:

SQL> select * from mysql_innodb_cluster_metadata.clusters\G
*************************** 1. row ***************************
         cluster_id: 1
       cluster_name: clusterDC1
 default_replicaset: 1
        description: Default Cluster
mysql_user_accounts: NULL
            options: null
         attributes: {"default": true}
*************************** 2. row ***************************
         cluster_id: 2
       cluster_name: clusterDC2
 default_replicaset: 2
        description: Default Cluster
mysql_user_accounts: NULL
            options: null
         attributes: {"default": true}
2 rows in set (0.0026 sec)

All good!

Before being able to add mysql5 and mysql6 to clusterDC2, we first need to set up and start asynchronous replication from clusterDC1 to mysql4. This is required because, for the moment, mysql5 and mysql6 have extra transactions (GTIDs) that mysql4 is not yet aware of. Indeed, when mysql4 was removed from the cluster, the metadata was modified and that modification was also replicated via the group to mysql5 and mysql6. The same happened when mysql5 was removed.
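You can see this gap for yourself by comparing executed GTID sets. A hedged sketch (the exact UUIDs in your output will differ from this environment):

```shell
# GTID_SUBSET(a, b) returns 1 when every GTID in set a is also in b.
# If mysql5's set is NOT a subset of mysql4's, mysql4 is missing
# transactions and cannot yet join a group seeded from mysql4.
GTID4=$(mysql -h mysql4 -u clusteradmin -p -N -e "SELECT @@GLOBAL.gtid_executed")
GTID5=$(mysql -h mysql5 -u clusteradmin -p -N -e "SELECT @@GLOBAL.gtid_executed")
mysql -h mysql4 -u clusteradmin -p -N \
      -e "SELECT GTID_SUBSET('$GTID5', '$GTID4') AS mysql4_has_everything"
```

Once asynchronous replication has caught up, this query should return 1.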

Setup & start asynchronous replication from DC1 to DC2

Setting up asynchronous replication is very easy when GTIDs are used. On mysql4, we only need to run the following 2 commands:

mysql> CHANGE MASTER TO MASTER_HOST='mysql1', MASTER_PORT=3306, MASTER_USER='repl', 
       MASTER_PASSWORD='replication', MASTER_AUTO_POSITION=1, MASTER_SSL=1; 
mysql> START SLAVE;

And we can check that everything is working as expected:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql1
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000006
          Read_Master_Log_Pos: 54330
               Relay_Log_File: mysql4-relay-bin.000002
                Relay_Log_Pos: 2875
        Relay_Master_Log_File: binlog.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 54330
              Relay_Log_Space: 3076
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: Yes
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1247156063
                  Master_UUID: 5208b04f-edb2-11e8-b3a1-08002718d305
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 9927c511-edb2-11e8-8e54-08002718d305:51-58
            Executed_Gtid_Set: 4ac4ffd9-edb2-11e8-a836-08002718d305:1-4,
5208b04f-edb2-11e8-b3a1-08002718d305:1-12,
9927c511-edb2-11e8-8e54-08002718d305:1-58,
aa410ab6-edb1-11e8-9e34-08002718d305:1-30,
ba9c231d-edb3-11e8-8d74-08002718d305:1-2
                Auto_Position: 1
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0

mysql4 retrieved all the changes and now also has the missing GTIDs.

Adding the other nodes

We can now add mysql5 and mysql6 to clusterDC2:

JS> cluster2.addInstance('clusteradmin@mysql5')
JS> cluster2.addInstance('clusteradmin@mysql6')

And we can check clusterDC2‘s status:

JS> cluster2.status()
{
    "clusterName": "clusterDC2", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql4:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "mysql4:3306": {
                "address": "mysql4:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql5:3306": {
                "address": "mysql5:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql6:3306": {
                "address": "mysql6:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql4:3306"
}

Asynchronous replication to clusterDC1

We now have our 2 clusters running. It’s time to replicate clusterDC2’s metadata back to clusterDC1.

On mysql1 (the Primary-Master, see status()), we need to run the following statements:

mysql> CHANGE MASTER TO MASTER_HOST='mysql4', MASTER_PORT=3306, MASTER_USER='repl', 
       MASTER_PASSWORD='replication', MASTER_AUTO_POSITION=1, MASTER_SSL=1; 
mysql> START SLAVE;

That slave thread could be stopped, since you SHOULD NOT WRITE in DC2 anyway. But if you have a lot of metadata changes and you rotate your binary logs, you might end up with errant transactions in case of failover. So I recommend leaving it on.
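Before any failover, you can check for errant transactions by comparing what DC2 has executed against what DC1 knows about. A hedged sketch along the same lines as above:

```shell
# GTID_SUBTRACT(a, b) returns the GTIDs present in a but not in b.
# Anything returned here was committed in DC2 but never reached DC1:
# candidate errant transactions that would complicate a failover.
GTID_DC1=$(mysql -h mysql1 -u clusteradmin -p -N -e "SELECT @@GLOBAL.gtid_executed")
GTID_DC2=$(mysql -h mysql4 -u clusteradmin -p -N -e "SELECT @@GLOBAL.gtid_executed")
mysql -h mysql1 -u clusteradmin -p -N \
      -e "SELECT GTID_SUBTRACT('$GTID_DC2', '$GTID_DC1') AS errant_transactions"
```

An empty result means both DCs agree on the transaction history.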

MySQL Router Extra Configuration

We still have to modify the MySQL Router configuration manually. We bootstrapped it earlier; now it’s time to change the configuration.

We will have to create two configurations for the router: one for all the application servers in DC1 and one for those in DC2.

/etc/mysqlrouter/mysqlrouter.conf in DC1

# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=30
read_timeout=30

[logger]
level = INFO

[metadata_cache:clusterDC1]
router_id=1
bootstrap_server_addresses=mysql://mysql1:3306,mysql://mysql2:3306,mysql://mysql3:3306
user=mysql_router1_8qm3jeohgy4e
metadata_cluster=clusterDC1
ttl=0.5

[routing:clusterDC1_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://clusterDC1/default?role=PRIMARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC1_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://clusterDC1/default?role=SECONDARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC1_default_x_rw]
bind_address=0.0.0.0
bind_port=64460
destinations=metadata-cache://clusterDC1/default?role=PRIMARY
routing_strategy=round-robin
protocol=x

[routing:clusterDC1_default_x_ro]
bind_address=0.0.0.0
bind_port=64470
destinations=metadata-cache://clusterDC1/default?role=SECONDARY
routing_strategy=round-robin
protocol=x

/etc/mysqlrouter/mysqlrouter.conf in DC2

# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=30
read_timeout=30

[logger]
level = INFO

[metadata_cache:clusterDC2]
router_id=1
bootstrap_server_addresses=mysql://mysql4:3306,mysql://mysql5:3306,mysql://mysql6:3306
user=mysql_router1_8qm3jeohgy4e
metadata_cluster=clusterDC2
ttl=0.5

[routing:clusterDC2_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://clusterDC2/default?role=PRIMARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC2_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://clusterDC2/default?role=SECONDARY
routing_strategy=round-robin
protocol=classic

[routing:clusterDC2_default_x_rw]
bind_address=0.0.0.0
bind_port=64460
destinations=metadata-cache://clusterDC2/default?role=PRIMARY
routing_strategy=round-robin
protocol=x

[routing:clusterDC2_default_x_ro]
bind_address=0.0.0.0
bind_port=64470
destinations=metadata-cache://clusterDC2/default?role=SECONDARY
routing_strategy=round-robin
protocol=x

Of course, the configuration file, the key (/etc/mysqlrouter/mysqlrouter.key) and the keyring file (/var/lib/mysqlrouter/keyring) must be copied to all routers and their ownership changed to the mysqlrouter user.

MySQL Router for Replication

As explained in this post, we can also use MySQL Router in case the Primary-Master acting as asynchronous master dies. If it’s the asynchronous slave that dies, you need to manually promote the new Primary-Master as asynchronous slave.

On a dedicated machine, or on all nodes, you need to set up MySQL Router to point to the other DC. So in DC1 you need the same router configuration as the application servers in DC2, and the opposite in DC2.

For example, I used mysql1 as router for clusterDC2 and mysql4 as router for clusterDC1.

[root@mysql1 vagrant]# cp mysqlrouter.key /etc/mysqlrouter/
[root@mysql1 vagrant]# cp dc2_mysqlrouter.conf /etc/mysqlrouter/mysqlrouter.conf 
cp: overwrite ‘/etc/mysqlrouter/mysqlrouter.conf’? y
[root@mysql1 vagrant]# mkdir /var/lib/mysqlrouter
[root@mysql1 vagrant]# cp keyring /var/lib/mysqlrouter/
[root@mysql1 vagrant]# chown mysqlrouter -R /var/lib/mysqlrouter/
[root@mysql1 vagrant]# chown mysqlrouter /etc/mysqlrouter/mysqlrouter.*
[root@mysql1 vagrant]# systemctl start mysqlrouter
[root@mysql1 vagrant]# systemctl status mysqlrouter
● mysqlrouter.service - MySQL Router
   Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-11-22 12:32:50 UTC; 4s ago
 Main PID: 15403 (main)
   CGroup: /system.slice/mysqlrouter.service
           └─15403 /usr/bin/mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

Nov 22 12:32:50 mysql1 systemd[1]: Started MySQL Router.
Nov 22 12:32:50 mysql1 systemd[1]: Starting MySQL Router...

And the same on mysql4 using the config of DC1.

We can test; we need to use port 6446 to reach the Primary-Master of each cluster:

[root@mysql1 ~]# mysql -h mysql1 -P 6446 -u clusteradmin -p -e "select @@hostname"
Enter password: 
+------------+
| @@hostname |
+------------+
| mysql4     |
+------------+
[root@mysql1 ~]# mysql -h mysql4 -P 6446 -u clusteradmin -p -e "select @@hostname"
Enter password: 
+------------+
| @@hostname |
+------------+
| mysql1     |
+------------+

Perfect! Now we just need to change the asynchronous replication so that it goes through the local router.

Let’s start on mysql1:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='mysql1', MASTER_PORT=6446, MASTER_USER='repl', 
       MASTER_PASSWORD='replication', MASTER_AUTO_POSITION=1, MASTER_SSL=1; 
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: mysql1
                  Master_User: repl
                  Master_Port: 6446
                Connect_Retry: 60
              Master_Log_File: binlog.000006
          Read_Master_Log_Pos: 37341
               Relay_Log_File: mysql1-relay-bin.000002
                Relay_Log_Pos: 391
        Relay_Master_Log_File: binlog.000006
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                          ...

We need to perform the same change on mysql4, which must use mysql4 as master on port 6446 (its local router, which routes to clusterDC1’s Primary-Master).
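And should a real disaster strike DC1, the manual promotion of DC2 boils down to a few steps. This is only a sketch, assuming DC1 is unreachable and the errant-transaction check above came back clean:

```shell
# Promote clusterDC2 after losing DC1 (run on the DC2 Primary-Master).
mysql -h mysql4 -u clusteradmin -p <<'EOF'
-- stop pulling from the dead DC1 and forget the async channel
STOP SLAVE;
RESET SLAVE ALL;
EOF
# Then point the application (or enable the DC2 application servers and
# their routers) at clusterDC2. Once DC1 is rebuilt, re-provision it from
# a DC2 backup and set up asynchronous replication in the other direction.
```

This is exactly the human interaction mentioned at the start of the post.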

Conclusion

As you can see, it’s possible to use MySQL InnoDB Cluster as a High Availability (HA) solution and extend it for Disaster Recovery (DR) using asynchronous replication.

Please keep in mind that it is highly recommended to write in only one DC at a time.
