MySQL InnoDB Cluster: MySQL Shell starter guide

Earlier this week, MySQL Shell 1.0.8 was released. This is the first Release Candidate of this major piece of MySQL InnoDB Cluster.

Some commands have changed and some new ones have been added.

For example the following useful commands were added:

  • dba.checkInstanceConfiguration()
  • cluster.checkInstanceState()
  • dba.rebootClusterFromCompleteOutage()

So let's have a look at how to use the new MySQL Shell to create a MySQL InnoDB Cluster.

Action Plan

We have 3 blank Linux servers: mysql1, mysql2 and mysql3, all running an RPM-based Linux 7 distribution (Oracle Linux 7, CentOS 7, …).

We will install the required MySQL yum repositories and the needed packages.

We will use MySQL Shell to set up our MySQL InnoDB Cluster.

Packages

To be able to install our cluster, we will first install the repository from the MySQL release package. For more information related to MySQL’s installation or if you are using another OS, please check our online documentation.

On all 3 servers, we do:

# rpm -ivh https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
# yum install -y mysql-community-server

The commands above install the MySQL Community yum repositories and MySQL Community Server 5.7.17, the latest GA version at the time of this post.
Now we have to install the Shell. As this tool is not yet GA, we need to use another repository that has been installed but not enabled: mysql-tools-preview

# yum install -y mysql-shell --enablerepo=mysql-tools-preview
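
If you want to double-check which Shell version ended up installed (the exact version will depend on when you run this), you can ask the Shell itself:

# mysqlsh --version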

We are done with the installation. Now let’s initialize MySQL and start it.

Starting MySQL

Before being able to start MySQL, we need to create all necessary folders and system tables. This process is called MySQL initialization. Let's proceed without generating a temporary root password as it will be easier and faster for the demonstration. However, I highly recommend using a strong root password.

When the initialization is done, we can start MySQL. So on all the future nodes, you can proceed like this:

# mysqld --initialize-insecure -u mysql --datadir /var/lib/mysql/
# systemctl start mysqld
# systemctl status mysqld
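
As we initialized with --initialize-insecure, root can log in locally without a password. A quick sanity check could look like this (just an example; any simple query will do):

# mysql -u root -e "SELECT VERSION();"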

MySQL InnoDB Cluster Instances Configuration

We now have everything we need to start working in the MySQL Shell to configure all the members of our InnoDB Cluster.

First, we will check the configuration of one of our MySQL servers. Some changes are required; we will perform them using the Shell and then restart mysqld:

# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
...
mysql-js> dba.configureLocalInstance()
... here, please create a dedicated user and password to administer the cluster (option 2) ...
mysql-js> \q

# systemctl restart mysqld

Now MySQL has all the mandatory settings required to run Group Replication. We can verify the configuration again in the Shell with the dba.checkInstanceConfiguration() function.
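
If all the settings are in place, the check should now report the instance as ready; the returned document looks roughly like this (the exact wording can vary between Shell versions):

mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
...
{
    "status": "ok"
}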

We now have to proceed the same way on all the other nodes. Please use the same credentials when you create the user to manage your cluster; I used 'fred'@'%' as an example. As you can't configure a MySQL Server remotely, you will have to run the Shell locally on every node to run dba.configureLocalInstance() and restart mysqld.
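
For reference, the dedicated account created by option 2 is roughly equivalent to running something like the following SQL yourself (the exact privileges granted by the Shell may differ, and the password is of course just a placeholder):

mysql> CREATE USER 'fred'@'%' IDENTIFIED BY 'MyStrongPassword!';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'fred'@'%' WITH GRANT OPTION;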

MySQL InnoDB Cluster Creation

Now that all the nodes have been restarted with the correct configuration, we can create the cluster. We will connect to one of the instances and create the cluster, again using the Shell. I did it on mysql1 and used its IP address, because its hostname also resolves to the loopback interface:

# mysqlsh
mysql-js> var i1='fred@192.168.90.2:3306'
mysql-js> var i2='fred@mysql2:3306'
mysql-js> var i3='fred@mysql3:3306'
mysql-js> shell.connect(i1)
mysql-js> var cluster=dba.createCluster('mycluster')
mysql-js> cluster.status()
...

We can now validate that the dataset on the other instances is correct (no extra transactions executed). This is done by comparing the GTIDs. It can be done remotely, so I will keep using the MySQL Shell session I've opened on mysql1:

mysql-js> cluster.checkInstanceState(i2)
mysql-js> cluster.checkInstanceState(i3)
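
On healthy, freshly provisioned instances the result is a small JSON document similar to this (the reason can be "new" or "recoverable" depending on the GTID sets):

...
{
    "reason": "new",
    "state": "ok"
}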

Once the validation passes, it's time to add the two other nodes to our cluster:

mysql-js> cluster.addInstance(i2)
mysql-js> cluster.addInstance(i3)
mysql-js> cluster.status()

Perfect! We used MySQL Shell to create this MySQL InnoDB Cluster.
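
For comparison, once the three members are ONLINE, cluster.status() reports something along these lines (trimmed here, and the details vary by version):

mysql-js> cluster.status()
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "192.168.90.2:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "192.168.90.2:3306": { "address": "192.168.90.2:3306", "mode": "R/W", "role": "HA", "status": "ONLINE" },
            "mysql2:3306": { "address": "mysql2:3306", "mode": "R/O", "role": "HA", "status": "ONLINE" },
            "mysql3:3306": { "address": "mysql3:3306", "mode": "R/O", "role": "HA", "status": "ONLINE" }
        }
    }
}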

Now let's see this on video with all the output of the commands:

In the next post, I will show you how to use the Shell to automate the creation of a MySQL InnoDB Cluster using Puppet.

Comments

  1. Hi Fred!

    AFAIK, a default installation of RHEL / OEL / CentOS 7 will include a “mariadb-libs-5.5” package, and the MySQL 5.7 packages will conflict with it.
    In your video, I’m missing the signs of this conflict as well as your actions to handle it.
    I know the conflict can be solved in a clean way, but I’d like to see your way of doing that.

    Regards,
    Jörg

    • Hi Jörg,

      Yum sees the conflict and deals with it:

      [root@mysql1 ~]# yum install mysql-community-server
      Loaded plugins: fastestmirror
      mysql-connectors-community | 2.5 kB 00:00:00
      mysql-tools-community | 2.5 kB 00:00:00
      mysql57-community | 2.5 kB 00:00:00
      (1/3): mysql-tools-community/x86_64/primary_db | 32 kB 00:00:00
      (2/3): mysql-connectors-community/x86_64/primary_db | 13 kB 00:00:00
      (3/3): mysql57-community/x86_64/primary_db | 96 kB 00:00:00
      Loading mirror speeds from cached hostfile
      * base: centos.mirror.nucleus.be
      * epel: nl.mirror.babylon.network
      * extras: mirrors.ircam.fr
      * updates: mirrors.ircam.fr
      Resolving Dependencies
      --> Running transaction check
      ---> Package mysql-community-server.x86_64 0:5.7.17-1.el7 will be installed
      --> Processing Dependency: mysql-community-common(x86-64) = 5.7.17-1.el7 for package: mysql-community-server-5.7.17-1.el7.x86_64
      --> Processing Dependency: mysql-community-client(x86-64) >= 5.7.9 for package: mysql-community-server-5.7.17-1.el7.x86_64
      --> Running transaction check
      ---> Package mysql-community-client.x86_64 0:5.7.17-1.el7 will be installed
      --> Processing Dependency: mysql-community-libs(x86-64) >= 5.7.9 for package: mysql-community-client-5.7.17-1.el7.x86_64
      ---> Package mysql-community-common.x86_64 0:5.7.17-1.el7 will be installed
      --> Running transaction check
      ---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
      --> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
      --> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
      ---> Package mysql-community-libs.x86_64 0:5.7.17-1.el7 will be obsoleting
      --> Running transaction check
      ---> Package mysql-community-libs-compat.x86_64 0:5.7.17-1.el7 will be obsoleting
      --> Finished Dependency Resolution

      Dependencies Resolved

      ============================================================================================================================================================================
      Package Arch Version Repository Size
      ============================================================================================================================================================================
      Installing:
      mysql-community-libs x86_64 5.7.17-1.el7 mysql57-community 2.1 M
      replacing mariadb-libs.x86_64 1:5.5.50-1.el7_2
      mysql-community-libs-compat x86_64 5.7.17-1.el7 mysql57-community 2.0 M
      replacing mariadb-libs.x86_64 1:5.5.50-1.el7_2
      mysql-community-server x86_64 5.7.17-1.el7 mysql57-community 162 M
      Installing for dependencies:
      mysql-community-client x86_64 5.7.17-1.el7 mysql57-community 24 M
      mysql-community-common x86_64 5.7.17-1.el7 mysql57-community 271 k

      Transaction Summary
      ============================================================================================================================================================================
      Install 3 Packages (+2 Dependent packages)

      Total download size: 190 M
      Is this ok [y/d/N]:

      However, you can still use the swap command:

      [root@mysql1 ~]# yum swap mariadb-libs mysql-community-libs

      If you have am issue with the swap command of yum, you can still use yum’s shell to create a transaction like this:

      [root@mysql1 ~]# yum -q -q shell
      > remove mariadb-libs
      > install mysql-community-libs
      > transaction run

      I hope this helps,

      Cheers,
      Fred.

      • Hi Fred,
        thanks for this command log – it helps.
        Obviously, I don’t use yum often enough: my memory still said that yum (like rpm) will only remove other packages when called to “upgrade”, not with “install”. So that has changed.
        Regards,
        Jörg

  2. Hi Lefred,

    Having some issues with the Cluster.

    I have a normal 3-node cluster and I've defined a cluster on my seed successfully.

    Issue: Errors when joining the 2 nodes to the cluster. See the checkInstanceState result below:

    mysql-js> cluster.checkInstanceState('root@165.233.206.43:3306')
    Please provide the password for 'root@165.233.206.43:3306':
    Analyzing the instance replication state...

    The instance '165.233.206.43:3306' is invalid for the cluster.
    The instance contains additional transactions in relation to the cluster.

    {
    "reason": "diverged",
    "state": "error"
    }

    I've set this up 20 times in my sandbox environment, but now I'm hitting this issue.

    • Hi Lerato,

      This means that the data on 2nd node is not the same as on the seed. In fact this 2nd node has (at least one) more transaction (check the executed GTID). How did you provision the data on the nodes ? If those are fresh instances, did you add the credentials on each node individually ? If this is the case and the only trx you did in the second instance, you can run RESET MASTER on it and join it again.

      Thank you for testing MySQL Group Replication & InnoDB Cluster.

      Cheers,

  3. Hi,

    Yes, the data was exactly the same as the 2 other servers are clones of the seed server. The only difference was that I created the credentials individually on the other 2 nodes, and thus the difference in GTIDs. I ran a RESET MASTER on the 2nd node and now the node is ready to be added to the cluster. Now I get a different issue:

    mysql-js> cluster.checkInstanceState('root@165.233.206.40:3306')
    Please provide the password for 'root@165.233.206.40:3306':
    Analyzing the instance replication state...

    The instance '165.233.206.40:3306' is valid for the cluster.
    The instance is fully recoverable.

    {
    "reason": "recoverable",
    "state": "ok"
    }

    Then adding the node to the cluster:

    mysql-js> cluster.addInstance('root@165.233.206.40:3306');
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.

    Please provide the password for 'root@165.233.206.40:3306':
    Adding instance to the cluster ...

    Cluster.addInstance: ERROR:
    Group Replication join failed.
    ERROR: Group Replication plugin failed to start. Server error log contains the following errors:
    2017-09-14T10:24:46.749189Z 0 [ERROR] Plugin group_replication reported: '[GCS] Error connecting to the local group communication engine instance.'
    2017-09-14T10:24:47.774681Z 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 13306'
    2017-09-14T10:25:46.748722Z 37 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
    2017-09-14T10:25:46.748968Z 37 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'

    ERROR: Error joining instance to cluster: '165.233.206.40:3306' - Query failed. 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.. Query: START group_replication (RuntimeError)

    • Hi Lerato,

      Could you check the error log on the 2nd node? It might also be trying to add the credentials it gets from node1, which you already added manually.

      If this is the case you should also perform the reset master on node 1.

      Next time, it’s preferable to add the credentials before the backup or using SET SQL_LOG_BIN=0 in the session you create those credentials (those dedicated for the cluster authentication).
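
      For example, something like this (the account name and password here are only placeholders for the cluster admin credentials):

      mysql> SET SQL_LOG_BIN=0;
      mysql> CREATE USER 'clusteradmin'@'%' IDENTIFIED BY 'MyStrongPassword!';
      mysql> GRANT ALL PRIVILEGES ON *.* TO 'clusteradmin'@'%' WITH GRANT OPTION;
      mysql> SET SQL_LOG_BIN=1;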

      Cheers,

      • Ohk Thanks, added the same credentials across all 3 nodes.

        The 2nd Node Error Log as it was being added to Cluster:

        2017-09-14T10:24:46.708780Z 37 [Note] Plugin group_replication reported: ‘Initialized group communication with configuration: group_replication_group_name: “9c99070e-9931-11e7-9744-005056830ccb”; group_replication_local_address: “165.233.206.40:13306”; group_replication_group_seeds: “165.233.206.43:13306”; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: “AUTOMATIC”‘
        2017-09-14T10:24:46.709380Z 39 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
        2017-09-14T10:24:46.748556Z 42 [Note] Slave SQL thread for channel ‘group_replication_applier’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./host-db03-relay-bin-group_replication_applier.000003’ position: 4
        2017-09-14T10:24:46.748568Z 37 [Note] Plugin group_replication reported: ‘Group Replication applier module successfully initialized!’
        2017-09-14T10:24:46.748688Z 0 [Note] Plugin group_replication reported: ‘state 4257 action xa_init’
        2017-09-14T10:24:46.748744Z 0 [Note] Plugin group_replication reported: ‘Successfully bound to 0.0.0.0:13306 (socket=82).’
        2017-09-14T10:24:46.748771Z 0 [Note] Plugin group_replication reported: ‘Successfully set listen backlog to 32 (socket=82)!’
        2017-09-14T10:24:46.748780Z 0 [Note] Plugin group_replication reported: ‘Successfully unblocked socket (socket=82)!’
        2017-09-14T10:24:46.748961Z 0 [Note] Plugin group_replication reported: ‘connecting to 165.233.206.40 13306’
        2017-09-14T10:24:46.748985Z 0 [Note] Plugin group_replication reported: ‘Ready to accept incoming connections on 0.0.0.0:13306 (socket=82)!’
        2017-09-14T10:24:46.749074Z 0 [Note] Plugin group_replication reported: ‘client connected to 165.233.206.40 13306 fd 92’
        2017-09-14T10:24:46.749114Z 0 [Warning] Plugin group_replication reported: ‘[GCS] Connection attempt from IP address 165.233.206.40 refused. Address is not in the IP whitelist.’
        2017-09-14T10:24:46.749189Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] Error connecting to the local group communication engine instance.’
        2017-09-14T10:24:46.749208Z 0 [Note] Plugin group_replication reported: ‘state 4257 action xa_exit’
        2017-09-14T10:24:46.749401Z 0 [Note] Plugin group_replication reported: ‘Exiting xcom thread’
        2017-09-14T10:24:47.774681Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] The member was unable to join the group. Local port: 13306’
        2017-09-14T10:25:46.748722Z 37 [ERROR] Plugin group_replication reported: ‘Timeout on wait for view after joining group’
        2017-09-14T10:25:46.748925Z 37 [Note] Plugin group_replication reported: ‘Requesting to leave the group despite of not being a member’
        2017-09-14T10:25:46.748968Z 37 [ERROR] Plugin group_replication reported: ‘[GCS] The member is leaving a group without being on one.’
        2017-09-14T10:25:46.749450Z 42 [Note] Error reading relay log event for channel ‘group_replication_applier’: slave SQL thread was killed
        2017-09-14T10:25:46.750215Z 39 [Note] Plugin group_replication reported: ‘The group replication applier thread was killed’

        Unsure about the whitelist warning as I created the cluster with all 3 nodes' IP addresses exclusively in the whitelist:

        mysql> show variables like 'group_replication_ip_whitelist';
        +--------------------------------+------------------------------------------------+
        | Variable_name                  | Value                                          |
        +--------------------------------+------------------------------------------------+
        | group_replication_ip_whitelist | 165.233.206.40, 165.233.206.41, 165.233.206.43 |
        +--------------------------------+------------------------------------------------+

  4. No IP tables.

    All 3 Nodes on same Vlan.

    Tested Manually(mysqlsh) From Seed –> Node 1 and that works 100%

    Kind Regards
    LT

      • Hi Lefred,

        Yes I changed the Seed.

        I hadnt tested that as yet, and im using version 5.7.17.

        I had a breakthrough yesterday evening. Since I was seeing this warning before all the errors, I decided to follow it down the rabbit hole:

        2017-09-14T10:24:46.749114Z 0 [Warning] Plugin group_replication reported: ‘[GCS] Connection attempt from IP address 165.233.206.40 refused. Address is not in the IP whitelist.’

        So on each of the Node’s I defined the following:

        SET GLOBAL group_replication_ip_whitelist = '165.233.206.40, 165.233.206.41, 165.233.206.43';

        This solved the Issue as I was able to successfully add the 2 Nodes to the Cluster.

        I think it may have something to do with my cluster definition:

        var cluster = dba.createCluster('My_Cluster', {ipWhitelist: "165.233.206.40, 165.233.206.41, 165.233.206.43"});

        Thank you for your help again, will let you know how it goes

        • Hi Lefred,

          Cluster is alive and running perfectly, I just have one potential Issue, I see this one line in the Error log:

          [ERROR] Plugin group_replication reported: ‘Group contains 2 members which is greater than group_replication_auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.’

          Should I be concerned with this error, as the cluster is online and successfully switching between nodes with no issues? This came up when I added the second node to the cluster.

          Tx!

  5. Dear lefred

    I was able to add the 3 instances after a lot of research on your blog and the MySQL dev website. Now,

    I can add the 3 instances but their state shows as missing. It means they try to recover and, after 10 attempts, end up in the "Missing" state. I've attached the error log for your reference.

    mysql-js> cluster.status()
    {
    “clusterName”: “mycluster”,
    “defaultReplicaSet”: {
    “name”: “default”,
    “primary”: “mysql1:3306”,
    “status”: “OK_NO_TOLERANCE”,
    “statusText”: “Cluster is NOT tolerant to any failures. 2 members are not active”,
    “topology”: {
    “162.219.27.252:3306”: {
    “address”: “162.219.27.252:3306”,
    “mode”: “R/O”,
    “readReplicas”: {},
    “role”: “HA”,
    “status”: “RECOVERING”
    },
    “162.219.27.253:3306”: {
    “address”: “162.219.27.253:3306”,
    “mode”: “R/O”,
    “readReplicas”: {},
    “role”: “HA”,
    “status”: “(MISSING)”
    },
    “mysql1:3306”: {
    “address”: “mysql1:3306”,
    “mode”: “R/W”,
    “readReplicas”: {},
    “role”: “HA”,
    “status”: “ONLINE”
    }
    }
    }

    2017-12-17T17:03:50.277582Z 21 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=’162-219-27-251.alnitech.com’, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=’162-219-27-251.alnitech.com’, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
    2017-12-17T17:03:50.282422Z 21 [Note] Plugin group_replication reported: ‘Establishing connection to a group replication recovery donor 3f16c9a9-e30a-11e7-a8c1-000c2910cdea at 162-219-27-251.alnitech.com port: 3306.’
    2017-12-17T17:03:50.282639Z 27 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the ‘START SLAVE Syntax’ in the MySQL Manual for more information.
    2017-12-17T17:03:50.300335Z 27 [ERROR] Slave I/O for channel ‘group_replication_recovery’: error connecting to master ‘mysql_innodb_cluster_rp447960878@162-219-27-251.alnitech.com:3306’ – retry-time: 60 retries: 1, Error_code: 2005
    2017-12-17T17:03:50.300353Z 27 [Note] Slave I/O thread for channel ‘group_replication_recovery’ killed while connecting to master
    2017-12-17T17:03:50.300358Z 27 [Note] Slave I/O thread exiting for channel ‘group_replication_recovery’, read up to log ‘FIRST’, position 4
    2017-12-17T17:03:50.300530Z 21 [ERROR] Plugin group_replication reported: ‘There was an error when connecting to the donor server. Please check that group_replication_recovery channel credentials and all MEMBER_HOST column values of performance_schema.replication_group_members table are correct and DNS resolvable.’
    2017-12-17T17:03:50.300547Z 21 [ERROR] Plugin group_replication reported: ‘For details please check performance_schema.replication_connection_status table and error log messages of Slave I/O for channel group_replication_recovery.’
    2017-12-17T17:03:50.300717Z 21 [Note] Plugin group_replication reported: ‘Retrying group recovery connection with another donor. Attempt 3/10’

    • Hi Saravana,

      Could you also paste the error log of the server called 162-219-27-251.alnitech.com ?

      It was acting as the donor for the server whose error log you pasted.
      Additionally, what’s the MySQL version you are testing ?
      Cheers.

      • Thanks for your reply.
        Mysql version is mysqld Ver 5.7.20 for Linux on x86_64 (MySQL Community Server (GPL))

        2017-12-18T06:02:02.984084Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 2e0493c8’
        2017-12-18T06:02:07.085957Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: mysql2:3306’
        2017-12-18T06:02:07.086067Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to mysql2:3306, 162-219-27-251.alnitech.com:3306 on view 15135188824968611:18.’
        2017-12-18T06:05:50.069841Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 2e0493c8’
        2017-12-18T06:05:52.154997Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: 162-219-27-253.alnitech.com:3306’
        2017-12-18T06:05:52.155133Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to mysql2:3306, 162-219-27-251.alnitech.com:3306, 162-219-27-253.alnitech.com:3306 on view 15135188824968611:19.’

        This is the log messages.

        • Once all attempts are done. This is the final log

          2017-12-18T06:02:02.984084Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 2e0493c8’
          2017-12-18T06:02:07.085957Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: mysql2:3306’
          2017-12-18T06:02:07.086067Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to mysql2:3306, 162-219-27-251.alnitech.com:3306 on view 15135188824968611:18.’
          2017-12-18T06:05:50.069841Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 2e0493c8’
          2017-12-18T06:05:52.154997Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: 162-219-27-253.alnitech.com:3306’
          2017-12-18T06:05:52.155133Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to mysql2:3306, 162-219-27-251.alnitech.com:3306, 162-219-27-253.alnitech.com:3306 on view 15135188824968611:19.’
          2017-12-18T06:11:07.412593Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 2e0493c8’
          2017-12-18T06:11:08.064886Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: mysql2:3306’
          2017-12-18T06:11:08.065000Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to 162-219-27-251.alnitech.com:3306, 162-219-27-253.alnitech.com:3306 on view 15135188824968611:20.’
          2017-12-18T06:14:52.444985Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 2e0493c8’
          2017-12-18T06:14:52.974000Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: 162-219-27-253.alnitech.com:3306’
          2017-12-18T06:14:52.974189Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to 162-219-27-251.alnitech.com:3306 on view 15135188824968611:21.’

  6. Hi lefred/saravan,

    I am also facing a similar issue to Saravana's. Below is my cluster status, the entry of my.cnf, and the /etc/hosts file, plus the error logs from my 1st and 2nd server. Kindly let me know where I am wrong.

    Cluster status:
    {
    “clusterName”: “mycluster”,
    “defaultReplicaSet”: {
    “name”: “default”,
    “primary”: “mysql01:3306”,
    “status”: “OK_NO_TOLERANCE”,
    “statusText”: “Cluster is NOT tolerant to any failures. 2 members are not active”,
    “topology”: {
    “mysql01:3306”: {
    “address”: “mysql01:3306”,
    “mode”: “R/W”,
    “readReplicas”: {},
    “role”: “HA”,
    “status”: “ONLINE”
    },
    “mysql02:3306”: {
    “address”: “mysql02:3306”,
    “mode”: “R/O”,
    “readReplicas”: {},
    “role”: “HA”,
    “status”: “(MISSING)”
    },
    “mysql03:3306”: {
    “address”: “mysql03:3306”,
    “mode”: “R/O”,
    “readReplicas”: {},
    “role”: “HA”,
    “status”: “(MISSING)”
    }
    }
    }
    }
    entry of /etc/hosts

    ip2 mysql02
    ip3 mysql03

    entry of cnf
    report_host=mysql02,mysql03 // i have added this entry in cnf along the other entries

    log of mysql01

    2017-12-30T18:54:37.366444Z 57 [Note] Access denied for user ‘cluster’@’mysql01’ (using password: YES)
    2017-12-30T19:15:14.309242Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 5f14d225’
    2017-12-30T19:15:18.251396Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: mysql02,mysql03:3306’
    2017-12-30T19:15:18.251613Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to mysql02,mysql03:3306,
    mysql02,mysql03:3306 on view 15146479517855109:6.’
    2017-12-30T19:16:53.197629Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 5f14d225’
    2017-12-30T19:16:56.197680Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: mysql02,mysql02:3306’
    2017-12-30T19:16:56.197877Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to mysql02,mysql03:3306,
    mysql02,mysql03:3306, mysql02,mysql02:3306 on view 15146479517855109:7.’
    2017-12-30T19:17:48.981659Z 59 [Note] Aborted connection 59 to db: ‘unconnected’ user: ‘cluster’ host:
    ‘mysql01’ (Got an error reading communication packets)
    2017-12-30T19:24:18.429576Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 5f14d225’
    2017-12-30T19:24:19.062117Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: mysql02,mysql03:3306’
    2017-12-30T19:24:19.062348Z 0 [Note] Plugin group_replication reported:
    ‘Group membership changed to mysql02,mysql03:3306, mysql02,mysql02:3306 on view 15146479517855109:8.’
    2017-12-30T19:25:56.650022Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 5f14d225’
    2017-12-30T19:25:57.067077Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: mysql02,mysql02:3306’
    2017-12-30T19:25:57.067276Z 0 [Note] Plugin group_replication reported:
    ‘Group membership changed to mysql02,mysql03:3306 on view 15146479517855109:9.’

    log of mysql 02

    2017-12-30T19:23:18.390087Z 52 [ERROR] Plugin group_replication reported: ‘There was an error when connecting to the donor server. Please check that gro$
    2017-12-30T19:23:18.390110Z 52 [ERROR] Plugin group_replication reported: ‘For details please check performance_schema.replication_connection_status tab$
    2017-12-30T19:23:18.390289Z 52 [Note] Plugin group_replication reported: ‘Retrying group recovery connection with another donor. Attempt 10/10’
    2017-12-30T19:24:18.390769Z 52 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=’$
    2017-12-30T19:24:18.398428Z 52 [Note] Plugin group_replication reported: ‘Establishing connection to a group replication recovery donor a23e118d-ed73-11$
    2017-12-30T19:24:18.401836Z 68 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore no$
    2017-12-30T19:24:18.404662Z 68 [ERROR] Slave I/O for channel ‘group_replication_recovery’: error connecting to master ‘mysql_innodb_cluster_rp454634638@$
    2017-12-30T19:24:18.404683Z 68 [Note] Slave I/O thread for channel ‘group_replication_recovery’ killed while connecting to master
    2017-12-30T19:24:18.404690Z 68 [Note] Slave I/O thread exiting for channel ‘group_replication_recovery’, read up to log ‘FIRST’, position 4
    2017-12-30T19:24:18.404818Z 52 [ERROR] Plugin group_replication reported: ‘There was an error when connecting to the donor server. Please check that gro$
    2017-12-30T19:24:18.404838Z 52 [ERROR] Plugin group_replication reported: ‘For details please check performance_schema.replication_connection_status tab$
    2017-12-30T19:24:18.405025Z 52 [ERROR] Plugin group_replication reported: ‘Maximum number of retries when trying to connect to a donor reached. Aborting$
    2017-12-30T19:24:18.405038Z 52 [Note] Plugin group_replication reported: ‘Terminating existing group replication donor connection and purging the corres$
    2017-12-30T19:24:18.405090Z 55 [Note] Error reading relay log event for channel ‘group_replication_recovery’: slave SQL thread was killed
    2017-12-30T19:24:18.416880Z 52 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=’$
    2017-12-30T19:24:18.428257Z 52 [ERROR] Plugin group_replication reported: ‘Fatal error during the Recovery process of Group Replication. The server will$
    2017-12-30T19:24:18.429619Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 5f14d225’
    2017-12-30T19:24:21.870550Z 0 [Note] Plugin group_replication reported: ‘state 4410 action xa_terminate’
    2017-12-30T19:24:21.871171Z 0 [Note] Plugin group_replication reported: ‘new state x_start’
    2017-12-30T19:24:21.871188Z 0 [Note] Plugin group_replication reported: ‘state 4337 action xa_exit’
    2017-12-30T19:24:21.871270Z 0 [Note] Plugin group_replication reported: ‘Exiting xcom thread’
    2017-12-30T19:24:21.871281Z 0 [Note] Plugin group_replication reported: ‘new state x_start’
    2017-12-30T19:24:21.871188Z 0 [Note] Plugin group_replication reported: ‘state 4337 action xa_exit’
    2017-12-30T19:24:21.871270Z 0 [Note] Plugin group_replication reported: ‘Exiting xcom thread’
    2017-12-30T19:24:21.871281Z 0 [Note] Plugin group_replication reported: ‘new state x_start’
    2017-12-30T19:24:21.911799Z 0 [Note] Plugin group_replication reported: ‘Group membership changed: This member has left the group.’

    • Hi Karthick,

      Did you whitelist the server IPs you have used in the cluster? You probably need to run the below command on all the servers:

      set global group_replication_ip_whitelist="192.168.1.1,192.1681.4";

      • Hi Saravan,

        Thank you for your reply. I have added the whitelist as mentioned and got the issue fixed.
        One thing I just want to reconfirm:

        I understand that it is a prerequisite in InnoDB Cluster that every table must have a primary key (or an equivalent key).

        Is there any way/possibility that we can make our InnoDB cluster have tables without a primary key?

        • Hello,
          No you can't! It's mandatory and needed to perform the certification.
          Also, not having a PK in InnoDB can be very bad for performance, especially if you have many tables without a PK: they will all share the same hidden counter and use a mutex on it… something you don't really want to experience.
          Cheers.

          • Hi Lefred,

            Compliments of the New year.

            Since our talks, I have gone live with my 3 node InnoDB cluster. It worked for a while and then all of a sudden stopped with the following behaviour:

            1. Apps report not being able to write to DB (MySQL Server went into Read Only Mode)

            mysql> show variables like 'super%';
            +-----------------+-------+
            | Variable_name   | Value |
            +-----------------+-------+
            | super_read_only | ON    |
            +-----------------+-------+
            1 row in set (0.00 sec)

            mysql> show variables like 'read_only';
            +---------------+-------+
            | Variable_name | Value |
            +---------------+-------+
            | read_only     | ON    |
            +---------------+-------+
            1 row in set (0.01 sec)

            2. The cluster reports the following error:

            mysql-js> var cluster = dba.getCluster()
            WARNING: The session is on a Error instance.
            Write operations in the InnoDB cluster will not be allowed.
            The information retrieved with describe() and status() may be outdated.

            mysql-js> cluster.status()
            {
            “clusterName”: “Zabbix_Cluster”,
            “defaultReplicaSet”: {
            “name”: “default”,
            “status”: “OK_NO_TOLERANCE”,
            “statusText”: “Cluster is NOT tolerant to any failures. 3 members are not active”,
            “topology”: {
            “165.233.206.40:3306”: {
            “address”: “165.233.206.40:3306”,
            “mode”: “R/O”,
            “readReplicas”: {},
            “role”: “HA”,
            “status”: “(MISSING)”
            },
            “165.233.206.41:3306”: {
            “address”: “165.233.206.41:3306”,
            “mode”: “R/O”,
            “readReplicas”: {},
            “role”: “HA”,
            “status”: “(MISSING)”
            },
            “165.233.206.43:3306”: {
            “address”: “165.233.206.43:3306”,
            “mode”: “R/O”,
            “readReplicas”: {},
            “role”: “HA”,
            “status”: “ERROR”
            }
            }
            },
            “warning”: “The instance status may be inaccurate as it was generated from an instance in Error state”

            3. The error log shows this:
            2018-01-19T08:20:34.138842Z 0 [ERROR] Plugin group_replication reported: ‘Member was expelled from the group due to network failures, changing member status to ERROR.’
            2018-01-19T08:20:34.146573Z 0 [Note] Plugin group_replication reported: ‘getstart group_id 20880e0b’
            2018-01-19T08:20:34.910687Z 0 [Warning] Plugin group_replication reported: ‘Due to a plugin error, some transactions can’t be certified and will now rollback.’
            2018-01-19T08:20:34.914135Z 4222153 [ERROR] Plugin group_replication reported: ‘Transaction cannot be executed while Group Replication is on ERROR state. Check for errors and restart the plugin’
            2018-01-19T08:20:34.914166Z 4222153 [ERROR] Run function ‘before_commit’ in plugin ‘group_replication’ failed
            2018-01-19T08:20:34.914303Z 4222147 [ERROR] Plugin group_replication reported: ‘Transaction cannot be executed while Group Replication is on ERROR state. Check for errors and restart the plugin’

  7. I am deploying InnoDB Cluster for a production environment.

    While creating the cluster I didn't get any issue, but while adding an instance I am facing some issues.

    mysql-js> cluster.checkInstanceState('root@10.10.14.50:3306')
    Please provide the password for 'root@10.10.14.50:3306':
    Analyzing the instance replication state...

    The instance '10.10.14.50:3306' is valid for the cluster.
    The instance is fully recoverable.

    {
    "reason": "recoverable",
    "state": "ok"
    }

    The error I am getting is:

    mysql-js>cluster.addInstance('root@10.10.14.51:3306')
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.

    Please provide the password for 'root@10.10.14.51:3306':
    Adding instance to the cluster ...

    Cluster.addInstance: WARNING: The given '10.10.14.51:3306' and the peer 'ubuntu:3306' have duplicated server_id 1
    ERROR: Error joining instance to cluster: The operation could not continue due to the following requirements not being met:
    The server_id 1 is already used by peer 'ubuntu:3306'
    The server_id must be different from the ones in use by the members of the GR group. (RuntimeError)

    Am I missing any configuration steps? Please help me solve this issue.

    Regards,
    Ankita

    • Hi Ankita,
      Which version of MySQL and Shell are you using ?
      For the mysql-shell you should use (even with MySQL 5.7.2x) 8.0.11.
      The problem you are having is that in my.cnf you should have a different server_id for each member (server_id=1, server_id=2, …)
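
      For example, something like this in a configuration file read by mysqld on each node (the value is only an illustration; pick a distinct id per node):

      [mysqld]
      server_id=2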
      Regards,

      • Hi,
        Mysql version : Ver 8.0.3-rc for Linux on x86_64 (MySQL Community Server (GPL))
        Mysql shell version: Ver 1.0.11 for Linux on x86_64 – for MySQL 5.7.20

        In the my.cnf file I have not done any changes:

        # Copyright (c) 2015, 2016, Oracle and/or its affiliates. All rights reserved.
        #
        # This program is free software; you can redistribute it and/or modify
        # it under the terms of the GNU General Public License as published by
        # the Free Software Foundation; version 2 of the License.
        #
        # This program is distributed in the hope that it will be useful,
        # but WITHOUT ANY WARRANTY; without even the implied warranty of
        # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
        # GNU General Public License for more details.
        #
        # You should have received a copy of the GNU General Public License
        # along with this program; if not, write to the Free Software
        # Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

        #
        # The MySQL Server configuration file.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with ‘.cnf’, otherwise they’ll be ignored.
        #

        !includedir /etc/mysql/conf.d/
        !includedir /etc/mysql/mysql.conf.d/

        Where do I have to make these changes? I followed the above steps and it is not mentioned there. I am new to this, can you please help me solve this?

        Thanks and Regards,
        Ankita

        • Hi,
          You should not use the Release Candidate but the Latest GA version of MySQL 8.0.11
          Same for the Shell : 8.0.11

          If you do so, it's very easy: you don't even need to modify the config file yourself, the new Shell will do it for you. But in case you want to know, you can do it in one of the files in those directories where a [mysqld] section is referenced (I don't know where Ubuntu puts it).

          Check this post: http://lefred.be/content/mysql-shell-for-mysql-8-0-your-best-friends-in-the-cloud/

          Regards,

          • Hi ,
            I have installed 5.7 since i am using ubuntu 14.04.
            mysql : Ver 14.14 Distrib 5.7.22, for Linux (x86_64) using EditLine wrapper
            mysqlsh : Ver 1.0.11 for Linux on x86_64 – for MySQL 5.7.20 (MySQL Community Server (GPL))

            I have created the cluster without any issue. But when I check the cluster status it returns the below message:

            mysql-js> cluster.status()
            {
            “clusterName”: “prodCluster”,
            “defaultReplicaSet”: {
            “name”: “default”,
            “primary”: “10.10.14.50:3306”,
            “status”: “OK_NO_TOLERANCE”,
            “statusText”: “Cluster is NOT tolerant to any failures. 2 members are not active”,
            “topology”: {
            “10.10.14.50:3306”: {
            “address”: “10.10.14.50:3306”,
            “mode”: “R/W”,
            “readReplicas”: {},
            “role”: “HA”,
            “status”: “ONLINE”
            },
            “10.10.14.51:3306”: {
            “address”: “10.10.14.51:3306”,
            “mode”: “R/O”,
            “readReplicas”: {},
            “role”: “HA”,
            “status”: “RECOVERING”
            },
            “10.10.14.52:3306”: {
            “address”: “10.10.14.52:3306”,
            “mode”: “R/O”,
            “readReplicas”: {},
            “role”: “HA”,
            “status”: “RECOVERING”
            }
            }
            }
            }

            What can the issue be?

            Thanks and Regards,
            Ankita

  8. Hi Ankita,

    Even with mysql 5.7 it’s recommended to use mysql shell 8.0.11 😉

    Check in the error log of the nodes in “recovering” why Group Replication is not started; there might be multiple reasons.

    Regards.

  9. Hi,
    I have installed MySQL 5.7.22. When I execute var cluster=dba.createCluster('mycluster'),
    I get an error:
    Dba.createCluster: ERROR: Error starting cluster: '10.16.44.138:3306' - Query failed. MySQL Error (3092): ClassicSession.query: The server is not configured properly to be an active member of the group. Please see more details on error log..

    The detailed log:
    2018-05-22T03:18:58.981520Z 67 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
    2018-05-22T03:18:59.044577Z 67 [Note] Plugin group_replication reported: ‘Group communication SSL configuration: group_replication_ssl_mode: “DISABLED”‘
    2018-05-22T03:18:59.044712Z 67 [Note] Plugin group_replication reported: ‘[GCS] Added automatically IP ranges 10.16.44.138/24,127.0.0.1/8 to the whitelist’
    2018-05-22T03:18:59.044828Z 67 [Warning] Plugin group_replication reported: ‘[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.’
    2018-05-22T03:18:59.044869Z 67 [Note] Plugin group_replication reported: ‘[GCS] SSL was not enabled’
    2018-05-22T03:18:59.044888Z 67 [Note] Plugin group_replication reported: ‘Initialized group communication with configuration: group_replication_group_name: “de28477e-5d6e-11e8-ab11-005056ac54ec”; group_replication_local_address: “10.16.44.138:33061”; group_replication_group_seeds: “”; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: “AUTOMATIC”‘
    2018-05-22T03:18:59.044913Z 67 [Note] Plugin group_replication reported: ‘[GCS] Configured number of attempts to join: 0’
    2018-05-22T03:18:59.044919Z 67 [Note] Plugin group_replication reported: ‘[GCS] Configured time between attempts to join: 5 seconds’
    2018-05-22T03:18:59.044936Z 67 [Note] Plugin group_replication reported: ‘Member configuration: member_id: 3378132610; member_uuid: “dd5fc29c-5d68-11e8-b6c4-005056ac54ec”; single-primary mode: “true”; group_replication_auto_increment_increment: 7; ‘
    2018-05-22T03:18:59.045327Z 69 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
    2018-05-22T03:18:59.127266Z 72 [Note] Slave SQL thread for channel ‘group_replication_applier’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./mysql1-relay-bin-group_replication_applier.000001’ position: 4
    2018-05-22T03:18:59.127303Z 67 [Note] Plugin group_replication reported: ‘Group Replication applier module successfully initialized!’
    2018-05-22T03:18:59.129793Z 0 [Note] Plugin group_replication reported: ‘XCom protocol version: 3’
    2018-05-22T03:18:59.129818Z 0 [Note] Plugin group_replication reported: ‘XCom initialized and ready to accept incoming connections on port 33061’
    2018-05-22T03:19:09.129685Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] Error connecting to the local group communication engine instance.’
    2018-05-22T03:19:09.154651Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] The member was unable to join the group. Local port: 33061’
    2018-05-22T03:19:59.127485Z 67 [ERROR] Plugin group_replication reported: ‘Timeout on wait for view after joining group’
    2018-05-22T03:19:59.127588Z 67 [Note] Plugin group_replication reported: ‘Requesting to leave the group despite of not being a member’
    2018-05-22T03:19:59.127641Z 67 [ERROR] Plugin group_replication reported: ‘[GCS] The member is leaving a group without being on one.’
    2018-05-22T03:19:59.128279Z 72 [Note] Error reading relay log event for channel ‘group_replication_applier’: slave SQL thread was killed
    2018-05-22T03:19:59.241607Z 69 [Note] Plugin group_replication reported: ‘The group replication applier thread was killed’
    what’s wrong?

    • You need to configure the member to be compatible with Group Replication. To do so, please use the new MySQL Shell (8.0.11), even with MySQL 5.7.22, and try dba.configureLocalInstance()

      Regards,

      • Hello, I am getting the same error. When I run dba.configureLocalInstance(); it works, but my dba.createCluster('mycluster') fails.

        MySQL USECTSTMGTDEV01:3306 ssl JS > var cluster=dba.createCluster(‘mycluster’)
        A new InnoDB cluster will be created on instance ‘dbauser@USECTSTMGTDEV01:3306’.

        Validating instance at USECTSTMGTDEV01:3306…

        This instance reports its own address as USECTSTMGTDEV01

        Instance configuration is suitable.
        Creating InnoDB cluster ‘mycluster’ on ‘dbauser@USECTSTMGTDEV01:3306’…
        Dba.createCluster: ERROR: Error starting cluster: ‘USECTSTMGTDEV01:3306’ – Query failed. MySQL Error (3092): ClassicSession.query: The server is not configured properly to be an active member of the group. Please see more details on error log.. Query: START group_replication: MySQL Error (3092): ClassicSession.query: The server is not configured properly to be an active member of the group. Please see more details on error log. (RuntimeError)

        MySQL USECTSTMGTDEV01:3306 ssl JS > ^C
        MySQL USECTSTMGTDEV01:3306 ssl JS > ^C
        MySQL USECTSTMGTDEV01:3306 ssl JS > dba.ConfigureLocalInstance()
        Invalid object member ConfigureLocalInstance (AttributeError)

        MySQL USECTSTMGTDEV01:3306 ssl JS > \connect root@localhost:3306
        Creating a session to ‘root@localhost:3306’
        Fetching schema names for autocompletion… Press ^C to stop.
        Closing old connection…
        Your MySQL connection id is 21
        Server version: 5.7.25-log MySQL Community Server (GPL)
        No default schema selected; type \use to set one.

        MySQL localhost:3306 ssl JS > dba.ConfigureLocalInstance()
        Invalid object member ConfigureLocalInstance (AttributeError)

        MySQL localhost:3306 ssl JS > dba.configureLocalInstance();
        Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster…

        This instance reports its own address as USECTSTMGTDEV01
        Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

        The instance ‘localhost:3306’ is valid for InnoDB cluster usage.
        The instance ‘localhost:3306’ is already ready for InnoDB cluster usage.

        MySQL localhost:3306 ssl JS > ^C

        My log states:
        2019-01-30T22:43:39.040871Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use –explicit_defaults_for_timestamp server option (see documentation for more details).
        2019-01-30T22:43:39.042298Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.25-log) starting as process 22001 …
        2019-01-30T22:43:39.046646Z 0 [Note] InnoDB: PUNCH HOLE support available
        2019-01-30T22:43:39.046691Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
        2019-01-30T22:43:39.046698Z 0 [Note] InnoDB: Uses event mutexes
        2019-01-30T22:43:39.046705Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
        2019-01-30T22:43:39.046710Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
        2019-01-30T22:43:39.046718Z 0 [Note] InnoDB: Using Linux native AIO
        2019-01-30T22:43:39.047028Z 0 [Note] InnoDB: Number of pools: 1
        2019-01-30T22:43:39.047158Z 0 [Note] InnoDB: Using CPU crc32 instructions
        2019-01-30T22:43:39.049240Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
        2019-01-30T22:43:39.057692Z 0 [Note] InnoDB: Completed initialization of buffer pool
        2019-01-30T22:43:39.059938Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
        2019-01-30T22:43:39.071828Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
        2019-01-30T22:43:39.082996Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
        2019-01-30T22:43:39.083098Z 0 [Note] InnoDB: Setting file ‘./ibtmp1’ size to 12 MB. Physically writing the file full; Please wait …
        2019-01-30T22:43:39.209928Z 0 [Note] InnoDB: File ‘./ibtmp1’ size is now 12 MB.
        2019-01-30T22:43:39.211964Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
        2019-01-30T22:43:39.212009Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
        2019-01-30T22:43:39.213344Z 0 [Note] InnoDB: Waiting for purge to start
        2019-01-30T22:43:39.263588Z 0 [Note] InnoDB: 5.7.25 started; log sequence number 2712815
        2019-01-30T22:43:39.264100Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
        2019-01-30T22:43:39.264576Z 0 [Note] Plugin ‘FEDERATED’ is disabled.
        2019-01-30T22:43:39.267512Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190130 14:43:39
        2019-01-30T22:43:39.278869Z 0 [ERROR] Plugin group_replication reported: ‘The group name option is mandatory’
        2019-01-30T22:43:39.278907Z 0 [ERROR] Plugin group_replication reported: ‘Unable to start Group Replication on boot’
        2019-01-30T22:43:39.292606Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
        2019-01-30T22:43:39.293292Z 0 [Warning] CA certificate ca.pem is self signed.
        2019-01-30T22:43:39.297530Z 0 [Note] Server hostname (bind-address): ‘*’; port: 3306
        2019-01-30T22:43:39.297676Z 0 [Note] IPv6 is available.
        2019-01-30T22:43:39.297713Z 0 [Note] – ‘::’ resolves to ‘::’;
        2019-01-30T22:43:39.297770Z 0 [Note] Server socket created on IP: ‘::’.
        2019-01-30T22:43:39.338361Z 0 [Note] Failed to start slave threads for channel ”
        2019-01-30T22:43:39.353407Z 0 [Note] Event Scheduler: Loaded 0 events
        2019-01-30T22:43:39.353899Z 0 [Note] /usr/sbin/mysqld: ready for connections.
        Version: ‘5.7.25-log’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 MySQL Community Server (GPL)
        2019-01-30T22:44:30.489450Z 2 [Note] Got packets out of order
        2019-01-30T22:46:25.708110Z 6 [Note] Got packets out of order
        2019-01-30T22:48:33.413888Z 9 [Note] Got packets out of order
        2019-01-30T22:49:17.417808Z 14 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
        2019-01-30T22:49:17.438510Z 14 [Note] Plugin group_replication reported: ‘Group communication SSL configuration: group_replication_ssl_mode: “REQUIRED”; server_key_file: “server-key.pem”; server_cert_file: “server-cert.pem”; client_key_file: “server-key.pem”; client_cert_file: “server-cert.pem”; ca_file: “ca.pem”; ca_path: “”; cipher: “”; tls_version: “TLSv1,TLSv1.1”; crl_file: “”; crl_path: “”‘
        2019-01-30T22:49:17.438832Z 14 [Note] Plugin group_replication reported: ‘[GCS] Added automatically IP ranges 10.110.28.21/24,127.0.0.1/8 to the whitelist’
        2019-01-30T22:49:17.439194Z 14 [Note] Plugin group_replication reported: ‘[GCS] Translated ‘USECTSTMGTDEV01′ to 10.110.28.21’
        2019-01-30T22:49:17.439440Z 14 [Warning] Plugin group_replication reported: ‘[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.’
        2019-01-30T22:49:17.440317Z 14 [Note] Plugin group_replication reported: ‘Initialized group communication with configuration: group_replication_group_name: “461b2458-24e1-11e9-a72d-005056975259”; group_replication_local_address: “USECTSTMGTDEV01:33061”; group_replication_group_seeds: “”; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: “AUTOMATIC”‘
        2019-01-30T22:49:17.440440Z 14 [Note] Plugin group_replication reported: ‘[GCS] Configured number of attempts to join: 0’
        2019-01-30T22:49:17.440463Z 14 [Note] Plugin group_replication reported: ‘[GCS] Configured time between attempts to join: 5 seconds’
        2019-01-30T22:49:17.440523Z 14 [Note] Plugin group_replication reported: ‘Member configuration: member_id: 1; member_uuid: “58836627-2351-11e9-bfae-005056975259”; single-primary mode: “true”; group_replication_auto_increment_increment: 7; ‘
        2019-01-30T22:49:17.441094Z 16 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
        2019-01-30T22:49:17.461740Z 14 [Note] Plugin group_replication reported: ‘Group Replication applier module successfully initialized!’
        2019-01-30T22:49:17.461805Z 19 [Note] Slave SQL thread for channel ‘group_replication_applier’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./USECTSTMGTDEV01-relay-bin-group_replication_applier.000006’ position: 4
        2019-01-30T22:49:17.503360Z 0 [Note] Plugin group_replication reported: ‘XCom protocol version: 3’
        2019-01-30T22:49:17.503417Z 0 [Note] Plugin group_replication reported: ‘XCom initialized and ready to accept incoming connections on port 33061’
        2019-01-30T22:49:27.508979Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] Error connecting to the local group communication engine instance.’
        2019-01-30T22:49:27.533814Z 0 [ERROR] Plugin group_replication reported: ‘[GCS] The member was unable to join the group. Local port: 33061’
        2019-01-30T22:50:17.461985Z 14 [ERROR] Plugin group_replication reported: ‘Timeout on wait for view after joining group’
        2019-01-30T22:50:17.462085Z 14 [Note] Plugin group_replication reported: ‘Requesting to leave the group despite of not being a member’
        2019-01-30T22:50:17.462144Z 14 [ERROR] Plugin group_replication reported: ‘[GCS] The member is leaving a group without being on one.’
        2019-01-30T22:50:17.462976Z 19 [Note] Error reading relay log event for channel ‘group_replication_applier’: slave SQL thread was killed
        2019-01-30T22:50:17.463021Z 19 [Note] Slave SQL thread for channel ‘group_replication_applier’ exiting, replication stopped in log ‘FIRST’ at position 0
        2019-01-30T22:50:17.466023Z 16 [Note] Plugin group_replication reported: ‘The group replication applier thread was killed’
        2019-01-30T22:52:30.133428Z 20 [Note] Got packets out of order

          • mysql Ver 14.14 Distrib 5.7.25,

            I was able to add the instance finally. However it comes up as missing in cluster.status() ….
            —————————
            cluster.status()
            {
                "clusterName": "mycluster",
                "defaultReplicaSet": {
                    "name": "default",
                    "primary": "USECTSTMGTDEV01:3306",
                    "ssl": "DISABLED",
                    "status": "OK_NO_TOLERANCE",
                    "statusText": "Cluster is NOT tolerant to any failures. 2 members are not active",
                    "topology": {
                        "USECTSTMGTDEV01:3306": {
                            "address": "USECTSTMGTDEV01:3306",
                            "mode": "R/W",
                            "readReplicas": {},
                            "role": "HA",
                            "status": "ONLINE"
                        },
                        "USECTSTMGTDEV02:3306": {
                            "address": "USECTSTMGTDEV02:3306",
                            "mode": "n/a",
                            "readReplicas": {},
                            "role": "HA",
                            "status": "(MISSING)"
                        },
                        "USECTSTMGTDEV03:3306": {
                            "address": "USECTSTMGTDEV03:3306",
                            "mode": "n/a",
                            "readReplicas": {},
                            "role": "HA",
                            "status": "(MISSING)"
                        }
                    },
                    "topologyMode": "Single-Primary"
                },
                "groupInformationSourceMember": "USECTSTMGTDEV01:3306"
            }

            I created the cluster, then had issues adding the first instance; once I got both instance 02 and instance 03 up, they come up as missing.
            I am seeing replication issues, so I guess I will start there. System 02 had more transactions applied than the master, so I am going to do a STOP SLAVE and RESET SLAVE there, then a RESET MASTER, and start replication again to see if I can get it to work.
            This should solve the mismatched replication numbers, but what do I do about the communication problems?
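
            For the GTID mismatch specifically ("This member has more executed transactions than those present in the group", visible in the slave log further down), the part of that reset that matters is clearing the extra transactions on the instance that is ahead. A minimal sketch, assuming its local changes are disposable and using the dbauser account and host names from the logs purely as an illustration:

            -- on the instance that is ahead of the group (system 02 here)
            mysql> STOP GROUP_REPLICATION;
            mysql> RESET MASTER;   -- discards local binary logs and the GTID_EXECUTED set

            mysql-js> cluster.checkInstanceState('dbauser@USECTSTMGTDEV02:3306')
            mysql-js> cluster.addInstance('dbauser@USECTSTMGTDEV02:3306')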
            —————
            Odd replication errors. This is what the master log says.
            —————–Master Log ———————-
            2019-01-31T20:51:59.288565Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use –explicit_defaults_for_timestamp server option (see documentation for more details).
            2019-01-31T20:51:59.292041Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.25-log) starting as process 7832 …
            2019-01-31T20:51:59.312019Z 0 [Note] InnoDB: PUNCH HOLE support available
            2019-01-31T20:51:59.312059Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
            2019-01-31T20:51:59.312066Z 0 [Note] InnoDB: Uses event mutexes
            2019-01-31T20:51:59.312073Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
            2019-01-31T20:51:59.312081Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
            2019-01-31T20:51:59.312087Z 0 [Note] InnoDB: Using Linux native AIO
            2019-01-31T20:51:59.312398Z 0 [Note] InnoDB: Number of pools: 1
            2019-01-31T20:51:59.317170Z 0 [Note] InnoDB: Using CPU crc32 instructions
            2019-01-31T20:51:59.319316Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
            2019-01-31T20:51:59.330205Z 0 [Note] InnoDB: Completed initialization of buffer pool
            2019-01-31T20:51:59.333253Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
            2019-01-31T20:51:59.388099Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
            2019-01-31T20:51:59.782640Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
            2019-01-31T20:51:59.782815Z 0 [Note] InnoDB: Setting file ‘./ibtmp1’ size to 12 MB. Physically writing the file full; Please wait …
            2019-01-31T20:51:59.927445Z 0 [Note] InnoDB: File ‘./ibtmp1’ size is now 12 MB.
            2019-01-31T20:51:59.929499Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
            2019-01-31T20:51:59.929569Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
            2019-01-31T20:51:59.931418Z 0 [Note] InnoDB: Waiting for purge to start
            2019-01-31T20:51:59.982910Z 0 [Note] InnoDB: 5.7.25 started; log sequence number 2759129
            2019-01-31T20:51:59.984453Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
            2019-01-31T20:51:59.985936Z 0 [Note] Plugin ‘FEDERATED’ is disabled.
            2019-01-31T20:52:00.292166Z 0 [ERROR] Plugin group_replication reported: ‘The group name option is mandatory’
            2019-01-31T20:52:00.292225Z 0 [ERROR] Plugin group_replication reported: ‘Unable to start Group Replication on boot’
            2019-01-31T20:52:00.334359Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
            2019-01-31T20:52:00.338784Z 0 [Warning] CA certificate ca.pem is self signed.
            2019-01-31T20:52:00.342682Z 0 [Note] Server hostname (bind-address): ‘*’; port: 3306
            2019-01-31T20:52:00.342784Z 0 [Note] IPv6 is available.
            2019-01-31T20:52:00.342809Z 0 [Note] – ‘::’ resolves to ‘::’;
            2019-01-31T20:52:00.342852Z 0 [Note] Server socket created on IP: ‘::’.
            2019-01-31T20:52:00.347479Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190131 12:52:00
            2019-01-31T20:52:00.434577Z 0 [Note] Failed to start slave threads for channel ”
            2019-01-31T20:52:00.624999Z 0 [Note] Event Scheduler: Loaded 0 events
            2019-01-31T20:52:00.625196Z 0 [Note] /usr/sbin/mysqld: ready for connections.
            Version: ‘5.7.25-log’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 MySQL Community Server (GPL)
            2019-01-31T21:04:46.190220Z 2 [Note] Access denied for user ‘root’@’localhost’ (using password: YES)
            2019-01-31T21:04:54.982709Z 3 [Note] Access denied for user ‘root’@’localhost’ (using password: YES)
            2019-01-31T21:08:33.187233Z 5 [Note] Got packets out of order
            2019-01-31T21:13:14.882778Z 8 [Note] Access denied for user ‘root’@’USECTSTMGTDEV01’ (using password: YES)
            2019-01-31T21:13:15.024867Z 9 [Note] Access denied for user ‘root’@’USECTSTMGTDEV01’ (using password: YES)
            2019-01-31T21:17:08.262157Z 11 [Note] Access denied for user ‘root’@’localhost’ (using password: NO)
            2019-01-31T21:21:51.798144Z 13 [Note] Got packets out of order
            2019-01-31T21:32:44.839107Z 21 [Note] Got packets out of order
            2019-01-31T21:40:37.726464Z 28 [Note] Got packets out of order
            2019-01-31T22:04:53.596510Z 35 [Note] Got packets out of order
            2019-01-31T22:06:18.515916Z 40 [Note] Plugin group_replication reported: ‘Group communication SSL configuration: group_replication_ssl_mode: “DISABLED”‘
            2019-01-31T22:06:18.516242Z 40 [Note] Plugin group_replication reported: ‘[GCS] Added automatically IP ranges 10.110.28.21/24,127.0.0.1/8 to the whitelist’
            2019-01-31T22:06:18.516687Z 40 [Note] Plugin group_replication reported: ‘[GCS] Translated ‘USECTSTMGTDEV01′ to 10.110.28.21’
            2019-01-31T22:06:18.516979Z 40 [Warning] Plugin group_replication reported: ‘[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.’
            2019-01-31T22:06:18.517167Z 40 [Note] Plugin group_replication reported: ‘[GCS] SSL was not enabled’
            2019-01-31T22:06:18.517218Z 40 [Note] Plugin group_replication reported: ‘Initialized group communication with configuration: group_replication_group_name: “a7281583-24ea-11e9-82f6-005056975259”; group_replication_local_address: “USECTSTMGTDEV01:33061”; group_replication_group_seeds: “”; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: “AUTOMATIC”‘
            2019-01-31T22:06:18.517367Z 40 [Note] Plugin group_replication reported: ‘[GCS] Configured number of attempts to join: 0’
            2019-01-31T22:06:18.517388Z 40 [Note] Plugin group_replication reported: ‘[GCS] Configured time between attempts to join: 5 seconds’
            2019-01-31T22:06:18.517476Z 40 [Note] Plugin group_replication reported: ‘Member configuration: member_id: 1; member_uuid: “58836627-2351-11e9-bfae-005056975259”; single-primary mode: “true”; group_replication_auto_increment_increment: 7; ‘
            2019-01-31T22:06:18.518404Z 42 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-01-31T22:06:18.537896Z 40 [Note] Plugin group_replication reported: ‘Group Replication applier module successfully initialized!’
            2019-01-31T22:06:18.537963Z 45 [Note] Slave SQL thread for channel ‘group_replication_applier’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./USECTSTMGTDEV01-relay-bin-group_replication_applier.000014’ position: 688
            2019-01-31T22:06:18.584684Z 0 [Note] Plugin group_replication reported: ‘XCom protocol version: 3’
            2019-01-31T22:06:18.584748Z 0 [Note] Plugin group_replication reported: ‘XCom initialized and ready to accept incoming connections on port 33061’
            2019-01-31T22:06:19.591674Z 48 [Note] Plugin group_replication reported: ‘Only one server alive. Declaring this server as online within the replication group’
            2019-01-31T22:06:19.591857Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306 on view 15489723795905315:1.’
            2019-01-31T22:06:19.595708Z 0 [Note] Plugin group_replication reported: ‘This server was declared online within the replication group’
            2019-01-31T22:06:19.595835Z 0 [Note] Plugin group_replication reported: ‘A new primary with address USECTSTMGTDEV01:3306 was elected, enabling conflict detection until the new primary applies all relay logs.’
            2019-01-31T22:06:19.595931Z 50 [Note] Plugin group_replication reported: ‘This server is working as primary member.’
            2019-01-31T22:06:23.972284Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: USECTSTMGTDEV02:3306’
            2019-01-31T22:06:23.972573Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306, USECTSTMGTDEV02:3306 on view 15489723795905315:2.’
            2019-01-31T22:06:24.043492Z 57 [Note] Start binlog_dump to master_thread_id(57) slave_server(2), pos(, 4)
            2019-01-31T22:06:54.044878Z 57 [Note] Aborted connection 57 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:07:24.114325Z 59 [Note] Start binlog_dump to master_thread_id(59) slave_server(2), pos(, 4)
            2019-01-31T22:07:54.116071Z 59 [Note] Aborted connection 59 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:08:24.183563Z 61 [Note] Start binlog_dump to master_thread_id(61) slave_server(2), pos(, 4)
            2019-01-31T22:08:54.185030Z 61 [Note] Aborted connection 61 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:09:24.250866Z 63 [Note] Start binlog_dump to master_thread_id(63) slave_server(2), pos(, 4)
            2019-01-31T22:09:54.252344Z 63 [Note] Aborted connection 63 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:10:24.318082Z 65 [Note] Start binlog_dump to master_thread_id(65) slave_server(2), pos(, 4)
            2019-01-31T22:10:54.319727Z 65 [Note] Aborted connection 65 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:11:24.383390Z 67 [Note] Start binlog_dump to master_thread_id(67) slave_server(2), pos(, 4)
            2019-01-31T22:11:54.385148Z 67 [Note] Aborted connection 67 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:12:24.449206Z 69 [Note] Start binlog_dump to master_thread_id(69) slave_server(2), pos(, 4)
            2019-01-31T22:12:54.451025Z 69 [Note] Aborted connection 69 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:13:24.518289Z 71 [Note] Start binlog_dump to master_thread_id(71) slave_server(2), pos(, 4)
            2019-01-31T22:13:54.520034Z 71 [Note] Aborted connection 71 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:14:24.603202Z 73 [Note] Start binlog_dump to master_thread_id(73) slave_server(2), pos(, 4)
            2019-01-31T22:14:54.604977Z 73 [Note] Aborted connection 73 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:15:24.672946Z 75 [Note] Start binlog_dump to master_thread_id(75) slave_server(2), pos(, 4)
            2019-01-31T22:15:25.442132Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: USECTSTMGTDEV02:3306’
            2019-01-31T22:15:25.442341Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306 on view 15489723795905315:3.’
            2019-01-31T22:15:54.674701Z 75 [Note] Aborted connection 75 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0441065943’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-01-31T22:25:45.153384Z 42 [Note] Plugin group_replication reported: ‘Primary had applied all relay logs, disabled conflict detection’
            2019-01-31T23:16:14.027406Z 79 [Note] Got packets out of order
            2019-01-31T23:23:51.096030Z 92 [Note] Got packets out of order
            2019-01-31T23:24:51.181514Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: USECTSTMGTDEV02:3306’
            2019-01-31T23:24:51.181825Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV02:3306, USECTSTMGTDEV01:3306 on view 15489723795905315:4.’
            2019-01-31T23:24:51.498304Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: USECTSTMGTDEV02:3306’
            2019-01-31T23:24:51.498491Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306 on view 15489723795905315:5.’
            2019-01-31T23:28:27.510527Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: USECTSTMGTDEV02:3306’
            2019-01-31T23:28:27.510830Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV02:3306, USECTSTMGTDEV01:3306 on view 15489723795905315:6.’
            2019-01-31T23:28:28.525481Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: USECTSTMGTDEV02:3306’
            2019-01-31T23:28:28.525711Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306 on view 15489723795905315:7.’
            2019-01-31T23:50:21.610888Z 116 [Note] Got packets out of order
            2019-01-31T23:53:06.182308Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: USECTSTMGTDEV02:3306’
            2019-01-31T23:53:06.182601Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV02:3306, USECTSTMGTDEV01:3306 on view 15489723795905315:8.’
            2019-01-31T23:53:07.299037Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: USECTSTMGTDEV02:3306’
            2019-01-31T23:53:07.299241Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306 on view 15489723795905315:9.’
            2019-02-01T00:04:12.054563Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: USECTSTMGTDEV02:3306’
            2019-02-01T00:04:12.054828Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV02:3306, USECTSTMGTDEV01:3306 on view 15489723795905315:10.’
            2019-02-01T00:04:12.098391Z 137 [Note] Start binlog_dump to master_thread_id(137) slave_server(2), pos(, 4)
            2019-02-01T00:04:13.169798Z 137 [Note] Aborted connection 137 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:05:12.161405Z 140 [Note] Start binlog_dump to master_thread_id(140) slave_server(2), pos(, 4)
            2019-02-01T00:05:42.162979Z 140 [Note] Aborted connection 140 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:06:12.216365Z 147 [Note] Start binlog_dump to master_thread_id(147) slave_server(2), pos(, 4)
            2019-02-01T00:06:42.218320Z 147 [Note] Aborted connection 147 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:07:10.551834Z 0 [Note] Plugin group_replication reported: ‘Members joined the group: USECTSTMGTDEV03:3306’
            2019-02-01T00:07:10.552113Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV02:3306, USECTSTMGTDEV01:3306, USECTSTMGTDEV03:3306 on view 15489723795905315:11.’
            2019-02-01T00:07:10.605205Z 157 [Note] Start binlog_dump to master_thread_id(157) slave_server(3), pos(, 4)
            2019-02-01T00:07:11.644832Z 157 [Note] Aborted connection 157 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:07:12.291134Z 160 [Note] Start binlog_dump to master_thread_id(160) slave_server(2), pos(, 4)
            2019-02-01T00:07:42.293167Z 160 [Note] Aborted connection 160 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:08:10.727959Z 164 [Note] Start binlog_dump to master_thread_id(164) slave_server(3), pos(, 4)
            2019-02-01T00:08:12.361852Z 166 [Note] Start binlog_dump to master_thread_id(166) slave_server(2), pos(, 4)
            2019-02-01T00:08:40.729620Z 164 [Note] Aborted connection 164 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:08:42.363887Z 166 [Note] Aborted connection 166 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:09:10.852964Z 168 [Note] Start binlog_dump to master_thread_id(168) slave_server(3), pos(, 4)
            2019-02-01T00:09:12.426090Z 170 [Note] Start binlog_dump to master_thread_id(170) slave_server(2), pos(, 4)
            2019-02-01T00:09:40.855230Z 168 [Note] Aborted connection 168 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:09:42.428200Z 170 [Note] Aborted connection 170 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:10:10.980960Z 172 [Note] Start binlog_dump to master_thread_id(172) slave_server(3), pos(, 4)
            2019-02-01T00:10:12.502470Z 174 [Note] Start binlog_dump to master_thread_id(174) slave_server(2), pos(, 4)
            2019-02-01T00:10:40.983150Z 172 [Note] Aborted connection 172 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:10:42.504497Z 174 [Note] Aborted connection 174 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:11:11.100889Z 176 [Note] Start binlog_dump to master_thread_id(176) slave_server(3), pos(, 4)
            2019-02-01T00:11:12.573414Z 178 [Note] Start binlog_dump to master_thread_id(178) slave_server(2), pos(, 4)
            2019-02-01T00:11:41.103130Z 176 [Note] Aborted connection 176 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:11:42.575034Z 178 [Note] Aborted connection 178 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:12:11.225337Z 180 [Note] Start binlog_dump to master_thread_id(180) slave_server(3), pos(, 4)
            2019-02-01T00:12:12.651230Z 182 [Note] Start binlog_dump to master_thread_id(182) slave_server(2), pos(, 4)
            2019-02-01T00:12:41.227624Z 180 [Note] Aborted connection 180 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:12:42.653637Z 182 [Note] Aborted connection 182 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:13:11.351860Z 184 [Note] Start binlog_dump to master_thread_id(184) slave_server(3), pos(, 4)
            2019-02-01T00:13:12.727023Z 186 [Note] Start binlog_dump to master_thread_id(186) slave_server(2), pos(, 4)
            2019-02-01T00:13:13.314286Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: USECTSTMGTDEV02:3306’
            2019-02-01T00:13:13.314520Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306, USECTSTMGTDEV03:3306 on view 15489723795905315:12.’
            2019-02-01T00:13:41.354145Z 184 [Note] Aborted connection 184 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:13:42.728714Z 186 [Note] Aborted connection 186 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430621196’ host: ‘USECTSTMGTDEV02’ (failed on flush_net())
            2019-02-01T00:14:11.472704Z 190 [Note] Start binlog_dump to master_thread_id(190) slave_server(3), pos(, 4)
            2019-02-01T00:14:41.474929Z 190 [Note] Aborted connection 190 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:15:11.629837Z 192 [Note] Start binlog_dump to master_thread_id(192) slave_server(3), pos(, 4)
            2019-02-01T00:15:41.632042Z 192 [Note] Aborted connection 192 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:16:11.747506Z 194 [Note] Start binlog_dump to master_thread_id(194) slave_server(3), pos(, 4)
            2019-02-01T00:16:12.208015Z 0 [Warning] Plugin group_replication reported: ‘Members removed from the group: USECTSTMGTDEV03:3306’
            2019-02-01T00:16:12.208206Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV01:3306 on view 15489723795905315:13.’
            2019-02-01T00:16:41.749742Z 194 [Note] Aborted connection 194 to db: ‘unconnected’ user: ‘mysql_innodb_cluster_r0430639200’ host: ‘USECTSTMGTDEV03’ (failed on flush_net())
            2019-02-01T00:27:30.368725Z 198 [Note] Access denied for user ‘dbauser’@’USECTSTMGTDEV01’ (using password: YES)
            2019-02-01T15:51:02.684568Z 200 [Note] Got packets out of order
            2019-02-01T17:30:00.067762Z 203 [Note] Got packets out of order

            ———————————————————————-

            I don't know why I am getting "failed on flush_net()"; it seems like a network issue?

            The slave is not any better.
            I keep getting these errors about creating the user on localhost.

            ————————————————————————-
            2019-01-31T23:57:56.372680Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
            2019-01-31T23:57:56.379232Z 0 [Note] InnoDB: Waiting for purge to start
            2019-01-31T23:57:56.429755Z 0 [Note] InnoDB: 5.7.25 started; log sequence number 2551202
            2019-01-31T23:57:56.430690Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
            2019-01-31T23:57:56.432933Z 0 [Note] Plugin ‘FEDERATED’ is disabled.
            2019-01-31T23:57:56.461976Z 0 [Note] InnoDB: Buffer pool(s) load completed at 190131 15:57:56
            2019-01-31T23:57:56.507380Z 0 [ERROR] Plugin group_replication reported: ‘The group name option is mandatory’
            2019-01-31T23:57:56.507469Z 0 [ERROR] Plugin group_replication reported: ‘Unable to start Group Replication on boot’
            2019-01-31T23:57:56.517052Z 0 [Note] Found ca.pem, server-cert.pem and server-key.pem in data directory. Trying to enable SSL support using them.
            2019-01-31T23:57:56.517985Z 0 [Warning] CA certificate ca.pem is self signed.
            2019-01-31T23:57:56.523360Z 0 [Note] Server hostname (bind-address): ‘*’; port: 3306
            2019-01-31T23:57:56.523574Z 0 [Note] IPv6 is available.
            2019-01-31T23:57:56.523623Z 0 [Note] – ‘::’ resolves to ‘::’;
            2019-01-31T23:57:56.523845Z 0 [Note] Server socket created on IP: ‘::’.
            2019-01-31T23:57:56.559351Z 0 [Note] Failed to start slave threads for channel ”
            2019-01-31T23:57:56.581549Z 0 [Note] Event Scheduler: Loaded 0 events
            2019-01-31T23:57:56.582265Z 0 [Note] /usr/sbin/mysqld: ready for connections.
            Version: ‘5.7.25-log’ socket: ‘/var/lib/mysql/mysql.sock’ port: 3306 MySQL Community Server (GPL)
            2019-02-01T00:04:07.748981Z 3 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:04:07.762893Z 3 [Note] Plugin group_replication reported: ‘Group communication SSL configuration: group_replication_ssl_mode: “DISABLED”‘
            2019-02-01T00:04:07.763382Z 3 [Note] Plugin group_replication reported: ‘[GCS] Added automatically IP ranges 10.110.28.22/24,127.0.0.1/8 to the whitelist’
            2019-02-01T00:04:07.764238Z 3 [Note] Plugin group_replication reported: ‘[GCS] Translated ‘USECTSTMGTDEV02′ to 10.110.28.22’
            2019-02-01T00:04:07.764700Z 3 [Warning] Plugin group_replication reported: ‘[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.’
            2019-02-01T00:04:07.764941Z 3 [Note] Plugin group_replication reported: ‘[GCS] SSL was not enabled’
            2019-02-01T00:04:07.765001Z 3 [Note] Plugin group_replication reported: ‘Initialized group communication with configuration: group_replication_group_name: “a7281583-24ea-11e9-82f6-005056975259”; group_replication_local_address: “USECTSTMGTDEV02:33061”; group_replication_group_seeds: “USECTSTMGTDEV01:33061”; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: “AUTOMATIC”‘
            2019-02-01T00:04:07.765157Z 3 [Note] Plugin group_replication reported: ‘[GCS] Configured number of attempts to join: 0’
            2019-02-01T00:04:07.765193Z 3 [Note] Plugin group_replication reported: ‘[GCS] Configured time between attempts to join: 5 seconds’
            2019-02-01T00:04:07.765332Z 3 [Note] Plugin group_replication reported: ‘Member configuration: member_id: 2; member_uuid: “02738d90-25ac-11e9-a792-0050569760f0”; single-primary mode: “true”; group_replication_auto_increment_increment: 7; ‘
            2019-02-01T00:04:07.766791Z 5 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:04:07.774731Z 3 [Note] Plugin group_replication reported: ‘Group Replication applier module successfully initialized!’
            2019-02-01T00:04:07.774817Z 8 [Note] Slave SQL thread for channel ‘group_replication_applier’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./USECTSTMGTDEV01-relay-bin-group_replication_applier.000001’ position: 4
            2019-02-01T00:04:07.883106Z 0 [Note] Plugin group_replication reported: ‘XCom protocol version: 3’
            2019-02-01T00:04:07.884161Z 0 [Note] Plugin group_replication reported: ‘XCom initialized and ready to accept incoming connections on port 33061’
            2019-02-01T00:04:11.795006Z 0 [ERROR] Plugin group_replication reported: ‘This member has more executed transactions than those present in the group. Local transactions: 02738d90-25ac-11e9-a792-0050569760f0:1-3 > Group transactions: 02738d90-25ac-11e9-a792-0050569760f0:2,
            58836627-2351-11e9-bfae-005056975259:1-85,
            a7281583-24ea-11e9-82f6-005056975259:1-50’
            2019-02-01T00:04:11.795393Z 0 [Warning] Plugin group_replication reported: ‘The member contains transactions not present in the group. It is only allowed to join due to group_replication_allow_local_disjoint_gtids_join option’
            2019-02-01T00:04:11.795536Z 3 [Note] Plugin group_replication reported: ‘This server is working as secondary member with primary member address USECTSTMGTDEV01:3306.’
            2019-02-01T00:04:11.796012Z 0 [ERROR] Plugin group_replication reported: ‘Group contains 2 members which is greater than group_replication_auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.’
            2019-02-01T00:04:11.797267Z 11 [Note] Plugin group_replication reported: ‘Establishing group recovery connection with a possible donor. Attempt 1/10’
            2019-02-01T00:04:11.797340Z 0 [Note] Plugin group_replication reported: ‘Group membership changed to USECTSTMGTDEV02:3306, USECTSTMGTDEV01:3306 on view 15489723795905315:10.’
            2019-02-01T00:04:11.821968Z 11 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=”, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=’USECTSTMGTDEV01′, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:04:11.829818Z 11 [Note] Plugin group_replication reported: ‘Establishing connection to a group replication recovery donor 58836627-2351-11e9-bfae-005056975259 at USECTSTMGTDEV01 port: 3306.’
            2019-02-01T00:04:11.830862Z 13 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the ‘START SLAVE Syntax’ in the MySQL Manual for more information.
            2019-02-01T00:04:11.832643Z 14 [Note] Slave SQL thread for channel ‘group_replication_recovery’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./USECTSTMGTDEV01-relay-bin-group_replication_recovery.000001’ position: 4
            2019-02-01T00:04:11.834054Z 13 [Note] Slave I/O thread for channel ‘group_replication_recovery’: connected to master ‘mysql_innodb_cluster_r0430621196@USECTSTMGTDEV01:3306’,replication started in log ‘FIRST’ at position 4
            2019-02-01T00:04:11.848863Z 14 [ERROR] Slave SQL for channel ‘group_replication_recovery’: Error ‘Operation CREATE USER failed for ‘dbauser’@’localhost” on query. Default database: ”. Query: ‘CREATE USER ‘dbauser’@’localhost’ IDENTIFIED WITH ‘mysql_native_password’ AS ‘*477F69E72C56952DDBA6A2AECB835641A2EF5912”, Error_code: 1396
            2019-02-01T00:04:11.849076Z 14 [Warning] Slave: Operation CREATE USER failed for ‘dbauser’@’localhost’ Error_code: 1396
            2019-02-01T00:04:11.849150Z 14 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with “SLAVE START”. We stopped at log ‘USECTSTMGTDEV01-bin.000001’ position 150.
            2019-02-01T00:04:11.849162Z 11 [Note] Plugin group_replication reported: ‘Terminating existing group replication donor connection and purging the corresponding logs.’
            2019-02-01T00:04:11.849902Z 13 [Note] Slave I/O thread exiting for channel ‘group_replication_recovery’, read up to log ‘USECTSTMGTDEV01-bin.000002’, position 4
            2019-02-01T00:04:11.864720Z 11 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=’USECTSTMGTDEV01′, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:04:11.872135Z 11 [Note] Plugin group_replication reported: ‘Retrying group recovery connection with another donor. Attempt 2/10’
            2019-02-01T00:05:11.889746Z 11 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=’USECTSTMGTDEV01′, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:05:11.893425Z 11 [Note] Plugin group_replication reported: ‘Establishing connection to a group replication recovery donor 58836627-2351-11e9-bfae-005056975259 at USECTSTMGTDEV01 port: 3306.’
            2019-02-01T00:05:11.893968Z 16 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the ‘START SLAVE Syntax’ in the MySQL Manual for more information.
            2019-02-01T00:05:11.895271Z 17 [Note] Slave SQL thread for channel ‘group_replication_recovery’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./USECTSTMGTDEV01-relay-bin-group_replication_recovery.000001’ position: 4
            2019-02-01T00:05:11.897037Z 16 [Note] Slave I/O thread for channel ‘group_replication_recovery’: connected to master ‘mysql_innodb_cluster_r0430621196@USECTSTMGTDEV01:3306’,replication started in log ‘FIRST’ at position 4
            2019-02-01T00:05:11.908106Z 17 [ERROR] Slave SQL for channel ‘group_replication_recovery’: Error ‘Operation CREATE USER failed for ‘dbauser’@’localhost” on query. Default database: ”. Query: ‘CREATE USER ‘dbauser’@’localhost’ IDENTIFIED WITH ‘mysql_native_password’ AS ‘*477F69E72C56952DDBA6A2AECB835641A2EF5912”, Error_code: 1396
            2019-02-01T00:05:11.908230Z 17 [Warning] Slave: Operation CREATE USER failed for ‘dbauser’@’localhost’ Error_code: 1396
            2019-02-01T00:05:11.908267Z 17 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with “SLAVE START”. We stopped at log ‘USECTSTMGTDEV01-bin.000001’ position 150.
            2019-02-01T00:05:11.908280Z 11 [Note] Plugin group_replication reported: ‘Terminating existing group replication donor connection and purging the corresponding logs.’
            2019-02-01T00:05:11.908595Z 16 [Note] Slave I/O thread exiting for channel ‘group_replication_recovery’, read up to log ‘USECTSTMGTDEV01-bin.000002’, position 4
            2019-02-01T00:05:11.916422Z 11 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=’USECTSTMGTDEV01′, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:05:11.919987Z 11 [Note] Plugin group_replication reported: ‘Retrying group recovery connection with another donor. Attempt 3/10’
            2019-02-01T00:05:19.374538Z 18 [Note] Got packets out of order
            2019-02-01T00:05:52.594751Z 20 [Note] Got packets out of order
            2019-02-01T00:06:11.938983Z 11 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=’USECTSTMGTDEV01′, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:06:11.947145Z 11 [Note] Plugin group_replication reported: ‘Establishing connection to a group replication recovery donor 58836627-2351-11e9-bfae-005056975259 at USECTSTMGTDEV01 port: 3306.’
            2019-02-01T00:06:11.948105Z 26 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the ‘START SLAVE Syntax’ in the MySQL Manual for more information.
            2019-02-01T00:06:11.951075Z 27 [Note] Slave SQL thread for channel ‘group_replication_recovery’ initialized, starting replication in log ‘FIRST’ at position 0, relay log ‘./USECTSTMGTDEV01-relay-bin-group_replication_recovery.000001’ position: 4
            2019-02-01T00:06:11.951559Z 26 [Note] Slave I/O thread for channel ‘group_replication_recovery’: connected to master ‘mysql_innodb_cluster_r0430621196@USECTSTMGTDEV01:3306’,replication started in log ‘FIRST’ at position 4
            2019-02-01T00:06:11.967249Z 27 [ERROR] Slave SQL for channel ‘group_replication_recovery’: Error ‘Operation CREATE USER failed for ‘dbauser’@’localhost” on query. Default database: ”. Query: ‘CREATE USER ‘dbauser’@’localhost’ IDENTIFIED WITH ‘mysql_native_password’ AS ‘*477F69E72C56952DDBA6A2AECB835641A2EF5912”, Error_code: 1396
            2019-02-01T00:06:11.967453Z 27 [Warning] Slave: Operation CREATE USER failed for ‘dbauser’@’localhost’ Error_code: 1396
            2019-02-01T00:06:11.967517Z 27 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with “SLAVE START”. We stopped at log ‘USECTSTMGTDEV01-bin.000001’ position 150.
            2019-02-01T00:06:11.967535Z 11 [Note] Plugin group_replication reported: ‘Terminating existing group replication donor connection and purging the corresponding logs.’
            2019-02-01T00:06:11.968506Z 26 [Note] Slave I/O thread exiting for channel ‘group_replication_recovery’, read up to log ‘USECTSTMGTDEV01-bin.000002’, position 4
            2019-02-01T00:06:11.985494Z 11 [Note] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_recovery’ executed’. Previous state master_host=’USECTSTMGTDEV01′, master_port= 3306, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
            2019-02-01T00:06:11.994026Z 11 [Note] Plugin group_replication reported: ‘Retrying group recovery connection with another donor. Attempt 4/10’
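
            The Error_code 1396 loop above usually means that 'dbauser'@'localhost' already exists locally on the joining node, so the group_replication_recovery channel cannot re-apply the CREATE USER statement it receives from the donor. One possible way past it, assuming that local account can safely be recreated by recovery, is to drop it on the joiner only, without writing the change to the binary log:

            -- on the joining instance only
            mysql> SET SQL_LOG_BIN = 0;
            mysql> DROP USER IF EXISTS 'dbauser'@'localhost';
            mysql> SET SQL_LOG_BIN = 1;
            mysql> START GROUP_REPLICATION;   -- or retry addInstance()/rejoinInstance() from the Shell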

  10. Hi Lefred,
    I am facing an issue while setting up a MySQL 8.0.18 InnoDB Cluster. All cluster nodes have SELinux disabled, but I am still facing the same issue.

    2019-11-07T04:17:47.087864+09:00 41 [Warning] [MY-011735] [Repl] Plugin group_replication reported: ‘[GCS] Automatically adding IPv6 localhost address to the whitelist. It is mandatory that it is added.’
    2019-11-07T04:17:47.091315+09:00 44 [System] [MY-010597] [Repl] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
    2019-11-07T04:17:47.228257+09:00 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: ‘[GCS] Unable to announce tcp port 33061. Port already in use?’
    2019-11-07T04:17:47.228456+09:00 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: ‘[GCS] Error joining the group while waiting for the network layer to become ready.’
    2019-11-07T04:17:47.228764+09:00 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: ‘[GCS] The member was unable to join the group. Local port: 33061’
    2019-11-07T04:18:47.145430+09:00 41 [ERROR] [MY-011640] [Repl] Plugin group_replication reported: ‘Timeout on wait for view after joining group’
    2019-11-07T04:18:47.145557+09:00 41 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: ‘[GCS] The member is leaving a group without being on one.’
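
    When GCS reports "Unable to announce tcp port 33061. Port already in use?", it is worth checking what is already bound to that port on the node; a quick check, assuming a Linux host with ss or netstat available:

    # ss -ltnp | grep 33061
    # netstat -ltnp | grep 33061   # alternative if ss is not installed

    A leftover mysqld, another group replication member on the same host, or a stale process still listening on 33061 would explain the error; otherwise a different port can be chosen in group_replication_local_address for that instance.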

  11. Hi Lefred,
    First, I have used your excellent tutorial for creating an InnoDB cluster, and I want to say thank you for your work.
    Now I'm experiencing a strange issue on an 8.0.25 InnoDB cluster with 2 nodes on one subnet and a third node on another subnet, in another datacenter, connected via IPsec (latency ~7 ms between these DCs). The nodes all run Windows Server 2019. Creating the cluster and adding a node on the first 2 servers (same DC, same subnet) works fine, but the third server can't be added, with the same error as above:

    Adding instance to the cluster…

    ERROR: Unable to start Group Replication for instance ‘SRVMYSQL1:3306’.
    The MySQL error_log contains the following messages:
    2022-01-31 15:29:29.036570 [System] [MY-013587] Plugin group_replication reported: ‘Plugin ‘group_replication’ is starting.’
    2022-01-31 15:29:29.054004 [System] [MY-010597] ‘CHANGE MASTER TO FOR CHANNEL ‘group_replication_applier’ executed’. Previous state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”. New state master_host=”, master_port= 0, master_log_file=”, master_log_pos= 4, master_bind=”.
    2022-01-31 15:29:29.268779 [Error] [MY-011735] Plugin group_replication reported: ‘[GCS] Error on opening a connection to SRVMYSQL2:33061 on local port: 33061.’
    […]
    2022-01-31 15:30:35.221187 [Error] [MY-011735] Plugin group_replication reported: ‘[GCS] Error on opening a connection to SRVMYSQL2:33061 on local port: 33061.’
    2022-01-31 15:30:35.241784 [Error] [MY-011735] Plugin group_replication reported: ‘[GCS] Error on opening a connection to SRVMYSQL3:33061 on local port: 33061.’
    2022-01-31 15:30:35.242766 [Error] [MY-011735] Plugin group_replication reported: ‘[GCS] Error connecting to all peers. Member join failed. Local port: 33061’
    2022-01-31 15:30:36.273717 [Error] [MY-011735] Plugin group_replication reported: ‘[GCS] The member was unable to join the group. Local port: 33061’
    Cluster.addInstance: Group Replication failed to start: MySQL Error 3092 (HY000): SRVMYSQL1:3306: The server is not configured properly to be an active member of the group. Please see more details on error log. (RuntimeError)

    The firewalls are correctly configured; we have also tried without any firewall, with no success. We uninstalled the antivirus too, no better. There is no firewall between the DCs, and telnet shows there is no problem opening the ports. In fact, the two servers (master -> node to be added) begin to talk and group replication seems to start on the slave (with netstat, port 33061 shows up on the slave when addInstance is launched on the master), but I get this error after a while.

    I can't find what fails. Do you see something I have missed? Have you ever heard of IPsec possibly blocking something like this? Any idea is welcome, I'm stuck…

    Thanks in advance,
    Frank

      • Hi,
        thanks for your answer, and, my bad 🙁
        I think I have an idea of the problem, but I still need to confirm it: the subnets are not in the same range (the first servers are in 172.16.100.0/23, the third is in 172.16.200.0/23). It seems that the "AUTOMATIC" value of group_replication_ip_allowlist does not cover all ranges, just the local ones 🙁
        I will test it, but the configuration is not clear to me. Do you know: must I recreate the cluster? Must I configure this option on ALL nodes (via addInstance?)? And how do I make it static: in my.cnf, or just with a Shell command?
        Any information is welcome; the documentation I have found is unclear about this.
        Frank

      • OK, it's confirmed. Adding the option on the master:
        mysql> STOP GROUP_REPLICATION;
        mysql> SET GLOBAL group_replication_ip_allowlist="172.16.100.0/23, 172.16.200.0/23";
        mysql> START GROUP_REPLICATION;

        and passing ipAllowlist to addInstance did the trick.
        Now I need some guidance on the right way to make it static across a reboot or a primary change.
        Frank
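
        To make the allowlist static, a minimal sketch, assuming MySQL 8.0.22 or later, where the variable is named group_replication_ip_allowlist and SET PERSIST is available (older releases use group_replication_ip_whitelist):

        -- run on every member so the value survives a restart and a primary change
        mysql> SET PERSIST group_replication_ip_allowlist = "172.16.100.0/23,172.16.200.0/23";

        The same value can also be placed under [mysqld] in my.ini/my.cnf instead, and passing the ipAllowlist option to addInstance(), as done above, covers the join itself.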

