MySQL InnoDB Cluster: MySQL Shell starter guide


Earlier this week, MySQL Shell 1.0.8 was released. This is the first Release Candidate of this major piece of MySQL InnoDB Cluster.

Some commands have been changed and some new ones were added.

For example, the following useful commands were added:

  • dba.checkInstanceConfiguration()
  • cluster.checkInstanceState()
  • dba.rebootClusterFromCompleteOutage()

So let's have a look at how to use the new MySQL Shell to create a MySQL InnoDB Cluster.

Action Plan

We have 3 blank Linux servers: mysql1, mysql2 and mysql3, all running an RPM-based Linux 7 distribution (Oracle Linux 7, CentOS 7, …).

We will install the required MySQL yum repositories and the needed packages.

We will use MySQL Shell to set up our MySQL InnoDB Cluster.

Packages

To be able to install our cluster, we will first install the repository from the MySQL release package. For more information related to MySQL’s installation or if you are using another OS, please check our online documentation.

On all 3 servers, we do:

# rpm -ivh https://dev.mysql.com/get/mysql57-community-release-el7-9.noarch.rpm
# yum install -y mysql-community-server

The commands above install the MySQL Community yum repositories and MySQL Community Server 5.7.17, the latest GA version at the time of this post.

Now we have to install the Shell. As this tool is not yet GA, we need to use another repository that has been installed but not enabled: mysql-tools-preview.

# yum install -y mysql-shell --enablerepo=mysql-tools-preview

We are done with the installation. Now let’s initialize MySQL and start it.

Starting MySQL

Before being able to start MySQL, we need to create all the necessary folders and system tables. This process is called MySQL initialization. Let's proceed without generating a temporary root password, as it's easier and faster for this demonstration. However, I highly recommend using a strong root password.

When the initialization is done, we can start MySQL. So on all the future nodes, you can proceed like this:

# mysqld --initialize-insecure -u mysql --datadir /var/lib/mysql/
# systemctl start mysqld
# systemctl status mysqld
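
If you prefer the default, more secure initialization instead, here is a quick sketch of the alternative (assuming the RPM packages' default error log location, /var/log/mysqld.log):

# mysqld --initialize -u mysql --datadir /var/lib/mysql/
# systemctl start mysqld
# grep 'temporary password' /var/log/mysqld.log

You would then change that temporary root password with ALTER USER on your first connection.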

MySQL InnoDB Cluster Instances Configuration

We have now everything we need to start working in the MySQL Shell to configure all the members of our InnoDB Cluster.

First, we will check the configuration of one of our MySQL servers. Some changes are required; we will perform them using the Shell and then restart mysqld:

# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
...
mysql-js> dba.configureLocalInstance()
... here please create a dedicated user and password to admin the cluster (option 2) ...
mysql-js> \q

# systemctl restart mysqld

Now MySQL has all the mandatory settings required to run Group Replication. We can verify the configuration again in the Shell with the dba.checkInstanceConfiguration() function.

We now have to proceed the same way on all the other nodes. Please use the same credentials when you create the user to manage your cluster; I used 'fred'@'%' as an example. As you can't configure a MySQL Server remotely, you have to run the Shell locally on every node to run dba.configureLocalInstance() and restart mysqld, as sketched below.
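
As a sketch, the sequence to repeat locally on mysql2 and mysql3 is the same as above (assuming you create the same 'fred' admin account with the same password on each node):

# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')
mysql-js> dba.configureLocalInstance()
... create the same dedicated cluster admin user (option 2) ...
mysql-js> \q

# systemctl restart mysqld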

MySQL InnoDB Cluster Creation

Now that all the nodes have been restarted with the correct configuration, we can create the cluster. On one of the instances, we connect and create the cluster, again using the Shell. I did it on mysql1 and used its IP address, as its hostname also resolves to the loopback interface:

# mysqlsh
mysql-js> var i1='fred@192.168.90.2:3306'
mysql-js> var i2='fred@mysql2:3306'
mysql-js> var i3='fred@mysql3:3306'
mysql-js> shell.connect(i1)
mysql-js> var cluster=dba.createCluster('mycluster')
mysql-js> cluster.status()
...

We can now validate that the dataset on the other instances is correct (no extra transactions have been executed). This is done by validating the GTIDs. It can be done remotely, so I keep using the MySQL Shell session I've opened on mysql1:

mysql-js> cluster.checkInstanceState(i2)
mysql-js> cluster.checkInstanceState(i3)
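
For an instance whose GTID set is compatible with the cluster, the function reports it as valid. As an illustration only (the exact wording and the "reason" value depend on the instance state and the Shell version), the output looks roughly like this:

The instance 'mysql2:3306' is valid for the cluster.
The instance is fully recoverable.

{
    "reason": "recoverable",
    "state": "ok"
}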

When the validation passes successfully, it's time to add the two other nodes to our cluster:

mysql-js> cluster.addInstance(i2)
mysql-js> cluster.addInstance(i3)
mysql-js> cluster.status()

Perfect! We used MySQL Shell to create this MySQL InnoDB Cluster.

Now let's see this on video with all the output of the commands:

In the next post, I will show you how to use the Shell to automate the creation of a MySQL InnoDB Cluster using Puppet.


18 thoughts on “MySQL InnoDB Cluster: MySQL Shell starter guide”

  1. Hi Fred!

    AFAIK, a default installation of RHEL / OEL / CentOS 7 will include a “mariadb-libs-5.5” package, and the MySQL 5.7 packages will conflict with it.
    In your video, I’m missing the signs of this conflict as well as your actions to handle it.
    I know the conflict can be solved in a clean way, but I’d like to see your way of doing that.

    Regards,
    Jörg

    1. Hi Jörg,

      Yum sees the conflict and deals with it:

      [root@mysql1 ~]# yum install mysql-community-server
      Loaded plugins: fastestmirror
      mysql-connectors-community | 2.5 kB 00:00:00
      mysql-tools-community | 2.5 kB 00:00:00
      mysql57-community | 2.5 kB 00:00:00
      (1/3): mysql-tools-community/x86_64/primary_db | 32 kB 00:00:00
      (2/3): mysql-connectors-community/x86_64/primary_db | 13 kB 00:00:00
      (3/3): mysql57-community/x86_64/primary_db | 96 kB 00:00:00
      Loading mirror speeds from cached hostfile
      * base: centos.mirror.nucleus.be
      * epel: nl.mirror.babylon.network
      * extras: mirrors.ircam.fr
      * updates: mirrors.ircam.fr
      Resolving Dependencies
      --> Running transaction check
      ---> Package mysql-community-server.x86_64 0:5.7.17-1.el7 will be installed
      --> Processing Dependency: mysql-community-common(x86-64) = 5.7.17-1.el7 for package: mysql-community-server-5.7.17-1.el7.x86_64
      --> Processing Dependency: mysql-community-client(x86-64) >= 5.7.9 for package: mysql-community-server-5.7.17-1.el7.x86_64
      --> Running transaction check
      ---> Package mysql-community-client.x86_64 0:5.7.17-1.el7 will be installed
      --> Processing Dependency: mysql-community-libs(x86-64) >= 5.7.9 for package: mysql-community-client-5.7.17-1.el7.x86_64
      ---> Package mysql-community-common.x86_64 0:5.7.17-1.el7 will be installed
      --> Running transaction check
      ---> Package mariadb-libs.x86_64 1:5.5.50-1.el7_2 will be obsoleted
      --> Processing Dependency: libmysqlclient.so.18()(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
      --> Processing Dependency: libmysqlclient.so.18(libmysqlclient_18)(64bit) for package: 2:postfix-2.10.1-6.el7.x86_64
      ---> Package mysql-community-libs.x86_64 0:5.7.17-1.el7 will be obsoleting
      --> Running transaction check
      ---> Package mysql-community-libs-compat.x86_64 0:5.7.17-1.el7 will be obsoleting
      --> Finished Dependency Resolution

      Dependencies Resolved

      ============================================================================================================================================================================
      Package Arch Version Repository Size
      ============================================================================================================================================================================
      Installing:
      mysql-community-libs x86_64 5.7.17-1.el7 mysql57-community 2.1 M
      replacing mariadb-libs.x86_64 1:5.5.50-1.el7_2
      mysql-community-libs-compat x86_64 5.7.17-1.el7 mysql57-community 2.0 M
      replacing mariadb-libs.x86_64 1:5.5.50-1.el7_2
      mysql-community-server x86_64 5.7.17-1.el7 mysql57-community 162 M
      Installing for dependencies:
      mysql-community-client x86_64 5.7.17-1.el7 mysql57-community 24 M
      mysql-community-common x86_64 5.7.17-1.el7 mysql57-community 271 k

      Transaction Summary
      ============================================================================================================================================================================
      Install 3 Packages (+2 Dependent packages)

      Total download size: 190 M
      Is this ok [y/d/N]:

      However, you can still use the swap command:

      [root@mysql1 ~]# yum swap mariadb-libs mysql-community-libs

      If you have an issue with the swap command of yum, you can still use yum's shell to create a transaction like this:

      [root@mysql1 ~]# yum -q -q shell
      > remove mariadb-libs
      > install mysql-community-libs
      > transaction run

      I hope this helps,

      Cheers,
      Fred.

      1. Hi Fred,
        thanks for this command log – it helps.
        Obviously, I don’t use yum often enough: my memory still said that yum (like rpm) will only remove other packages when called to “upgrade”, not with “install”. So that has changed.
        Regards,
        Jörg

  2. Hi Lefred,

    Having some issues with the Cluster.

    I have a normal 3-node cluster and I've defined a cluster on my seed successfully.

    Issue: errors when joining the 2 other nodes to the cluster. See the checkInstanceState result below:

    mysql-js> cluster.checkInstanceState('root@165.233.206.43:3306')
    Please provide the password for 'root@165.233.206.43:3306':
    Analyzing the instance replication state...

    The instance '165.233.206.43:3306' is invalid for the cluster.
    The instance contains additional transactions in relation to the cluster.

    {
    "reason": "diverged",
    "state": "error"
    }

    I've set this up 20 times in my sandbox environment, but now I keep hitting this issue.

    1. Hi Lerato,

      This means that the data on the 2nd node is not the same as on the seed. In fact, this 2nd node has (at least one) more transaction (check the executed GTIDs). How did you provision the data on the nodes? If those are fresh instances, did you add the credentials on each node individually? If that is the case and it is the only transaction you did on the second instance, you can run RESET MASTER on it and join it again, as sketched below.
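
      As a sketch, and only if you are sure those extra transactions can be discarded (e.g. they are just the credential statements):

      mysql> SELECT @@GLOBAL.gtid_executed;  -- compare on the seed and on the joining node
      mysql> RESET MASTER;                   -- on the joining node only: clears its binary logs and GTID history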

      Thank you for testing MySQL Group Replication & InnoDB Cluster.

      Cheers,

  3. Hi,

    Yes, the data was exactly the same, as the 2 other servers are clones of the seed server. The only difference was that I ran the credentials individually on the other 2 nodes, hence the difference in GTIDs. I ran RESET MASTER on the 2nd node and now the node is ready to be added to the cluster. Now I get a different issue:

    mysql-js> cluster.checkInstanceState('root@165.233.206.40:3306')
    Please provide the password for 'root@165.233.206.40:3306':
    Analyzing the instance replication state...

    The instance '165.233.206.40:3306' is valid for the cluster.
    The instance is fully recoverable.

    {
    "reason": "recoverable",
    "state": "ok"
    }

    Then adding the node to the cluster:

    mysql-js> cluster.addInstance('root@165.233.206.40:3306');
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.

    Please provide the password for 'root@165.233.206.40:3306':
    Adding instance to the cluster ...

    Cluster.addInstance: ERROR:
    Group Replication join failed.
    ERROR: Group Replication plugin failed to start. Server error log contains the following errors:
    2017-09-14T10:24:46.749189Z 0 [ERROR] Plugin group_replication reported: '[GCS] Error connecting to the local group communication engine instance.'
    2017-09-14T10:24:47.774681Z 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 13306'
    2017-09-14T10:25:46.748722Z 37 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
    2017-09-14T10:25:46.748968Z 37 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'

    ERROR: Error joining instance to cluster: '165.233.206.40:3306' - Query failed. 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.. Query: START group_replication (RuntimeError)

    1. Hi Lerato,

      Could you check the error log on the 2nd node? It might also be trying to add the credentials it's getting from node1 that you already added manually.

      If this is the case, you should also perform the RESET MASTER on node 1.

      Next time, it's preferable to add the credentials before the backup, or to use SET SQL_LOG_BIN=0 in the session where you create those credentials (the ones dedicated to cluster authentication), as sketched below.
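
      A minimal sketch of that approach (the account name and grants below are just an example, adapt them to your cluster admin user), run on the instance you take the backup from:

      mysql> SET SQL_LOG_BIN=0;
      mysql> CREATE USER 'fred'@'%' IDENTIFIED BY '********';
      mysql> GRANT ALL PRIVILEGES ON *.* TO 'fred'@'%' WITH GRANT OPTION;
      mysql> SET SQL_LOG_BIN=1;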

      Cheers,

      1. Ok, thanks. I added the same credentials across all 3 nodes.

        The 2nd node's error log as it was being added to the cluster:

        2017-09-14T10:24:46.708780Z 37 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "9c99070e-9931-11e7-9744-005056830ccb"; group_replication_local_address: "165.233.206.40:13306"; group_replication_group_seeds: "165.233.206.43:13306"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
        2017-09-14T10:24:46.709380Z 39 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
        2017-09-14T10:24:46.748556Z 42 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './host-db03-relay-bin-group_replication_applier.000003' position: 4
        2017-09-14T10:24:46.748568Z 37 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
        2017-09-14T10:24:46.748688Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_init'
        2017-09-14T10:24:46.748744Z 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:13306 (socket=82).'
        2017-09-14T10:24:46.748771Z 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=82)!'
        2017-09-14T10:24:46.748780Z 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=82)!'
        2017-09-14T10:24:46.748961Z 0 [Note] Plugin group_replication reported: 'connecting to 165.233.206.40 13306'
        2017-09-14T10:24:46.748985Z 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:13306 (socket=82)!'
        2017-09-14T10:24:46.749074Z 0 [Note] Plugin group_replication reported: 'client connected to 165.233.206.40 13306 fd 92'
        2017-09-14T10:24:46.749114Z 0 [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 165.233.206.40 refused. Address is not in the IP whitelist.'
        2017-09-14T10:24:46.749189Z 0 [ERROR] Plugin group_replication reported: '[GCS] Error connecting to the local group communication engine instance.'
        2017-09-14T10:24:46.749208Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_exit'
        2017-09-14T10:24:46.749401Z 0 [Note] Plugin group_replication reported: 'Exiting xcom thread'
        2017-09-14T10:24:47.774681Z 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 13306'
        2017-09-14T10:25:46.748722Z 37 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
        2017-09-14T10:25:46.748925Z 37 [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
        2017-09-14T10:25:46.748968Z 37 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
        2017-09-14T10:25:46.749450Z 42 [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
        2017-09-14T10:25:46.750215Z 39 [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

        I'm unsure about the whitelist warning, as I created the cluster with all 3 nodes' IP addresses explicitly in the whitelist:

        mysql> show variables like 'group_replication_ip_whitelist';
        +--------------------------------+------------------------------------------------+
        | Variable_name                  | Value                                          |
        +--------------------------------+------------------------------------------------+
        | group_replication_ip_whitelist | 165.233.206.40, 165.233.206.41, 165.233.206.43 |
        +--------------------------------+------------------------------------------------+

  4. No iptables rules.

    All 3 nodes are on the same VLAN.

    Tested manually (mysqlsh) from the seed -> node 1 and that works 100%.

    Kind Regards
    LT

      1. Hi Lefred,

        Yes, I changed the seed.

        I hadn't tested that yet, and I'm using version 5.7.17.

        I had a breakthrough yesterday evening. Since I was seeing this warning before all the errors, I decided to follow it down the rabbit hole:

        2017-09-14T10:24:46.749114Z 0 [Warning] Plugin group_replication reported: '[GCS] Connection attempt from IP address 165.233.206.40 refused. Address is not in the IP whitelist.'

        So on each of the nodes I defined the following:

        SET GLOBAL group_replication_ip_whitelist = '165.233.206.40, 165.233.206.41, 165.233.206.43';

        This solved the issue, as I was able to successfully add the 2 nodes to the cluster.

        I think it may have something to do with my cluster definition:

        var cluster = dba.createCluster('My_Cluster', {ipWhitelist: "165.233.206.40, 165.233.206.41, 165.233.206.43"});

        Thank you for your help again, I will let you know how it goes.

        1. Hi Lefred,

          The cluster is alive and running perfectly. I just have one potential issue: I see this one line in the error log:

          [ERROR] Plugin group_replication reported: 'Group contains 2 members which is greater than group_replication_auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.'

          Should I be concerned with this error? The cluster is online and successfully switching between nodes with no issues. This came up when I added the second node to the cluster.

          Tx!
