MySQL InnoDB Cluster : MySQL Shell and the AdminAPI

As promised, here is a more detailed post on how to create a MySQL InnoDB Cluster using MySQL Shell and the AdminAPI.

First of all, and as good practice can never be repeated enough: whatever version of MySQL you are using, please use the latest MySQL Shell ! So even if you are running MySQL 5.7, please use MySQL Shell 8.0.11. See this compatibility matrix or the official one.

dba class

The AdminAPI can be accessed in the MySQL Shell via the dba object. The reference manual for this class is here. The Shell provides a very useful method on almost all objects: help() (this method is not yet documented).

Let’s call the help() in the Shell and see the output:

MySQL JS> dba.help()

The global variable 'dba' is used to access the AdminAPI functionality and
perform DBA operations. It is used for managing MySQL InnoDB clusters.

The following properties are currently supported.

 - verbose Enables verbose mode on the Dba operations.


The following functions are currently supported.

 - checkInstanceConfiguration      Validates an instance for MySQL InnoDB
                                   Cluster usage.
 - configureInstance               Validates and configures an instance for
                                   MySQL InnoDB Cluster usage.
 - configureLocalInstance          Validates and configures a local instance
                                   for MySQL InnoDB Cluster usage.
 - createCluster                   Creates a MySQL InnoDB cluster.
 - deleteSandboxInstance           Deletes an existing MySQL Server instance on
                                   localhost.
 - deploySandboxInstance           Creates a new MySQL Server instance on
                                   localhost.
 - dropMetadataSchema              Drops the Metadata Schema.
 - getCluster                      Retrieves a cluster from the Metadata Store.
 - help                            Provides help about this class and it's
                                   members
 - killSandboxInstance             Kills a running MySQL Server instance on
                                   localhost.
 - rebootClusterFromCompleteOutage Brings a cluster back ONLINE when all
                                   members are OFFLINE.
 - startSandboxInstance            Starts an existing MySQL Server instance on
                                   localhost.
 - stopSandboxInstance             Stops a running MySQL Server instance on
                                   localhost.

For more help on a specific function use: dba.help('<functionName>')

e.g. dba.help('deploySandboxInstance')

It’s also possible to get more info on a specific function, as shown above. Let’s try:

MySQL JS> dba.help('configureInstance')

Validates and configures an instance for MySQL InnoDB Cluster usage.

SYNTAX

  .configureInstance([instance][, options])

WHERE

  instance: An instance definition.
  options: Additional options for the operation.

RETURNS

  A descriptive text of the operation result.

DESCRIPTION

This function auto-configures the instance for InnoDB Cluster usage. If the
target instance already belongs to an InnoDB Cluster it errors out.

The instance definition is the connection data for the instance.

For additional information on connection data use \? connection.

Only TCP/IP connections are allowed for this function.

The options dictionary may contain the following options:

 - mycnfPath: The path to the MySQL configuration file of the instance.
 - outputMycnfPath: Alternative output path to write the MySQL configuration
   file of the instance.
 - password: The password to be used on the connection.
 - clusterAdmin: The name of the InnoDB cluster administrator user to be
   created. The supported format is the standard MySQL account name format.
 - clusterAdminPassword: The password for the InnoDB cluster administrator
   account.
 - clearReadOnly: boolean value used to confirm that super_read_only must be
   disabled.
 - interactive: boolean value used to disable the wizards in the command
   execution, i.e. prompts are not provided to the user and confirmation
   prompts are not shown.
 - restart: boolean value used to indicate that a remote restart of the target
   instance should be performed to finalize the operation.

The connection password may be contained on the instance definition, however,
it can be overwritten if it is specified on the options.

This function reviews the instance configuration to identify if it is valid for
usage in group replication and cluster. An exception is thrown if not.

If the instance was not valid for InnoDB Cluster and interaction is enabled,
before configuring the instance a prompt to confirm the changes is presented
and a table with the following information:

 - Variable: the invalid configuration variable.
 - Current Value: the current value for the invalid configuration variable.
 - Required Value: the required value for the configuration variable.

EXCEPTIONS

  ArgumentError in the following scenarios:

   - If 'interactive' is disabled and the instance parameter is empty.
   - If the instance definition is invalid.
   - If the instance definition is a connection dictionary but empty.
   - If the instance definition is a connection dictionary but any option is
     invalid.
   - If 'interactive' mode is disabled and the instance definition is missing
     the password.
   - If 'interactive' mode is enabled and the provided password is empty.

  RuntimeError in the following scenarios:

   - If the configuration file path is required but not provided or wrong.
   - If the instance accounts are invalid.
   - If the instance is offline.
   - If the instance is already part of a Replication Group.
   - If the instance is already part of an InnoDB Cluster.
   - If the given instance cannot be used for Group Replication.

For this setup, we will use MySQL 8.0.11 servers, each having one user (clusteradmin) created like this:

create user 'clusteradmin'@'%' identified by 'fred';
grant all privileges on *.* to 'clusteradmin'@'%' with grant option;
reset master;

This user can be created from your GUI interface if you are using nodes in the cloud. But we can also create such a user from the Shell, as explained in the next chapter.

If you want to grant only the required privileges, the list is available here:

GRANT SELECT ON mysql_innodb_cluster_metadata.* TO your_user@'%';
GRANT SELECT ON performance_schema.global_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_coordinator TO your_user@'%';
GRANT SELECT ON performance_schema.replication_applier_status_by_worker TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_configuration TO your_user@'%';
GRANT SELECT ON performance_schema.replication_connection_status TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_member_stats TO your_user@'%';
GRANT SELECT ON performance_schema.replication_group_members TO your_user@'%';
GRANT SELECT ON performance_schema.threads TO your_user@'%' WITH GRANT OPTION;

Configure the servers

The first step is to configure the MySQL servers (mysql1, mysql2 and mysql3) that will take part in the cluster. By default, a freshly installed MySQL instance needs some configuration changes before it can create or join an InnoDB Cluster.

Even if this is not required, it’s recommended to verify the configuration of the instances. This needs to be done against a running mysqld. From the Shell, we will verify the configuration of all the nodes using dba.checkInstanceConfiguration():

MySQL JS> dba.checkInstanceConfiguration('clusteradmin@mysql1')
Please provide the password for 'clusteradmin@mysql1': ****
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as mysql1
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Please use the dba.configureInstance() command to repair these issues.

{
    "config_errors": [
        {
            "action": "server_update", 
            "current": "CRC32", 
            "option": "binlog_checksum", 
            "required": "NONE"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "enforce_gtid_consistency", 
            "required": "ON"
        },
        {
            "action": "restart", 
            "current": "OFF", 
            "option": "gtid_mode", 
            "required": "ON"
        }
    ], 
    "errors": [], 
    "status": "error"
}

As we can see, 3 settings must be changed. GTIDs must be enabled, which is expected as Group Replication is based on GTIDs. We also need to disable the binlog checksum, as GCS/XCom already uses its own internal checksum of events.

We can also notice that some changes require a restart of mysqld.

You can also verify the two other nodes:

MySQL JS> dba.checkInstanceConfiguration('clusteradmin@mysql2')
...
MySQL JS> dba.checkInstanceConfiguration('clusteradmin@mysql3')
...

Now we need to make those configuration changes and restart the mysqld instances. As we use MySQL 8.0, we can configure and restart mysqld remotely (this relies on SET PERSIST and MySQL 8.0’s new RESTART statement) !

Let’s configure and restart mysql1 using dba.configureInstance(). In case you haven’t yet created a user to manage your cluster, it’s also possible to do that now !

We then have 2 options: configure the instance and create the user at the same time:

MySQL JS> dba.configureInstance('mysql1',{clusterAdmin: 'newclusteradmin@%',clusterAdminPassword: 'mypassword'})

Or use the user we already created earlier:

MySQL JS> dba.configureInstance('clusteradmin@mysql1')
Please provide the password for 'clusteradmin@mysql1': ****
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as mysql1
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Configuring instance...
The instance 'mysql1:3306' was configured for cluster usage.
Restarting MySQL...
MySQL server at mysql1:3306 was restarted.

We can verify in the datadir the modifications the server persisted to its configuration, in mysqld-auto.cnf:

# cat mysqld-auto.cnf  | jq
{
  "Version": 1,
  "mysql_server": {
    "mysql_server_static_options": {
      "binlog_checksum": {
        "Value": "NONE",
        "Metadata": {
          "Timestamp": 1530087517651637,
          "User": "clusteradmin",
          "Host": "mysql1"
        }
      },
      "enforce_gtid_consistency": {
        "Value": "ON",
        "Metadata": {
          "Timestamp": 1530087516620027,
          "User": "clusteradmin",
          "Host": "mysql1"
        }
      },
      "gtid_mode": {
        "Value": "ON",
        "Metadata": {
          "Timestamp": 1530087517697838,
          "User": "clusteradmin",
          "Host": "mysql1"
        }
      }
    }
  }
}
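
These persisted settings can also be queried in SQL through the performance_schema.persisted_variables table. A quick check, as a sketch (assuming a classic protocol session opened to mysql1):

MySQL JS> \c mysql://clusteradmin@mysql1
MySQL [mysql1+ ssl] JS> // persisted_variables lists everything written to mysqld-auto.cnf
MySQL [mysql1+ ssl] JS> session.runSql("SELECT * FROM performance_schema.persisted_variables").fetchAll()

It should list the same three variables with the values shown above.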

We can also configure mysql2 and mysql3 from the same MySQL Shell session, and we can also avoid the prompts:

MySQL JS> dba.configureInstance('clusteradmin:fred@mysql2',{'restart': true, 'interactive': false})
Configuring MySQL instance at mysql2:3306 for use in an InnoDB cluster...

This instance reports its own address as mysql2
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| binlog_checksum          | CRC32         | NONE           | Update the server variable                       |
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Configuring instance...
The instance 'mysql2:3306' was configured for cluster usage.
Restarting MySQL...
MySQL server at mysql2:3306 was restarted.

MySQL JS> dba.configureInstance('clusteradmin:fred@mysql3',{'restart': true, 'interactive': false})
...

After the restart of the instances, we can of course verify the configuration again:

MySQL JS> dba.checkInstanceConfiguration('clusteradmin@mysql1')
Please provide the password for 'clusteradmin@mysql1': ****
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as mysql1
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Checking whether existing tables comply with Group Replication requirements...
No incompatible tables detected

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'mysql1:3306' is valid for InnoDB cluster usage.

{
    "status": "ok"
}

This is perfect, we can now create our MySQL InnoDB Cluster !

Cluster creation

To create the cluster, we need to connect to one of the 3 nodes in the Shell and use this connection to create the cluster with the dba.createCluster() method, which returns a cluster object. It’s also at this time that we decide whether we want a Single-Primary (the default) or a Multi-Primary cluster.
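
As a side note, here is a minimal sketch of how a Multi-Primary cluster could be created instead, assuming MySQL Shell 8.0.11 where the option is still called multiMaster (it requires force: true as an extra confirmation):

MySQL JS> // sketch: Multi-Primary instead of the default Single-Primary
MySQL JS> cluster = dba.createCluster('lefredCluster', {multiMaster: true, force: true})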

Let’s create our Single-Primary Cluster using mysql1:

MySQL JS> \c clusteradmin@mysql1
Creating a session to 'clusteradmin@mysql1'
Enter password: ****
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 10 (X protocol)
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

MySQL [mysql1+ ssl] JS> cluster = dba.createCluster('lefredCluster')
A new InnoDB cluster will be created on instance 'clusteradmin@mysql1:3306'.

Validating instance at mysql1:3306...

This instance reports its own address as mysql1

Instance configuration is suitable.
Creating InnoDB cluster 'lefredCluster' on 'clusteradmin@mysql1:3306'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.


As you can see, the returned object is now stored in the variable ‘cluster’. In case you forgot to assign it to a variable, you can also retrieve it later using the dba.getCluster() method:

MySQL [mysql1+ ssl] JS> cluster = dba.getCluster()

As for the dba object, the cluster object also has a help() method available:

MySQL [mysql1+ ssl] JS> cluster.help()

The cluster object is the entry point to manage and monitor a MySQL InnoDB
cluster.

A cluster is a set of MySQLd Instances which holds the user's data.

It provides high-availability and scalability for the user's data.

The following properties are currently supported.

 - name Cluster name.


The following functions are currently supported.

 - addInstance                 Adds an Instance to the cluster.
 - checkInstanceState          Verifies the instance gtid state in relation
                               with the cluster.
 - describe                    Describe the structure of the cluster.
 - disconnect                  Disconnects all internal sessions used by the
                               cluster object.
 - dissolve                    Dissolves the cluster.
 - forceQuorumUsingPartitionOf Restores the cluster from quorum loss.
 - getName                     Retrieves the name of the cluster.
 - help                        Provides help about this class and it's members
 - rejoinInstance              Rejoins an Instance to the cluster.
 - removeInstance              Removes an Instance from the cluster.
 - rescan                      Rescans the cluster.
 - status                      Describe the status of the cluster.

For more help on a specific function use: cluster.help('<functionName>')

e.g. cluster.help('addInstance')

We can verify our cluster using the status() method:

MySQL [mysql1+ ssl] JS> cluster.status()

{
    "clusterName": "lefredCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}

We can now add a new instance to the cluster. But first, we can check the compatibility of the dataset (i.e. verify that the new node doesn’t have any transactions, GTIDs, that are unknown to the cluster). We will use the cluster.checkInstanceState() method to achieve this task:

MySQL [mysql1+ ssl] JS> cluster.checkInstanceState('clusteradmin@mysql2')
Please provide the password for 'clusteradmin@mysql2': ****
Analyzing the instance replication state...

The instance 'clusteradmin@mysql2' is valid for the cluster.
The instance is new to Group Replication.

{
    "reason": "new", 
    "state": "ok"
}

The state reported can be one of the following (we will see another example in the next chapter; a small scripted use of the result is also sketched right after this list):

  • new: if the instance doesn’t have any transactions
  • recoverable: if the instance’s executed GTIDs do not conflict with the executed GTIDs of the cluster instances
  • diverged: if the instance’s executed GTIDs have diverged from the executed GTIDs of the cluster instances
  • lost_transactions: if the instance has more executed GTIDs than the executed GTIDs of the cluster instances
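
For example, a minimal scripted use of this result could look like this (a sketch, assuming the password is embedded in the URI so no prompt is shown):

MySQL [mysql1+ ssl] JS> var state = cluster.checkInstanceState('clusteradmin:fred@mysql2')
MySQL [mysql1+ ssl] JS> // only add the instance when its GTID set is compatible
MySQL [mysql1+ ssl] JS> if (state.state == 'ok') { cluster.addInstance('clusteradmin:fred@mysql2:3306') }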

So for now, we can see that there is no problem and that the instance can be added. Let’s do so using cluster.addInstance():

MySQL [mysql1+ ssl] JS> cluster.addInstance('clusteradmin@mysql2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'clusteradmin@mysql2:3306': ****
Adding instance to the cluster ...

Validating instance at mysql2:3306...

This instance reports its own address as mysql2

Instance configuration is suitable.
The instance 'clusteradmin@mysql2:3306' was successfully added to the cluster

You may have noticed that this time the port is required ! We can verify the cluster using status() again:

MySQL [mysql1+ ssl] JS> cluster.status()
{
    "clusterName": "lefredCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "mysql1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "mysql2:3306": {
                "address": "mysql2:3306", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}
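
Note that the instance definition doesn’t have to be a URI string; a connection dictionary works as well. As a sketch, mysql2 could equally have been added like this:

MySQL [mysql1+ ssl] JS> // same operation, with the connection data passed as a dictionary
MySQL [mysql1+ ssl] JS> cluster.addInstance({host: 'mysql2', port: 3306, user: 'clusteradmin', password: 'fred'})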

Wrong Initial State

I’ve added a transaction in mysql3 to show what happens when there is an error:

MySQL [mysql1+ ssl] JS> cluster.checkInstanceState('clusteradmin@mysql3')
Please provide the password for 'clusteradmin@mysql3': ****
Analyzing the instance replication state...

The instance 'clusteradmin@mysql3' is invalid for the cluster.
The instance contains additional transactions in relation to the cluster.

{
    "reason": "diverged", 
    "state": "error"
}
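
If the extra transactions can safely be discarded (here they only come from my test), one way to make such an instance joinable again is to erase its GTID history. This is only a sketch, and it is destructive: do this only when the data on mysql3 is disposable:

MySQL JS> \c mysql://clusteradmin@mysql3
MySQL [mysql3+ ssl] JS> // destructive: wipes the binary logs and the executed GTID set
MySQL [mysql3+ ssl] JS> session.runSql("RESET MASTER")

After this, checkInstanceState() should report the instance as new again.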

Router

The Router configuration is very easy: using the --bootstrap option, it configures itself:

[root@mysql1 mysqlrouter]# mysqlrouter --user mysqlrouter --bootstrap clusteradmin@mysql1 
Please enter MySQL password for clusteradmin: 

Bootstrapping system MySQL Router instance...
MySQL Router  has now been configured for the InnoDB cluster 'lefredCluster'.

The following connection information can be used to connect to the cluster.

Classic MySQL protocol connections to cluster 'lefredCluster':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
X protocol connections to cluster 'lefredCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470

Of course, don’t forget to start MySQL Router:

# systemctl start mysqlrouter
# systemctl status mysqlrouter
● mysqlrouter.service - MySQL Router
   Loaded: loaded (/usr/lib/systemd/system/mysqlrouter.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2018-06-27 09:57:42 UTC; 1s ago
 Main PID: 14955 (main)
   CGroup: /system.slice/mysqlrouter.service
           └─14955 /usr/bin/mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

Jun 27 09:57:42 mysql1 systemd[1]: Started MySQL Router.
Jun 27 09:57:42 mysql1 systemd[1]: Starting MySQL Router...
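
We can quickly verify that the Router routes as expected by connecting through it with the Shell. A sketch, run from mysql1 where the Router was bootstrapped (6446 being the classic read/write port):

MySQL JS> \c mysql://clusteradmin@mysql1:6446
MySQL [mysql1:6446+ ssl] JS> // should return the hostname of the current Primary, mysql1
MySQL [mysql1:6446+ ssl] JS> session.runSql("SELECT @@hostname").fetchOne()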

Don’t forget to use the latest MySQL Router 8.x, even with MySQL 5.7 !



9 Comments

  1. Thank you for the detailed document. Is there a way I can use the Python AdminAPI directly from Python programs, rather than using MySQL Shell?

  2. I’m not quite sure that the privileges for the clusteradmin user are only SELECT ones, as you mentioned in the post.

    When I create the clusteradmin user using the dba.configureInstance() function, it automatically grants the following privileges to the clusteradmin user:

    GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT, INSERT, UPDATE, DELETE ON `mysql`.* TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `sys`.* TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT ALL PRIVILEGES ON `mysql_innodb_cluster_metadata`.* TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_applier_configuration` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_applier_status_by_coordinator` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_applier_status_by_worker` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_applier_status` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_connection_configuration` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_connection_status` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_group_member_stats` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`replication_group_members` TO `clusteradmin`@`%` WITH GRANT OPTION
    GRANT SELECT ON `performance_schema`.`threads` TO `clusteradmin`@`%` WITH GRANT OPTION

    • Hi Maxime,
      You are right; if you check the documentation I’m pointing to, it’s clearly stated that “the user needs to be assigned full read and write privileges on the InnoDB cluster metadata tables in addition to full MySQL administrator privileges (SUPER, GRANT OPTION, CREATE, DROP and so on) “.
      In summary, the privileges you are listing are indeed those needed.
      Cheers.

  3. Useful article. But I am getting the following error, can you help me sort it out?
    When I try to create a cluster on the master, I get the following error. I am using MySQL 5.7.25
    on Oracle Linux 6.5.

    dba.createCluster("myCluster")
    Dba.createCluster: Dba.createCluster: An open session is required to perform this operation. (RuntimeError)

  4. Regarding the read-only permissions, I also had to grant the following to the user before I was able to run cluster.status():

    GRANT SELECT ON mysql.slave_master_info TO your_user@'%';

