From MySQL Group Replication to MySQL InnoDB Cluster

I wanted to be brave and installed MySQL Group Replication manually… it was painful!

Then I realized that managing those servers, and especially dealing with MySQL Router, was even more painful!

What are my options now? Is there a solution, or do I need to restart from scratch?

Asking the question is already answering it… and once again, MySQL Shell to the rescue.

MySQL Group Replication

I configured everything manually. I also loaded the group_replication and clone plugins, and finally, after having bootstrapped my Group, here is what I have:

mysql> select member_host, member_port port, member_state state, 
       member_role role, member_version version 
       from performance_schema.replication_group_members;
+-------------+------+--------+-----------+---------+
| member_host | port | state  | role      | version |
+-------------+------+--------+-----------+---------+
| mysql2      | 3306 | ONLINE | SECONDARY |  8.0.21 |
| mysql3      | 3306 | ONLINE | SECONDARY |  8.0.21 |
| mysql1      | 3306 | ONLINE | PRIMARY   |  8.0.21 |
+-------------+------+--------+-----------+---------+
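For reference, the manual setup mentioned above involved loading the plugins by hand and bootstrapping the group on the first member. A minimal sketch (the .so file names assume a Linux build, and all the group_replication_* settings in my.cnf are omitted here):

-- load the Group Replication and clone plugins (Linux shared library names)
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
INSTALL PLUGIN clone SONAME 'mysql_clone.so';

-- on the first member only: bootstrap the group
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;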

Let’s install MySQL Shell:

[root@mysql1 ~]# dnf install mysql-shell -y

MySQL InnoDB Cluster

Now we can start MySQL Shell and create our InnoDB Cluster by importing our existing Group!

If we don’t have a dedicated user to manage the cluster, and root is only allowed to connect from localhost (which is always better), we need to create a user with the required privileges. This must be done on the Primary member:

CREATE USER 'clusteradmin' IDENTIFIED BY 'MyPassw0rd';

GRANT RELOAD, SHUTDOWN, PROCESS, FILE, SELECT, SUPER, REPLICATION SLAVE, 
      REPLICATION CLIENT, REPLICATION_APPLIER, CREATE USER, SYSTEM_VARIABLES_ADMIN, 
      PERSIST_RO_VARIABLES_ADMIN, BACKUP_ADMIN, CLONE_ADMIN, EXECUTE 
      ON *.* TO clusteradmin@'%' WITH GRANT OPTION;

GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, 
      CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES,
      SHOW VIEW, TRIGGER, UPDATE 
      ON mysql_innodb_cluster_metadata.* TO clusteradmin@'%';

GRANT INSERT, UPDATE, DELETE ON mysql.* TO clusteradmin@'%';
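As a side note, MySQL Shell’s AdminAPI can create such an account for you: dba.configureInstance() accepts the clusterAdmin and clusterAdminPassword options. A quick sketch in JS mode (the account name and password are just examples):

// create the cluster admin account and check the instance settings
dba.configureInstance('root@mysql1:3306',
      {clusterAdmin: "'clusteradmin'@'%'", clusterAdminPassword: 'MyPassw0rd'})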

Now we can connect to our Primary member with the new credentials:
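For example, using MySQL Shell from one of the nodes:

[root@mysql1 ~]# mysqlsh clusteradmin@mysql1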

As you can see, I connected using the name (or IP) mysql1, which will be used throughout the cluster, and not localhost.

It’s time to migrate our Group to an InnoDB Cluster:

The command is dba.createCluster('name of cluster', {adoptFromGR: true}).

The content of this option in the inline help (\h dba.createCluster) is:

adoptFromGR allows creating an InnoDB cluster from an existing unmanaged
Group Replication setup, enabling use of MySQL Router and the shell
AdminAPI for managing it.
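In practice, from a Shell session connected to the Primary, the adoption looks like this (the cluster name, myCluster, is just an example):

// adopt the existing Group Replication setup into an InnoDB Cluster
var cluster = dba.createCluster('myCluster', {adoptFromGR: true})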

Now we have a fully working cluster, ready to be used with MySQL Router and managed by MySQL Shell.
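Its health can be checked at any time from the Shell, for example (output omitted here):

// retrieve the cluster object from the metadata and display its state
var cluster = dba.getCluster()
cluster.status()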

MySQL Router

And to bootstrap MySQL Router:

mysqlrouter --bootstrap clusteradmin@mysql1 -u mysqlrouter --conf-use-gr-notifications
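The bootstrap registers this Router instance in the cluster’s metadata and writes its configuration; by default the classic protocol is exposed on port 6446 for read-write and 6447 for read-only traffic. Assuming the Router was installed from the RPM package, the service can then be started with:

[root@mysql1 ~]# systemctl start mysqlrouter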


And now that we are fully using MySQL InnoDB Cluster 8.0.21, we can also get the information related to the Routers:
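In the Shell this is done with cluster.listRouters(), for example (output omitted here):

// 'cluster' is the object returned by dba.getCluster() / dba.createCluster()
cluster.listRouters()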

Conclusion

You don’t have any more excuses not to use MySQL InnoDB Cluster, and I highly recommend that everybody using native Group Replication for a single-cluster topology migrate to MySQL InnoDB Cluster. This will make your life much easier!


5 Comments

  1. Hello,

    Thanks for your posts on mysql innodb cluster.

    I’m testing the MySQL InnoDB Cluster (8.0.21) solution with docker-compose, with a persistent volume for the data.
    The cluster is automatically and successfully created by a script (using the AdminAPI / a JS script).

    When I stop the whole cluster by using the command docker-compose stop, the cluster configuration is not loaded during the next startup.

    I think it is normal because, from MySQL’s point of view, it is seen as an outage.

    I use the rebootClusterFromCompleteOutage() function to restore it.

    My question is:
    Is there a specific configuration to reload the cluster configuration automatically during startup?

    Thanks for your feedback.
    Laurent

    • Hi Laurent,

      No, we don’t have such an option, and this is on purpose. For us, data integrity is the most important.

      As a DBA, if my whole system goes down in production, I much prefer to see what happened and decide manually what to do.

      Let’s imagine this scenario:

      – all group members (node1, node2 and node3) go down
      – node1 was the Primary and got the very last write operation
      – then you start again your system (or docker compose)
      – let’s imagine that node1 for any reason cannot start (or starts very very slowly)
      – if we did it the way you want, node2 and node3 would create a cluster…
      – … but what about the missing write that was on node1? And what if node1 comes back?

      I hope you see the point here.

      Now, if you really want to, and you know you will never change the Primary, etc. (which in practice is almost impossible), you could configure that Primary node with:

      group_replication_bootstrap_group=ON 

      ONLY ON ONE!

      and all 3 nodes with:

      group_replication_start_on_boot=ON 

      Cheers,

  2. Hi Lefred,

    It seems MySQL Router is available under both enterprise product and commercial product. Is there any article which highlights the enterprise features of MySQL Router? Is InnoDB Cluster totally an enterprise product?

