I’m just back from a trip to Barcelona where I presented MySQL 8.0 InnoDB Cluster (at dataops and the Barcelona MySQL Meetup). The majority of the feedback was great, but I also got some comments on the demos I showed. The first one was:
This is of course a joke (though maybe it’s true for some); people found it very easy and they liked it.
But then, the second one was that what I showed wasn’t easy to find: some people who had already played with the solution didn’t succeed in creating a cluster so easily… not because they had errors or encountered bugs, but because they simply didn’t know how to do it.
The goal of this blog post is to illustrate how to create a MySQL 8.0 InnoDB Cluster very quickly using only the new MySQL Shell!
Initial environment
We have 3 MySQL 8.0.11 instances running: mysql1, mysql2 and mysql3.
They can all communicate with each other, and each of them has a user with all privileges: clusteradmin.
Nothing else has been changed and no configuration file has been modified. The only important thing is that if you have created the admin user on the instances individually, you should run RESET MASTER on all of them.
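If you still need to create that user, a minimal example looks like this (the host pattern and the password are placeholders, adapt them to your environment):

```sql
-- On each of the 3 instances:
CREATE USER clusteradmin@'%' IDENTIFIED BY 'MyStr0ngP4ss';
GRANT ALL PRIVILEGES ON *.* TO clusteradmin@'%' WITH GRANT OPTION;
-- Then clear the GTID history that these local statements generated:
RESET MASTER;
```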
Quick Cluster Creation
Let’s connect to one instance (it doesn’t matter which one) and create the cluster. I will list all the commands needed to create the cluster very quickly; in a later post I will explain them in more detail:
Connection using the Shell
MySQL JS> \c clusteradmin@mysql1
Creating a session to 'clusteradmin@mysql1'
Enter password: ****
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 10 (X protocol)
Server version: 8.0.11 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
MySQL [mysql1+ ssl] JS>
MySQL Configuration
MySQL [mysql1+ ssl] JS> dba.configureInstance('clusteradmin@mysql1',{'restart': true})
You will be prompted to validate the changes, and if a restart of MySQL is required, it will happen (with a default MySQL installation, 3 settings will be changed and a restart will be required).
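For reference, on a stock 8.0.11 installation the changes boil down to enabling GTIDs and setting a unique server id; roughly equivalent to the following (illustrative only — the Shell persists these for you, and the server_id value is an example):

```sql
-- The kind of settings dba.configureInstance() persists on a default install,
-- so they take effect after the restart:
SET PERSIST enforce_gtid_consistency = ON;
SET PERSIST gtid_mode = ON;   -- the reason a restart is needed
SET PERSIST server_id = 1;
```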
And you need to configure all the other nodes that you want to be part of the cluster:
MySQL [mysql1+ ssl] JS> dba.configureInstance('clusteradmin@mysql2',{'restart': true})
MySQL [mysql1+ ssl] JS> dba.configureInstance('clusteradmin@mysql3',{'restart': true})
Cluster Creation
When the servers are restarted, just connect again to one of them using the Shell and create the cluster:
MySQL [mysql1+ ssl] JS> \c clusteradmin@mysql1
MySQL [mysql1+ ssl] JS> cluster=dba.createCluster('MyCluster')
Now you already have a cluster, but with only one member. You can verify this using the status() method of the cluster object we created:
MySQL [mysql1+ ssl] JS> cluster.status()
{
    "clusterName": "MyCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}
Add the other members
Now it’s time to add the second and third members to the new cluster:
MySQL [mysql1+ ssl] JS> cluster.addInstance('clusteradmin@mysql2:3306')
MySQL [mysql1+ ssl] JS> cluster.addInstance('clusteradmin@mysql3:3306')
Please note that here the port of the standard MySQL protocol (3306) is required.
Check the cluster
We can verify our cluster again using the status() method:
MySQL [mysql1+ ssl] JS> cluster.status()
{
    "clusterName": "MyCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql1:3306": {
                "address": "mysql1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    },
    "groupInformationSourceMember": "mysql://clusteradmin@mysql1:3306"
}
And voilà! We have a 3-node MySQL 8.0 InnoDB Cluster!
Minimal Commands Summary
- connect to mysql1:
\c clusteradmin@mysql1
- configure mysql1:
dba.configureInstance('clusteradmin@mysql1',{'restart': true})
- configure mysql2:
dba.configureInstance('clusteradmin@mysql2',{'restart': true})
- configure mysql3:
dba.configureInstance('clusteradmin@mysql3',{'restart': true})
- connect to mysql1 (after restart):
\c clusteradmin@mysql1
- create cluster:
dba.createCluster('MyCluster')
- add mysql2 to the cluster:
cluster.addInstance('clusteradmin@mysql2:3306')
- add mysql3 to the cluster:
cluster.addInstance('clusteradmin@mysql3:3306')
[…] As promised, here is a post more detailed on how to create a MySQL InnoDB Cluster using MySQL Shell and the AdminAPI. […]
[…] MySQL 8.0 InnoDB Cluster – the quick hands-on manual – this is truly the quick start guide. There has to be more of this beginner styled cookbook guides on the MySQL site too! (The user manual is now really, really, large). […]
Hi: I agree, you made it extremely simple. I have installed MySQL 5.7 on three servers. I have a stupid question: do I need to install MySQL Shell on all 3 DB servers?
Hi Ravi,
Thanks for the comment.
So usually it’s not needed to have the Shell installed on each server; you could have it on your laptop, for example. However, if you want to use 5.7, as there is no possibility to use SET PERSIST, the Shell needs to modify the my.cnf file. Therefore, it needs to run locally. So you need to use the Shell on each node to run dba.configureLocalInstance().
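For 5.7, that per-node call would look something like this, run from a Shell session on the node itself (the option file path is just an example):

```js
// 5.7: run locally on each node so the Shell can edit the option file
MySQL JS> dba.configureLocalInstance('clusteradmin@localhost:3306',
                                     {mycnfPath: '/etc/my.cnf'})
```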
Cheers,
lefred.
Hi, one question,
I have mysql 8.0 with innoDB Cluster.
I want to decide the priority of the nodes, and I’m looking for the command to set manually which one is the primary node,
but I haven’t found anything about this.
Do you have any idea?
Hi lefred,
I just wanted to know:
What is the best way to update the MySQL InnoDB Cluster from 8.0.11 to 8.0.12? (My environment contains 3 instances.)
Thank u in advance
Coro
Hello lefred,
I have been experimenting with Innodb Cluster for the last few days … I am enthusiastic about the functionality!
But there is one thing that gives me a headache! Maybe you can help me?
Test environment:
InnoDB cluster with 3 servers
single-master setup
Scenario:
– very high load on the master
– both slaves cannot apply the relay logs fast enough and fall behind with a large gap
– the master fails and an automatic failover to one of the slaves occurs
The application now accesses the new master, but it is not yet up to date because not all relay logs have been processed yet!
This means the application reads an outdated state of the data! This should not happen under any circumstances!
Is there a way to prevent the automatic failover to a slave that is not yet synchronous with the master?
No matter how long it takes, the failover should only happen once the slave is in sync!
thanks
Sebastian
Hi Sebastian,
Currently this is not possible inside Group Replication, but you could monitor the apply queue before you fail over the app connections. ProxySQL has such functionality for GR. However, we are working on a solution inside MySQL too; more to come in the future. Now the right question is: WHY aren’t they able to apply faster? Is it the same hardware? Could you share your SHOW GLOBAL VARIABLES output? What kind of workload is it? Very large transactions?
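To give an idea, the apply queue can be checked in performance_schema; for example (column availability depends on the exact 8.0 version):

```sql
-- Transactions waiting in each member's queues: a high
-- COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE means the member is lagging.
SELECT MEMBER_ID,
       COUNT_TRANSACTIONS_IN_QUEUE,
       COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE
  FROM performance_schema.replication_group_member_stats;
```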
Thank you.
Hi Lefred,
thanks for the fast reply.
Actually the scenario is only made up … but it could occur for several reasons.
We are still in the planning phase of the project and trying to find the best solution for our needs.
I will have a look at ProxySQL. Do you have a sample config for ProxySQL for monitoring the apply queue and only fail-over if everything is synced?
thanks
Sebastian
Sebastian,
It’s there by default in ProxySQL; you can just change the limit (https://lefred.be/content/mysql-group-replication-native-support-in-proxysql/).
However, I think you should really investigate the WHY, because with parallel appliers and the binlog_transaction_dependency_tracking possibilities (https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html), replication should be faster than ever. So please investigate why you have so much in the apply queue.
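As an illustration only (the values are examples to tune for your hardware, not recommendations), the applier-related settings I’m referring to look like this:

```sql
-- Example parallel-applier tuning on the secondaries:
SET PERSIST binlog_transaction_dependency_tracking = 'WRITESET';
SET PERSIST transaction_write_set_extraction = 'XXHASH64';
SET PERSIST slave_parallel_type = 'LOGICAL_CLOCK';
SET PERSIST slave_parallel_workers = 8;   -- example value
SET PERSIST slave_preserve_commit_order = ON;
```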