In the previous post, we set up a Drupal website with MySQL as the backend, running on a different compute instance in a different subnet.
Today we will increase the High Availability of our Database by transforming this single MySQL instance into a MySQL InnoDB Cluster!
The architecture will be the following:
We will use 2 different Availability Domains (ADs); of course, we could have used 3, it’s up to you.
We will install MySQL Router on the Application Server as advised, and we will modify Drupal’s configuration to connect locally to the R/W port of MySQL Router.
You will see that this is very easy to achieve, the most complicated part is again dealing with local security on the instances (local firewall and SELinux).
The Process
To achieve our objective of setting up a fully HA MySQL database backend for our Drupal site, we will follow these steps:
- Configure myMySQL01 to be part of an InnoDB Cluster
- Create a cluster called MyClusterOCI
- Install MySQL Router on myDrupal, the application server
- Bootstrap and start MySQL Router
- Create 2 new Compute Instances
- Install MySQL on both new instances
- Configure MySQL to be part of a Cluster (and configure firewall and SELinux)
- Add the new instances to the MySQL InnoDB Cluster
Configuration of myMySQL01
myMySQL01 is the current database server used by the Drupal website.
To configure it, we will have to open the local firewall, adjust SELinux and add some rules to OCI’s Security List.
So we first connect to myMySQL01 over SSH (we use myDrupal as a jump host) and then we use MySQL Shell to configure the instance:
> dba.configureInstance('root@localhost')
You can see the output of all the commands in the video at the end of the article.
This command will configure the MySQL instance, but we will also use it to create a dedicated admin user to manage our MySQL InnoDB Cluster: clusteradmin (this is option 2).
This dedicated account must be the same on all instances and use the same password.
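If you prefer to avoid the interactive prompts, dba.configureInstance() also accepts the admin account as options. A minimal sketch, where the password is only a placeholder that you must replace by your own (and reuse identically on every member):

> dba.configureInstance('root@localhost', {
      clusterAdmin: 'clusteradmin',
      clusterAdminPassword: 'MyC1usterPassw0rd!'  // placeholder, same strong password on all members
  })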
The local firewall rules and SELinux commands that will be required on all instances are the following:
firewall-cmd --zone=public --permanent --add-port=3306/tcp
firewall-cmd --zone=public --permanent --add-port=33060/tcp
firewall-cmd --zone=public --permanent --add-port=33061/tcp
firewall-cmd --reload
semanage port -a -t mysqld_port_t -p tcp 33060-33061
We also add ports 33060 and 33061 for 10.0.0.0/16 to the Default Security List of myVCN.
Cluster Creation
Now, we can create the MySQL InnoDB Cluster using this single instance. Our cluster won’t be tolerant to any failure yet, but it will be ready to be managed like a cluster.
So in MySQL Shell, we connect to our server using its name (mymysql01) with the newly created user, clusteradmin:
$ mysqlsh clusteradmin@mymysql01
> cluster=dba.createCluster('MyClusterOCI')
We can verify our cluster using cluster.status().
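With a single member, the output will look roughly like the abridged sketch below (the exact fields and wording depend on your Shell version); note the statusText telling us the cluster cannot tolerate any failure yet:

> cluster.status()
{
    "clusterName": "MyClusterOCI",
    "defaultReplicaSet": {
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mymysql01:3306": {
                "mode": "R/W",
                "status": "ONLINE"
            }
        }
    }
}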
MySQL Router on the Application Server
It’s time to connect Drupal to our cluster through the router. The router will be installed on the web server (mydrupal compute instance).
To do so, we first install the MySQL repository and then MySQL Router with the following commands:
[opc@mydrupal ~]$ sudo yum install -y \
    https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
[opc@mydrupal ~]$ sudo yum install -y mysql-router
Bootstrapping the Router
We don’t need to configure MySQL Router manually; we will bootstrap it using the IP of the only member in our cluster (for the moment). We will also use Group Replication push notifications (this requires the X Protocol, which is why we also opened port 33060 in our firewall and OCI Security List):
[opc@mydrupal ~]$ sudo mysqlrouter --bootstrap \
    clusteradmin@10.0.1.2:3306 --user \
    mysqlrouter --conf-use-gr-notifications
Now that the router is configured, we can start it and enable it so it starts automatically when the instance boots:
[opc@mydrupal ~]$ sudo systemctl start mysqlrouter
[opc@mydrupal ~]$ sudo systemctl enable mysqlrouter
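To quickly check that the router is running and listening, you can use standard systemd and socket tools (this is just an optional sanity check):

[opc@mydrupal ~]$ sudo systemctl status mysqlrouter
[opc@mydrupal ~]$ sudo ss -ltnp | grep mysqlrouter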
By default, the router will listen on 4 ports:
- 6446: Read/Write for MySQL Classic Protocol (the one we will use in Drupal)
- 6447: Read/Only for MySQL Classic Protocol
- 64460: Read/Write for MySQL X Protocol
- 64461: Read/Only for MySQL X Protocol
Once again, we need to tell our friend SELinux to allow connections to those ports:
[opc@mydrupal ~]$ sudo semanage port -a -t mysqld_port_t \
    -p tcp 6446-6447
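If you want to verify the routing before touching Drupal, you can connect to the R/W port yourself. This is optional and assumes you first install a MySQL client on the web server:

[opc@mydrupal ~]$ sudo yum install -y mysql-community-client
[opc@mydrupal ~]$ mysql -h 127.0.0.1 -P 6446 -u clusteradmin -p -e "SELECT @@hostname, @@port"

It should return mymysql01 and 3306, the current primary behind the router.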
Changing Configuration in Drupal
We need to edit the file sites/default/settings.php located in the web document root (/var/www/html by default). We need to modify the host and port values to 127.0.0.1 and 6446, something like this:
$databases['default']['default'] = array (
  'database' => 'drupal',
  'username' => 'drupal',
  'password' => 'MyPassw0rd!',
  'prefix' => '',
  'host' => '127.0.0.1',
  'port' => '6446',
  'namespace' => 'Drupal\Core\Database\Driver\mysql',
  'driver' => 'mysql',
);
You can refresh the website: Drupal now connects via MySQL Router \o/
Creation of the New Instances
We will now create the two new instances that will be used as extra nodes in our MySQL InnoDB Cluster to achieve HA. You can create them in different Availability Domains. You also need to use the Private Subnet again.
We need to create 2 Compute Instances: myMySQL02 and myMySQL03.
For convenience, we will also add all our database servers to /etc/hosts on the Drupal instance, as we use it as a jump host too:
[root@mydrupal ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain ...
::1         localhost localhost.localdomain ...
10.0.1.2    mymysql01
10.0.1.3    mymysql02
10.0.1.4    mymysql03
MySQL Server Installation
Now, we will install MySQL Server on both new instances. We need to connect to them over SSH via mydrupal; don’t forget to forward the agent with -A in the ssh command:
$ ssh -A -i ~/.ssh/id_rsa_oci opc@150.136.231.34
[opc@mydrupal ~]$ ssh mymysql02
Last login: Fri Apr 3 14:09:29 2020 from mydrupal.subnet.vcn.oraclevcn.com
[opc@mymysql02 ~]$
150.136.231.34 is the public IP of my Drupal compute instance. Of course, you need to change it to the IP of your own instance 😉
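As an alternative to agent forwarding, recent OpenSSH versions can jump through mydrupal in a single command with -J; a small sketch, assuming the same key pair and the private IP of mymysql02:

$ ssh -i ~/.ssh/id_rsa_oci -J opc@150.136.231.34 opc@10.0.1.3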
On both servers, you will have to install the MySQL repository, MySQL Server and MySQL Shell.
This is a summary of the commands (but you can find more details in the previous post):
sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sudo yum install -y mysql-community-server
sudo yum install -y mysql-shell
You need to start MySQL and change the default password (see this post).
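As a quick reminder, it goes roughly like this on each new instance (the password below is just a placeholder):

sudo systemctl start mysqld
sudo grep 'temporary password' /var/log/mysqld.log
mysql -u root -p
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'MyNewPassw0rd!';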
Members Configuration
When done, you will have to configure both instances and create the exact same user we did earlier on mymysql01. We called it clusteradmin.
We also need to configure the local firewall and SELinux again, the same way we did on mymysql01:
firewall-cmd --zone=public --permanent --add-port=3306/tcp
firewall-cmd --zone=public --permanent --add-port=33060/tcp
firewall-cmd --zone=public --permanent --add-port=33061/tcp
firewall-cmd --reload
semanage port -a -t mysqld_port_t -p tcp 33060-33061
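On each new member, the configuration step itself is the same MySQL Shell call we used on mymysql01; a short sketch, again with a placeholder password that must match the one chosen earlier:

[opc@mymysql02 ~]$ mysqlsh root@localhost
> dba.configureInstance('root@localhost', {
      clusterAdmin: 'clusteradmin',
      clusterAdminPassword: 'MyC1usterPassw0rd!'  // placeholder, identical on all members
  })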
Adding Instances in the Cluster
The last step is to add the new instances to the cluster.
To perform this task, we go back to mymysql01 in MySQL Shell and make sure we still have the cluster object, with cluster.status() for example. If not, we need to retrieve it using dba.getCluster().
We need to be connected to mymysql01 in the Shell. If you are not, you can do \c clusteradmin@mymysql01
Once you have the cluster object, you can just add the new instances one by one like this:
> cluster.addInstance('clusteradmin@mymysql02')
And you do the same for mymysql03.
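Depending on your Shell version, addInstance() may ask which provisioning method to use; you can also pass it explicitly. A small sketch, assuming you want the CLONE plugin (available since MySQL 8.0.17):

> cluster.addInstance('clusteradmin@mymysql03', {recoveryMethod: 'clone'})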
Finally, you can check the status of your cluster:
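With the three members added, the status should now report every instance as ONLINE and the cluster as tolerant to one failure, roughly like this abridged sketch (exact wording varies between Shell versions):

> cluster.status()
{
    "clusterName": "MyClusterOCI",
    "defaultReplicaSet": {
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        ...
    }
}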
In just a bit more than 20 minutes, we were able to set up an HA backend for our database.
You can also watch the video below, which illustrates all the different steps:
Enjoy MySQL and OCI ! #MySQL8isGreat