Point-in-Time Recovery in OCI MDS with Object Storage – part 1

To set up point-in-time recovery for MDS using Object Storage, these are the prerequisites:

  • a running MDS instance
  • a backup plan (default)
  • a compute instance
  • an Object Storage Bucket

In this article, I won’t focus on how to create an MDS instance or a compute instance, enable backups, or create an Object Storage Bucket: this is easy and there is already a lot of literature about it (you can also check several of my previous articles).

This is part 1 of the Point-in-Time Recovery in OCI MDS series. It’s only about streaming the binary logs to Object Storage. In part 2, we will see how to use them to perform PITR.

Saving the binary logs

In fact, most of the work is setting up the compute instance to store the binary logs coming from MDS (a single compute instance could stream binlogs from multiple MDS sources).

On the compute instance we need some extra packages:

  • mysql-client (we need mysqlbinlog and mysql_config_editor)
  • mysql-shell (because I like to use it)
  • s3fs-fuse

The first two packages are available in the MySQL Repositories and the last one requires EPEL on RPM-based distributions.
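On an Oracle Linux 8 compute instance, the installation could look like this (a sketch; the repository and package names are assumptions that depend on your distribution):

```shell
# Assumption: Oracle Linux 8 package names; the MySQL community repository
# must already be configured for the mysql-shell and client packages
sudo dnf install -y oracle-epel-release-el8
sudo dnf install -y mysql-shell mysql-community-client s3fs-fuse
```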

We need to create an ACCESS_KEY_ID and a SECRET_ACCESS_KEY to access our Object Storage Bucket. These are Customer Secret Keys, generated for our user in the OCI Console.

We copy these keys on a single line, separated by a colon (:), into a file, for example ~/.passwd-ocifs.
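For example (the key values below are placeholders, not real credentials; use the ones generated for your user):

```shell
# Placeholder values -- replace with the keys generated in the OCI Console
ACCESS_KEY_ID="0123456789abcdef0123"
SECRET_ACCESS_KEY="AbCdEfGhIjKlMnOpQrStUvWxYz0123456789AbCd"

# s3fs expects "ACCESS_KEY_ID:SECRET_ACCESS_KEY" on a single line
echo "${ACCESS_KEY_ID}:${SECRET_ACCESS_KEY}" > ~/.passwd-ocifs
chmod 600 ~/.passwd-ocifs
```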

And we mount it like this:

# chmod 600 ~/.passwd-ocifs
# mkdir /mnt/oci
# s3fs lefred-bucket /mnt/oci -o endpoint=us-ashburn-1 \
> -o passwd_file=~/.passwd-ocifs \
> -o url=https://ixxxxxxxxxx.compat.objectstorage.us-ashburn-1.oraclecloud.com/ \
> -o nomultipart -o use_path_request_style

The mount point to Object Storage Bucket is ready and we will be able to stream the binary logs to /mnt/oci.
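If the mount should survive a reboot, an /etc/fstab entry can also be used. This is a sketch assuming the same bucket, mount point, credentials file and endpoint as above:

```
# /etc/fstab -- s3fs mount for the Object Storage Bucket
lefred-bucket /mnt/oci fuse.s3fs _netdev,passwd_file=/root/.passwd-ocifs,url=https://ixxxxxxxxxx.compat.objectstorage.us-ashburn-1.oraclecloud.com/,endpoint=us-ashburn-1,nomultipart,use_path_request_style 0 0
```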

A dedicated MySQL User

We will now create a dedicated MySQL User that will fetch the binary logs from the MySQL Instance:

We connect to our MDS instance (I use MySQL Shell) and we do:

SQL> CREATE USER 'binlog_to_object_storage'@'10.0.0.%' 
     IDENTIFIED BY 'C0mpl1c4t3d!Passw0rd' REQUIRE SSL;
SQL> GRANT REPLICATION SLAVE ON *.* 
     TO 'binlog_to_object_storage'@'10.0.0.%';

I used an IP range for the host part of the account, but you could also have used the private IP of the compute instance.

Don’t mess up with credentials

To connect to MDS with the new credentials, we could specify the user, host and password on the command line… but that is not always safe, and it can behave strangely when the password contains special characters and we want to script everything (which is the case here, as I use an exclamation mark, !, in the password).

The best solution MySQL provides is mysql_config_editor, which works with all MySQL clients (mysqlbinlog too, of course; we will see in the next chapter why we need it).

So now we need to create a login path using mysql_config_editor. As my MDS instance is called my-mds, I will use the same name, but you can use whatever suits you best:

[root@mysql-shell ~]# mysql_config_editor set --login-path=my-mds \
 --host=10.0.1.128 --user=binlog_to_object_storage --password
Enter password: 

10.0.1.128 is the IP of my MDS instance.

We can test the connection using the old mysql client for example:

[root@mysql-shell ~]# mysql --login-path=my-mds 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 87
Server version: 8.0.26-u1-cloud MySQL Enterprise - Cloud

This is perfect, we can move forward.

mysqlbinlog

As you may know, mysqlbinlog can read the binary logs from a live server and simply store them to disk, using the options --raw and --read-from-remote-server. This is what we are going to use.
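To illustrate the idea, here is a simplified sketch (this is not the actual wrapper script linked below; the destination directory and the binlog.000001 starting file name are assumptions):

```shell
#!/bin/bash
# Simplified sketch: stream binary logs from the MDS instance to the
# Object Storage mount, using the login path and mount created earlier
DEST="/mnt/oci/my-mds"
mkdir -p "$DEST" && cd "$DEST" || exit 1

# Resume from the newest binlog already saved locally, if any;
# binlog.000001 is a hypothetical starting file name
LAST_SAVED=$(ls binlog.* 2>/dev/null | sort | tail -n 1)
START="${LAST_SAVED:-binlog.000001}"

# --raw stores the files byte-for-byte instead of decoding them,
# --stop-never keeps the connection open to stream new events
if command -v mysqlbinlog >/dev/null; then
    mysqlbinlog --login-path=my-mds --raw --read-from-remote-server \
                --stop-never --result-file=./ "$START"
else
    echo "mysqlbinlog not installed; nothing to stream"
fi
```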

I’ve created a wrapper slightly inspired by the one from my old friend Tamas.

We need to save it on the compute instance in /root/bin/binlog_to_object_storage.sh

Note: the opc user is not allowed to run fusermount, so we need to use root.

The file is here: binlog_to_object_storage.sh (for MDS HA, use this script that includes the hostname in the saved binary log’s file name: binlog_to_object_storage.sh)

We also need a configuration file. As I wrote before, the same server could be used to stream the binary logs of multiple different MDS instances.

Let’s create the configuration file in /root/conf and call it my-mds.conf:

MYSQL_LOGIN_PATH=my-mds
OBJECT_STORAGE_MOUNT=/mnt/oci
BINLOGS_DIR=my-mds

We only need these three variables, which define the credentials to use and where to store the data.

I also created a systemd service unit to start the process; the same unit can be used for multiple streaming processes.

So we create a file called /etc/systemd/system/binlog_streaming@.service:

[Unit]
Description=Streaming MDS binary logs to Object Storage using %i
After=network.target

[Service]
Type=simple
User=root
Restart=on-failure
ExecStart=/root/bin/binlog_to_object_storage.sh /root/conf/%i.conf

[Install]
WantedBy=multi-user.target

We just need to reload the systemd daemon and start our new service:

As you can see, we provide the configuration name (my-mds) to the service:

binlog_streaming@my-mds.service
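For example (the enable --now step is my assumption, so the service also starts again after a reboot; a plain start works too):

```shell
# Reload systemd so it picks up the new unit file
sudo systemctl daemon-reload

# Enable and start the streaming service for the my-mds configuration
sudo systemctl enable --now binlog_streaming@my-mds.service

# Verify it is running
sudo systemctl status binlog_streaming@my-mds.service
```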

And we can see in Object Storage that the binary logs are now stored there, even after they have been purged on the MySQL Database Service instance.

Conclusion

It’s very easy to stream the binary logs of a MySQL Database Service instance to Object Storage in OCI. MySQL provides all the needed tools.

In the second part of the series, we will see how we can use those binary logs to perform Point-in-Time Recovery.

Enjoy MySQL and MySQL Database Service!


As MySQL Community Manager, I am an employee of Oracle and the views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

You can find articles I wrote on Oracle’s blog.