Using MySQL Database Service for WordPress

Today we will see how to use MySQL Database Service (aka MDS) with WordPress.

To achieve this easy task, we will use the architecture we already deployed in a previous article.

We therefore have two compute instances on OCI: one running WordPress (Apache and PHP) and one running MySQL 8.0.

The Plan

This is how we will proceed to migrate to MDS with minimal maintenance time. We will:

  1. create an MDS instance
  2. verify that the database is ready to act as a replication source
  3. dump the MySQL instance running on OCI so it can be migrated to MDS
  4. load the dump in MDS
  5. create a user dedicated to replication
  6. create a replication channel on MDS (from OCI to MDS)
  7. modify the WordPress config to point to MDS

Create an MDS instance

The first step is of course to connect to OCI’s dashboard. We should see our two running compute instances:

Let’s now create the MySQL Database Service instance; you will find the service in the left column:

Now we just follow the creation wizard, which is very simple:

It’s very important to create an admin user (the name can be whatever you want), and don’t forget the password. We will put our service in the private subnet we created earlier (see also my previous post):

The last screen of the wizard is related to the automatic backups:

The MDS instance will be provisioned after a short time and you can see that in its detailed view:

All good for step 1, we can now proceed with the second one.

Verify the database is ready to act as a replication source

We now connect to our current MySQL instance running on OCI and verify that all settings required for replication with GTIDs are in place.

mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| OFF         |
+-------------+
1 row in set (0.00 sec)

By default, GTID mode is disabled and we need to enable it. Note that gtid_mode cannot jump directly from OFF to ON; it has to be changed one step at a time through the permissive modes. This is how to proceed without restarting the MySQL instance:

mysql> SET PERSIST server_id=1;
mysql> SET PERSIST enforce_gtid_consistency=true;
mysql> SET PERSIST gtid_mode=off_permissive;
mysql> SET PERSIST gtid_mode=on_permissive;
mysql> SET PERSIST gtid_mode=on;
mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| ON          |
+-------------+

All good now!
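
Since MySQL 8.0, binary logging (also required on a replication source) is enabled by default, but a quick sanity check never hurts:

mysql> select @@log_bin, @@binlog_format, @@enforce_gtid_consistency;

log_bin should report 1, binlog_format defaults to ROW, and after the commands above enforce_gtid_consistency should be ON.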

MySQL Shell Dump & Load

Instead of using mysqldump or mysqlpump, which are not efficient with large datasets and are certainly not optimized for the cloud, we will use the amazing new feature of MySQL Shell (since 8.0.21): Dump & Load.

Please check the MySQL Shell Dump & Load documentation for more details.

We connect via SSH to our MySQL instance in OCI and we can start the dump.

There are multiple ways to perform such an operation. As my WordPress dataset is not very large and is already in the cloud, I use the filesystem. If it were on premises with a large dataset, I would have used Object Storage.
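
For reference, the same dump utility we are about to use can also write straight to an Object Storage bucket. A sketch (the bucket name and namespace below are hypothetical; see the MySQL Shell manual for the exact options):

MySQL localhost:33060+ ssl JS > util.dumpInstance('wordpress_dump',
  {osBucketName: 'my-dump-bucket', osNamespace: 'mytenancynamespace',
   ocimds: true, compatibility: ["strip_restricted_grants"]})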

I start MySQL Shell and connect to my MySQL instance on OCI (the one WordPress is using for the moment). As I’m connected via ssh on the server, I use localhost to connect using Shell:

$ mysqlsh root@localhost

And now I just dump my instance. As I know this dump will be loaded into MDS, I add some options:

MySQL localhost:33060+ ssl JS > util.dumpInstance('file://dump', 
  {ocimds: true, compatibility: ["strip_restricted_grants"]})

This is the full output of the command:

Checking for compatibility with MySQL Database Service 8.0.21
NOTE: User root@localhost had restricted privileges (RELOAD, FILE, SUPER, 
      BINLOG_ADMIN, SET_USER_ID) removed
NOTE: Database wordpress had unsupported ENCRYPTION option commented out
Compatibility issues with MySQL Database Service 8.0.21 were found and repaired. 
Please review the changes made before loading them.
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Preparing data dump for table wordpress.wp_commentmeta
Writing DDL for schema wordpress
Writing DDL for table wordpress.wp_commentmeta
Data dump for table wordpress.wp_commentmeta will be chunked using column meta_id
Preparing data dump for table wordpress.wp_comments
Data dump for table wordpress.wp_comments will be chunked using column comment_ID
Preparing data dump for table wordpress.wp_links
Data dump for table wordpress.wp_links will be chunked using column link_id
Preparing data dump for table wordpress.wp_options
Data dump for table wordpress.wp_options will be chunked using column option_id
Preparing data dump for table wordpress.wp_postmeta
Data dump for table wordpress.wp_postmeta will be chunked using column meta_id
Preparing data dump for table wordpress.wp_posts
Data dump for table wordpress.wp_posts will be chunked using column ID
Preparing data dump for table wordpress.wp_term_relationships
Data dump for table wordpress.wp_term_relationships will be chunked using column object_id
Preparing data dump for table wordpress.wp_term_taxonomy
Data dump for table wordpress.wp_term_taxonomy will be chunked using column term_taxonomy_id
Preparing data dump for table wordpress.wp_termmeta
Data dump for table wordpress.wp_termmeta will be chunked using column meta_id
Preparing data dump for table wordpress.wp_terms
Data dump for table wordpress.wp_terms will be chunked using column term_id
Preparing data dump for table wordpress.wp_usermeta
Writing DDL for table wordpress.wp_comments
Writing DDL for table wordpress.wp_links
Writing DDL for table wordpress.wp_options
Data dump for table wordpress.wp_usermeta will be chunked using column umeta_id
Preparing data dump for table wordpress.wp_users
Data dump for table wordpress.wp_users will be chunked using column ID
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing DDL for table wordpress.wp_postmeta
Writing DDL for table wordpress.wp_posts
Writing DDL for table wordpress.wp_term_relationships
Writing DDL for table wordpress.wp_term_taxonomy
Writing DDL for table wordpress.wp_termmeta
Writing DDL for table wordpress.wp_terms
Writing DDL for table wordpress.wp_usermeta
Writing DDL for table wordpress.wp_users
Data dump for table wordpress.wp_commentmeta will be written to 1 file
Data dump for table wordpress.wp_comments will be written to 1 file
Data dump for table wordpress.wp_links will be written to 1 file
Data dump for table wordpress.wp_postmeta will be written to 1 file
Data dump for table wordpress.wp_options will be written to 1 file
Data dump for table wordpress.wp_term_taxonomy will be written to 1 file
Data dump for table wordpress.wp_term_relationships will be written to 1 file
Data dump for table wordpress.wp_terms will be written to 1 file
Data dump for table wordpress.wp_posts will be written to 1 file
Data dump for table wordpress.wp_termmeta will be written to 1 file
Data dump for table wordpress.wp_users will be written to 1 file
Data dump for table wordpress.wp_usermeta will be written to 1 file
1 thds dumping - 110% (2.22K rows / ~2.00K rows), 0.00 rows/s, 0.00 B/s uncompressed, 
                                                  0.00 B/s compressed
Duration: 00:00:00s
Schemas dumped: 1
Tables dumped: 12
Uncompressed data size: 1.49 MB
Compressed data size: 341.04 KB
Compression ratio: 4.4
Rows written: 2216
Bytes written: 341.04 KB
Average uncompressed throughput: 1.49 MB/s
Average compressed throughput: 341.04 KB/s

We can now easily load that dump into our MDS instance. From MySQL Shell, we connect to the instance using the IP shown in the detailed view or the name we gave it during setup:

MySQL localhost:33060+ ssl JS > \c admin@10.0.1.9
Creating a session to 'admin@10.0.1.9'
Please provide the password for 'admin@10.0.1.9': ***
Save password for 'admin@10.0.1.9'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion… Press ^C to stop.
Closing old connection…
Your MySQL connection id is 9 (X protocol)
Server version: 8.0.21-u1-cloud MySQL Enterprise - Cloud
No default schema selected; type \use to set one.

Perfect! We can start the load:

MySQL 10.0.1.9:33060+ ssl JS > util.loadDump('file://dump')
Loading DDL and Data from 'file://dump' using 4 threads.
Target is MySQL 8.0.21-u1-cloud. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects…
Executing common preamble SQL
Executing DDL script for schema wordpress
Executing DDL script for wordpress.wp_users
Executing DDL script for wordpress.wp_usermeta
Executing DDL script for wordpress.wp_links
Executing DDL script for wordpress.wp_options
Executing DDL script for wordpress.wp_termmeta
Executing DDL script for wordpress.wp_comments
Executing DDL script for wordpress.wp_commentmeta
Executing DDL script for wordpress.wp_postmeta
Executing DDL script for wordpress.wp_term_taxonomy
Executing DDL script for wordpress.wp_term_relationships
Executing DDL script for wordpress.wp_posts
Executing DDL script for wordpress.wp_terms
[Worker003] wordpress@wp_options@@0.tsv.zst: Records: 128 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] wordpress@wp_usermeta@@0.tsv.zst: Records: 18 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] wordpress@wp_term_relationships@@0.tsv.zst: Records: 177 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] wordpress@wp_users@@0.tsv.zst: Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] wordpress@wp_terms@@0.tsv.zst: Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] wordpress@wp_term_taxonomy@@0.tsv.zst: Records: 1 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] wordpress@wp_comments@@0.tsv.zst: Records: 201 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] wordpress@wp_postmeta@@0.tsv.zst: Records: 1266 Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] wordpress@wp_posts@@0.tsv.zst: Records: 423 Deleted: 0 Skipped: 0 Warnings: 0
Executing common postamble SQL
9 chunks (2.22K rows, 1.49 MB) for 9 tables in 1 schemas were loaded in 1 sec (avg throughput 1.49 MB/s)
0 warnings were reported during the load.

Et voilà! All is imported.

In MDS with MySQL Shell 8.0.21, loading users can be problematic. You can try with {loadUsers: true} but it will fail for the user root. In 8.0.22, you will have the possibility to exclude or include some users.
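
For example, with MySQL Shell 8.0.22 the accounts could be loaded afterwards like this (a sketch using the documented loadUsers and excludeUsers options; the DDL and data are skipped as they are already loaded, and the account list should be adapted to your environment):

MySQL 10.0.1.9:33060+ ssl JS > util.loadDump('file://dump',
  {loadDdl: false, loadData: false, loadUsers: true,
   excludeUsers: ["'root'@'localhost'", "'root'@'%'"]})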

Therefore, in our case the wp user needs to be created manually in MDS (you will find the statements in the dump's @.users.sql file).
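
If you prefer to write the statements yourself rather than copy them, they look like this (a minimal sketch: the password is a placeholder, the host part must match the subnet of the WordPress instance, and I assume the standard WordPress grant of all privileges on its schema):

mysql> CREATE USER 'wp'@'10.0.1.%' IDENTIFIED BY 'MyWPs3cr3t!Passw0rd';
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wp'@'10.0.1.%';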

If you are a MySQL Replication specialist, you have already noticed that one important thing is still missing after loading the dump.

We still need to set the GTID purged information from when the dump was taken.

In MDS, this operation can be achieved by calling a dedicated procedure: sys.set_gtid_purged().

Now let’s find the value we need to pass to it. The GTID executed value from the dump is written in the @.json file in the dump directory:

$ cat dump/@.json | grep gtidEx
"gtidExecuted": "7c9286c7-dd49-11ea-8564-020017013873:1-369",

Now that we know the value, we can specify it in our MDS instance:

MySQL 10.0.1.9:33060+ ssl JS > \sql call sys.set_gtid_purged("7c9286c7-dd49-11ea-8564-020017013873:1-369")
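
We can quickly verify that the value was taken into account:

MySQL 10.0.1.9:33060+ ssl JS > \sql select @@gtid_purged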

Replication user creation

On the production MySQL (the one still running on the OCI compute instance), we need to create a user dedicated to replication:

mysql> CREATE USER 'repl'@'10.0.1.%' IDENTIFIED BY 'C0mpl1c4t3d!Paddw0rd' REQUIRE SSL; 
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.1.%';
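
A quick way to verify the account and its grant:

mysql> SHOW GRANTS FOR 'repl'@'10.0.1.%';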

Create the replication channel

We go back on OCI’s dashboard and in our MDS instance’s details page, we click on Channels:

We now create a channel and follow the wizard:

We use the credentials we just created, and as hostname we put either the IP or the FQDN of the compute instance.

After a little while, the channel will be created and, in MySQL Shell connected to your MDS instance, you can see that replication is running:

I invite you to use this query to have all the information you need (cut and paste it 😉 ):

SELECT
  concat(conn_status.channel_name, ' (', worker_id, ')') AS channel,
  conn_status.service_state AS io_state,
  applier_status.service_state AS sql_state,
  format_pico_time(if(GTID_SUBTRACT(LAST_QUEUED_TRANSACTION, LAST_APPLIED_TRANSACTION) = "", "0",
    abs(time_to_sec(if(time_to_sec(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP)=0, 0,
    timediff(APPLYING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP, now()))))) * 1000000000000) latency,
  format_pico_time((LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP -
    LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP) * 1000000000000) transport_time,
  format_pico_time((LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP -
    LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP) * 1000000000000) time_to_relay_log,
  format_pico_time((LAST_APPLIED_TRANSACTION_END_APPLY_TIMESTAMP -
    LAST_APPLIED_TRANSACTION_START_APPLY_TIMESTAMP) * 1000000000000) apply_time,
  conn_status.LAST_QUEUED_TRANSACTION AS last_queued_transaction,
  applier_status.LAST_APPLIED_TRANSACTION AS last_applied_transaction
FROM
  performance_schema.replication_connection_status AS conn_status
  JOIN performance_schema.replication_applier_status_by_worker AS applier_status
    ON applier_status.channel_name = conn_status.channel_name;

This is an example of the output:

Point WordPress to MDS

Now you need to change the location of your MySQL server in WordPress. If you prefer, you can put WordPress in Maintenance Mode (there are a few plugins to do that) or just change the wp-config.php file immediately.

You can of course wait a bit to see how MDS behaves and to get familiar with it. Replication is set up and running. When you are ready, just make the change.
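
One way to check that MDS has caught up before switching is to compare GTID sets: take @@gtid_executed on the source and subtract it from what MDS has applied (a sketch; replace the placeholder with the real value):

-- on the source running on the compute instance
mysql> select @@gtid_executed;

-- on MDS: an empty string means all source transactions have been applied
MySQL 10.0.1.9:33060+ ssl JS > \sql select gtid_subtract('<gtid_executed from the source>', @@gtid_executed)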

In my case I need to replace

/** MySQL hostname */
define( 'DB_HOST', '10.0.1.5' );

by

/** MySQL hostname */
define( 'DB_HOST', '10.0.1.9' );
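
Optionally, before reloading the site, you can confirm from the WordPress compute instance that the new host accepts the wp credentials (assuming the mysql client is installed there):

$ mysql -h 10.0.1.9 -u wp -p wordpress -e "select count(*) from wp_posts"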

And that’s it! Wooohoooo, it works \o/

We are now running WordPress with MDS!

Afterwards, you can stop the replication channel and terminate the MySQL compute instance.
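
As an extra precaution before terminating it, you can make the old instance read-only so that no forgotten client can still write to it (an optional step; enabling super_read_only also implicitly enables read_only):

mysql> SET PERSIST super_read_only=ON;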

