Deploying WordPress and MySQL on OKE using MySQL Operator for Kubernetes

Let’s see how to deploy WordPress and MySQL on a Kubernetes Cluster. The Kubernetes cluster we are using is OKE (Oracle Kubernetes Engine) in OCI (Oracle Cloud Infrastructure):

OKE Cluster

We start by creating a Kubernetes Cluster on OCI using the Console:

We select the Quick create mode:

We need to name our cluster and make some choices:

When created, we can find it in the OKE Clusters list:

And we can see the pool of worker nodes and the workers:

kubectl

I like to use kubectl directly on my laptop to manage my K8s Cluster.

On my Linux Desktop, I need to install the kubernetes-client package (rpm).
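
The exact package name depends on the distribution; on a Fedora-based desktop, for example, something along these lines should work (assuming the kubernetes-client rpm is available in your configured repositories):

$ sudo dnf install kubernetes-client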

Then, on the K8s Cluster details page, you can click on Access Cluster to get all the commands to use:
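
The Console generates the exact command for your cluster, but with the OCI CLI installed it usually looks something like this (the cluster OCID and region below are placeholders):

$ oci ce cluster create-kubeconfig \
    --cluster-id ocid1.cluster.oc1..<your-cluster-ocid> \
    --file $HOME/.kube/config \
    --region <your-region> \
    --token-version 2.0.0 \
    --kube-endpoint PUBLIC_ENDPOINT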

We need to copy them into our terminal and then, I also like to enable bash completion for kubectl in my environment:

$ source <(kubectl completion bash)
$ echo "source <(kubectl completion bash)" >> $HOME/.bashrc

And now, we can easily test it:

$ kubectl get nodes
NAME          STATUS   ROLES   AGE     VERSION
10.0.10.155   Ready    node    21s     v1.27.2
10.0.10.193   Ready    node    21s     v1.27.2
10.0.10.216   Ready    node    21s     v1.27.2

MySQL Operator Deployment

To deploy MySQL, we use the mysql-operator for Kubernetes, which manages the deployment of an InnoDB Cluster (including MySQL Router).

This is an overview of the architecture:

We start by installing the operator using manifest files:

$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
customresourcedefinition.apiextensions.k8s.io/innodbclusters.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/mysqlbackups.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/clusterkopfpeerings.zalando.org created
customresourcedefinition.apiextensions.k8s.io/kopfpeerings.zalando.org created
$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
clusterrole.rbac.authorization.k8s.io/mysql-operator created
clusterrole.rbac.authorization.k8s.io/mysql-sidecar created
clusterrolebinding.rbac.authorization.k8s.io/mysql-operator-rolebinding created
clusterkopfpeering.zalando.org/mysql-operator created
namespace/mysql-operator created
serviceaccount/mysql-operator-sa created
deployment.apps/mysql-operator created

We can verify that the mysql-operator for Kubernetes has been successfully deployed and that it’s ready:

$ kubectl get deployment mysql-operator --namespace mysql-operator
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           35s
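
If the deployment does not become ready, the operator pod and its logs are the first place to look:

$ kubectl -n mysql-operator get pods
$ kubectl -n mysql-operator logs deploy/mysql-operator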

MySQL InnoDB Cluster

For our architecture, we will create two namespaces:

  • web-database: for everything related to MySQL
  • web-frontend: for the webservers

This is how we create them:

$ kubectl create ns web-database
$ kubectl create ns web-frontend

We need to create the password for the root MySQL user. We use a K8s Secret resource to store the credentials:

$ kubectl -n web-database create secret generic mypwds \
        --from-literal=rootUser=root \
        --from-literal=rootHost=% \
        --from-literal=rootPassword="Passw0rd!"

We can verify that the credentials were created correctly:

$ kubectl -n web-database get secret mypwds
NAME     TYPE     DATA   AGE
mypwds   Opaque   3      112s

Just for fun, you can try to decode the password (or in case you forgot it):

$ kubectl -n web-database get secret mypwds -o yaml | \
> grep rootPassword | cut -d: -f 2 | xargs | base64 -d
Passw0rd!
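
A slightly cleaner way to do the same thing is to let kubectl extract the field with a JSONPath expression:

$ kubectl -n web-database get secret mypwds \
    -o jsonpath='{.data.rootPassword}' | base64 -d
Passw0rd!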

To deploy our first MySQL InnoDB Cluster, we need to create a YAML file (mycluster.yaml):

apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: mycluster
spec:
  secretName: mypwds
  tlsUseSelfSigned: true
  instances: 3
  router:
    instances: 1

And we deploy it in the web-database namespace:

$ kubectl -n web-database apply -f mycluster.yaml    
innodbcluster.mysql.oracle.com/mycluster created

We can verify the status of the pods:

$ kubectl -n web-database get pods
NAME          READY   STATUS            RESTARTS   AGE
mycluster-0   2/2     Running           0          80s
mycluster-1   0/2     PodInitializing   0          80s
mycluster-2   0/2     Init:2/3          0          80s
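
We can also follow the InnoDBCluster resource itself; once all instances have joined the cluster, its STATUS should report ONLINE (the output will look roughly like this):

$ kubectl -n web-database get innodbcluster mycluster
NAME        STATUS   ONLINE   INSTANCES   ROUTERS   AGE
mycluster   ONLINE   3        3           1         5m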

After a while, we can check several resources that have been deployed by the operator:
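
For example, listing the main resource types in the namespace shows the StatefulSet behind the MySQL pods, the router deployment, the Services (including the mycluster Service we will reach through the router on port 6446) and the PersistentVolumeClaims:

$ kubectl -n web-database get statefulsets,deployments,services,pvc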

We will deploy a new pod to connect to our MySQL instances using MySQL Shell. We connect through the router (port 6446):

$ kubectl run --rm -it myshell \
--image=container-registry.oracle.com/mysql/community-operator -n web-database -- \
 mysqlsh root@mycluster.web-database.svc.cluster.local:6446
If you don't see a command prompt, try pressing enter.
********

And in MySQL Shell, we create the wordpress database and a dedicated user:

SQL> create database wordpress;

SQL> create user wordpress identified by 'W0rdPress';

SQL> grant all privileges on wordpress.* to wordpress;
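
Still in MySQL Shell, we can quickly verify the privileges of the new user:

SQL> show grants for wordpress;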

WordPress

It’s time to deploy WordPress. Once again, we create a new YAML file (wordpress.yaml):

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:latest
        name: wordpress
        env:
        - name: WORDPRESS_DB_NAME
          value: wordpress
        - name: WORDPRESS_DB_HOST
          value: mycluster.web-database.svc.cluster.local:6446
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        - name: WORDPRESS_DB_USER
          value: wordpress
        - name: WORDPRESS_CONFIG_EXTRA # enable SSL connection for MySQL
          value: |
            define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim 

It’s very important to force the use of client SSL for mysqli. Otherwise, WordPress won’t work with a MySQL user using the recommended authentication plugin, caching_sha2_password.
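
Note also that the Deployment above reads WORDPRESS_DB_PASSWORD from a Secret called mysql-pass in the web-frontend namespace. If you haven’t created it yet, something like this (using the password of the wordpress user we created in MySQL Shell) does the trick:

$ kubectl -n web-frontend create secret generic mysql-pass \
        --from-literal=password='W0rdPress'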

The deployment is easy:

$ kubectl -n web-frontend apply -f wordpress.yaml

And we can even scale the WordPress web servers very easily:

$ kubectl scale deployment wordpress --replicas=3 -n web-frontend
deployment.apps/wordpress scaled
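
A quick look at the pods confirms that the three WordPress replicas are running:

$ kubectl -n web-frontend get pods -l app=wordpress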

And everything is deployed and ready to use:
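
The public IP assigned by the OCI Load Balancer can be found in the EXTERNAL-IP column of the wordpress Service:

$ kubectl -n web-frontend get service wordpress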

If we use the external public IP, we reach the last step of the WordPress installation:

I’ve installed a PHP code snippet to see which of the 3 web servers I’m reaching when visiting the website:
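
The snippet itself can be as simple as printing the pod’s hostname, something like this (a minimal sketch, not the exact code I used):

<?php echo "Served by: " . gethostname(); ?>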

We can see that we are load balancing the requests across the 3 web servers.

This is what we deployed:

Conclusion

In this post, we have walked through the steps to successfully deploy WordPress and MySQL on Oracle Kubernetes Engine (OKE) using the official mysql-operator for Kubernetes. This process simplifies the complex task of managing databases, automating many of the complicated steps involved in setting up High Availability.

Happy MySQL deployments in OKE using MySQL Operator for Kubernetes!

