Using OCI Serverless Functions and API Gateways to create logical dumps of a MySQL Database Service with MySQL Shell

When using cloud native application architectures, Functions play an important role: they run business logic without requiring you to manage any infrastructure. Functions is a serverless platform powered by the open source Fn Project engine.

These Functions are mostly used to create and run different tasks. They can be executed using the Fn Project CLI, the OCI CLI, or an HTTP request.

HTTP requests are useful to invoke Functions from a different service, typically to execute a function on a time schedule. Currently, Oracle Cloud Infrastructure does not provide a native mechanism to create time-based schedules to run jobs such as invoking a Function, either directly or via an HTTP request.

There are some workarounds (see this post) but they won’t be compatible with today’s topic.

The Task

The first thing we need to define is the job of our function. We also need to define some variables and their scope.

The end goal of our function is that, when it is invoked, a dump of a MySQL Database Service instance is executed and the resulting logical dump is stored in Object Storage. Of course, MySQL Shell is used to perform the logical dump.

The Variables

A Function (which is part of an Application) can receive information via Configuration Variables or by parsing a JSON document received in the body.

Configuration variables can be defined at the Application level (the parent of the functions) or at the Function level. These settings do not change between invocations of the function; they are similar to static variables.

Dynamic variables (data sent as JSON in the body) can be different at each execution.

To be able to perform a MySQL Shell Dump of a MySQL Database Instance to Object Storage, the following information is required:

  • mds_host: the host of MySQL Database Service Instance (can be the IP)
  • mds_port: the port where MySQL is listening (usually 3306)
  • mds_user: the MySQL user that will be used to perform the dump
  • mds_password: the password of the MySQL user
  • mds_name: the name of the MDS instance (we only use it for the backup’s name, no check is performed)
  • obs_bucket: the Object Storage bucket’s name
  • obs_namespace: the Object Storage namespace
  • oci_fingerprint: MySQL Shell requires an OCI config file, so we need several pieces of information to create it on the serverless instance, starting with the user’s API Key fingerprint
  • oci_region: the Region
  • oci_user: the OCID of the user
  • oci_tenancy: the tenancy’s OCID
  • oci_key: the key file base64 encoded string

Some variables won’t change between invocations of the Function. Those variables become configuration variables:

  • obs_bucket
  • obs_namespace
  • oci_fingerprint
  • oci_region
  • oci_user
  • oci_tenancy
  • oci_key

All the other ones will be part of the data we send each time we execute our function. This allows us to dump multiple MySQL Database instances using the same Function.
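Conceptually, the function ends up combining both sets of values: the static configuration plus the per-invocation JSON body. A minimal sketch (values are illustrative, and this uses plain dictionaries rather than the Fn SDK):

```python
import json

# Static settings defined once at the Application level
# (illustrative values)
config = {
    "obs_bucket": "my-bucket",
    "obs_namespace": "my-namespace",
    "oci_region": "us-ashburn-1",
}

# Dynamic settings sent as the JSON body of each invocation,
# which lets the same Function dump different MDS instances
body = json.loads('{"mds_host": "10.0.1.10", "mds_port": "3306", '
                  '"mds_user": "admin", "mds_name": "my_mds"}')

# The handler combines both sets to build the dump parameters
params = {**config, **body}
print(params["obs_bucket"], params["mds_host"])
```

Only the body changes between invocations; the configuration stays attached to the Application.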


It’s time to create our Application!

In OCI Console, we go to Functions Applications and create a new one:

We need to provide a name and, very importantly, select the VCN used by our MDS instance(s):

As soon as the application is ACTIVE (which happens very quickly), we can follow the instructions in the Getting Started section:

We launch Cloud Shell and we cut and paste the commands:

Don’t forget to generate your Auth Token (point 5) and save it.

In Cloud Shell we can list our apps and we should see the one we created:

cloudshell:~ (us-ashburn-1)$ fn list apps
NAME                    ID
mds_logical_dump        ocid1.fnapp.oc1....3uwiq

We then initialize our function (mysqlshell_dump) using the Python runtime:

cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysqlshell_dump
Creating function at: ./mysqlshell_dump
Function boilerplate generated.
func.yaml created.


Now, we need to create a custom Dockerfile because the default one doesn’t include MySQL Shell.

So in Cloud Shell, we will change directory to mysqlshell_dump and create a Dockerfile containing the following content:

FROM fnproject/python:3.8-dev as build-stage
WORKDIR /function
ADD requirements.txt /function/

RUN pip3 install --target /python/ --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp/* requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python

ADD . /function/

RUN rm -fr /function/.pip_cache

FROM fnproject/python:3.8
WORKDIR /function

COPY --from=build-stage /python /python
COPY --from=build-stage /function /function

RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn

# The two repository RPM URLs were elided in this copy; install the MySQL
# community repository RPMs for the base image's platform here
# (placeholders below, replace them with the real URLs):
RUN rpm -U <mysql-community-repo-rpm-url> \
  && rpm -U <mysql-shell-repo-rpm-url>
RUN microdnf install -y mysql-shell
RUN microdnf install -y glibc-all-langpacks

ENV PYTHONPATH=/function:/python
ENTRYPOINT ["/python/bin/fdk", "/function/", "handler"]

The Function

The function’s code is defined in func.py (generated by fn init). We replace its content with the following code:

import io
import json
import logging
import subprocess
import os
import base64

from fdk import response
from datetime import datetime


def handler(ctx, data: io.BytesIO = None):
    try:
        # Static settings: the Application configuration variables
        cfg = ctx.Config()
        obs_bucket = cfg["bucket"]
        obs_namespace = cfg["namespace"]
        oci_fingerprint = cfg["oci_fingerprint"]
        oci_region = cfg["oci_region"]
        oci_user = cfg["oci_user"]
        oci_tenancy = cfg["oci_tenancy"]
        oci_key = cfg["oci_key"]
    except (Exception, ValueError) as ex:
        logging.getLogger().error('ERROR: Missing configuration key: %s', ex)

    try:
        # Dynamic settings: the JSON body sent at invocation time
        body = json.loads(data.getvalue())
        mds_host = body.get("mds_host")
        mds_port = body.get("mds_port")
        mds_user = body.get("mds_user")
        mds_pwd = body.get("mds_password")
        mds_name = body.get("mds_name")
        backup_name = mds_name.replace(" ", "_")
        backup_name = "{}_{}".format(
            backup_name, datetime.utcnow().strftime("%Y%m%d%H%M"))
        logging.getLogger().info('oci_fingerprint: {}'.format(oci_fingerprint))
    except (Exception, ValueError) as ex:
        logging.getLogger().error('ERROR: Missing parameter: %s', ex)

    try:
        # MySQL Shell needs an OCI config file and the API key on disk
        with open('/tmp/.oci_config', 'w') as f:
            f.write("[DEFAULT]\n")
            f.write("user={}\n".format(oci_user))
            f.write("fingerprint={}\n".format(oci_fingerprint))
            f.write("tenancy={}\n".format(oci_tenancy))
            f.write("region={}\n".format(oci_region))
            f.write("key_file=/tmp/key.pem\n")
        with open('/tmp/key.pem', 'w') as g:
            g.write(base64.b64decode(oci_key).decode('ascii'))
    except (Exception, ValueError) as ex:
        logging.getLogger().error('ERROR: Problem creating OCI config: %s', ex)

    logging.getLogger().info("Inside Python MDS logical dump function")
    my_env = os.environ.copy()
    my_env["LC_ALL"] = "en_US.UTF-8"
    shell_cmd = subprocess.Popen(
        ['/usr/bin/mysqlsh',
         '{}@{}:{}'.format(mds_user, mds_host, mds_port),
         '--force', '--log-file=/tmp/shell.log',
         '--password={}'.format(mds_pwd),
         '--', 'util', 'dump_instance', backup_name,
         '--osBucketName={}'.format(obs_bucket),
         '--osNamespace={}'.format(obs_namespace),
         '--ociConfigFile=/tmp/.oci_config',
         '--compatibility=strip_definers,strip_restricted_grants'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE, env=my_env)
    logging.getLogger().info("Subprocess called")
    output, errors = shell_cmd.communicate()
    logging.getLogger().info("Subprocess communicated")
    logging.getLogger().info("Output: {}".format(output))
    logging.getLogger().info("Errors: {}".format(errors))
    return response.Response(
        ctx, response_data=json.dumps(
            {"message": "MDS Dump in progress: {}".format(backup_name)}),
        headers={"Content-Type": "application/json"}
    )

This is what we have now in our function directory on Cloud Shell:

cloudshell:mysqlshell_dump (us-ashburn-1)$ ls
Dockerfile  func.py  func.yaml  requirements.txt

And we can deploy the application:
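A minimal deploy uses the Fn CLI from the function’s directory (assuming the application name created earlier):

```shell
cd mysqlshell_dump
fn -v deploy --app mds_logical_dump
```

This builds the container image from our custom Dockerfile, pushes it to OCIR, and updates the function.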

Configuration Variables

We can set the configuration variables directly from Cloud Shell or using OCI Console:

fn config app mds_logical_dump oci_region "us-ashburn-1"
fn config app mds_logical_dump bucket "lefred-bucket"
fn config app mds_logical_dump namespace "i...j"
fn config app mds_logical_dump oci_fingerprint "58:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:4b"
fn config app mds_logical_dump oci_user "ocid1.user.oc1..aa...vqa"
fn config app mds_logical_dump oci_tenancy "ocid1.tenancy.oc1..aa...2a"
fn config app mds_logical_dump oci_key "LS0tLS1...0tLQ=="       

For the oci_key in base64, this is how I generate the string from the key.pem file:

$ python
>>> import base64
>>> a = """-----BEGIN PRIVATE KEY-----
... ...
... -----END PRIVATE KEY-----"""
>>> base64.b64encode(a.encode('ascii')).decode('ascii')
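Equivalently, the base64 string can be produced straight from the key file. A self-contained sketch (it writes a dummy key first; in practice key.pem is your real API private key):

```python
import base64

# Dummy key content so the snippet runs standalone
pem = "-----BEGIN PRIVATE KEY-----\nMIIE...\n-----END PRIVATE KEY-----"
with open("key.pem", "w") as f:
    f.write(pem)

# Encode the file content to the base64 string used for oci_key
with open("key.pem", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")
print(encoded)
```

The function decodes this string the same way before writing /tmp/key.pem.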

Once the variables have been added using Cloud Shell, we can see them in OCI Console too:

We could also have added them directly from OCI Console.

Invoking the Function

The first time, the best way to invoke the function is to use Cloud Shell again with the Fn Project CLI:

cloudshell:mysqlshell_dump (us-ashburn-1)$ echo -n '{"mds_host": "", "mds_user": "admin", "mds_port": "3306", "mds_password": "my password", "bucket": "lefred-bucket", "namespace": "i.....j", "mds_name": "my_mds"}' | fn invoke mds_logical_dump mysqlshell_dump

And we can verify in Object Storage the dump in our bucket:

The second possibility is to use OCI’s CLI with the OCID of the function:
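With the OCI CLI, the invocation could look like this (the function OCID and payload values are placeholders):

```shell
oci fn function invoke \
    --function-id ocid1.fnfunc.oc1..xxxxx \
    --body '{"mds_host": "10.0.1.10", "mds_user": "admin", "mds_port": "3306", "mds_password": "my password", "mds_name": "my_mds"}' \
    --file "-"
```

The `--file "-"` option writes the function’s JSON response to standard output.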

API Gateway

Now that our function works, we can create an API Gateway. For this example, I will use a public API so a logical dump can be started from anywhere:

We can now create a deployment for our function:

And we specify the function we want to use and the method it supports. As we need to pass JSON information related to the instance we want to dump, I use the POST method:

As soon as the deployment is active, we can test it:

We use the endpoint‘s url like this:
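For instance, a POST request with curl could look like the following (the gateway hostname and route are placeholders for your deployment’s endpoint):

```shell
curl -s -X POST \
    -H "Content-Type: application/json" \
    -d '{"mds_host": "10.0.1.10", "mds_user": "admin", "mds_port": "3306", "mds_password": "my password", "mds_name": "my_mds"}' \
    https://xxxxx.apigateway.us-ashburn-1.oci.customer-oci.com/dump
```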


Now we can use this API with any external or home-made solution (like cron jobs) to trigger a MySQL Shell logical dump of our MDS instance to Object Storage.

This is useful when you want a cloud native serverless solution. However, if you have a large database and want to perform regular logical dumps of it, you might not fully benefit from the MySQL Shell Dump & Load utility’s speed and parallelism, as the deployed instance can have a maximum of 2GB of RAM (defined in func.yaml).

A powerful dedicated compute instance for MySQL Shell is then recommended.
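For reference, the memory limit lives in func.yaml; a 2GB (2048 MB) setting would look like this (values shown are an assumption based on the boilerplate generated earlier):

```yaml
schema_version: 20180708
name: mysqlshell_dump
version: 0.0.1
runtime: python
entrypoint: /python/bin/fdk /function/ handler
memory: 2048
```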

Enjoy using MySQL in OCI.


As MySQL Community Manager, I am an employee of Oracle and the views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

You can find articles I wrote on Oracle’s blog.