In cloud native application architectures, Functions play an important role: they run business logic without requiring you to manage any infrastructure. Oracle Functions is a serverless platform powered by the Fn Project open source engine.
Functions are mostly used to create and run discrete tasks. They can be executed using the Fn Project CLI, the OCI CLI, or an HTTP request.
HTTP requests are useful to invoke Functions from a different service, for example to execute a function according to a time schedule. Currently, Oracle Cloud Infrastructure does not provide a native mechanism to create time-based schedules that run jobs such as invoking a Function directly or via an HTTP request.
There are some workarounds (see this post), but they are not compatible with today's topic.
The Task
The first thing we need to define is the job of our function. We also need to define some variables and their scope.
The goal of our function is that, when invoked, it performs an instance dump of a MySQL Database Service (MDS) instance and stores the logical dump in Object Storage. Of course, MySQL Shell is used to perform the logical dump.
The Variables
A Function (which is part of an Application) can receive information through configuration variables or by parsing a JSON document received in the request body.
Configuration variables can be defined at the Application level (the parent of the functions) or at the Function level. They do not change between invocations of the function; they behave like static variables.
Dynamic variables (data sent as JSON in the body) can be different at each execution.
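The distinction can be sketched like this (a minimal, hypothetical illustration: the config dict stands in for the fdk context's `ctx.Config()`, and the body for the JSON sent at invocation time):

```python
import io
import json

# Static configuration: set once on the Application/Function,
# identical for every invocation (like ctx.Config() in fdk).
STATIC_CONFIG = {
    "obs_bucket": "my-bucket",
    "oci_region": "us-ashburn-1",
}

def read_variables(config: dict, body: io.BytesIO) -> dict:
    """Merge static config with the per-invocation JSON data."""
    dynamic = json.loads(body.getvalue())  # fresh on every call
    merged = dict(config)
    merged.update(dynamic)
    return merged

# Each invocation can carry different dynamic values:
payload = io.BytesIO(b'{"mds_host": "10.0.1.15", "mds_name": "my_mds"}')
variables = read_variables(STATIC_CONFIG, payload)
```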
To be able to perform a MySQL Shell Dump of a MySQL Database Instance to Object Storage, the following information is required:
mds_host: the host of the MySQL Database Service instance (can be the IP)
mds_port: the port where MySQL is listening (usually 3306)
mds_user: the MySQL user that will be used to perform the dump
mds_password: the password of the MySQL user
mds_name: the name of the MDS instance (we only use it for the backup's name, no check is performed)
obs_bucket: the Object Storage bucket's name
obs_namespace: the Object Storage namespace
oci_fingerprint: the user's API Key fingerprint (MySQL Shell requires an OCI config file, so we need several pieces of information to create it inside the serverless instance)
oci_region: the region
oci_user: the OCID of the user
oci_tenancy: the tenancy's OCID
oci_key: the key file as a base64-encoded string
Some variables won’t change between invocations of the Function. Those become configuration variables:
obs_bucket
obs_namespace
oci_fingerprint
oci_region
oci_user
oci_tenancy
oci_key
All the others will be part of the data we send each time we execute our function. This allows us to dump multiple MySQL Database instances using the same Function.
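The per-invocation data is just a small JSON document. As a sketch (all values here are hypothetical placeholders), it can be assembled like this:

```python
import json

# Hypothetical values for one MDS instance; a different instance just
# means a different payload sent to the same Function.
payload = {
    "mds_host": "10.0.1.15",       # IP of the MDS instance
    "mds_port": "3306",            # MySQL classic protocol port
    "mds_user": "admin",           # user performing the dump
    "mds_password": "my password",
    "mds_name": "my_mds",          # only used to build the backup's name
}
body = json.dumps(payload)         # this string is sent as the request body
```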
Application
It’s time to create our Application!
In the OCI Console, we go to Functions > Applications and create a new one:
We need to provide a name and, very importantly, select the VCN used by our MDS instance(s):
As soon as the application is ACTIVE (very quickly):
we can follow the information in the Getting Started section:
We launch Cloud Shell and copy and paste the commands:
Don’t forget to generate your Auth Token (point 5) and save it.
In Cloud Shell we can list our apps and we should see the one we created:
```shell
cloudshell:~ (us-ashburn-1)$ fn list apps
NAME               ID
mds_logical_dump   ocid1.fnapp.oc1....3uwiq
```
We then initialize our function (mysqlshell_dump) using Python:
```shell
cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysqlshell_dump
Creating function at: ./mysqlshell_dump
Function boilerplate generated.
func.yaml created.
```
Dockerfile
Now we need to create a custom Dockerfile because the default one doesn’t include MySQL Shell.
So, in Cloud Shell, we change directory to mysqlshell_dump and create a Dockerfile with the following content:
```dockerfile
FROM fnproject/python:3.8-dev as build-stage
WORKDIR /function
ADD requirements.txt /function/
RUN pip3 install --target /python/ --no-cache --no-cache-dir -r requirements.txt && \
    rm -fr ~/.cache/pip /tmp* requirements.txt func.yaml Dockerfile .venv && \
    chmod -R o+r /python
ADD . /function/
RUN rm -fr /function/.pip_cache

FROM fnproject/python:3.8
WORKDIR /function
COPY --from=build-stage /python /python
COPY --from=build-stage /function /function
RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn
RUN rpm -U https://repo.mysql.com/mysql-community-minimal-release-el8.rpm \
    && rpm -U https://repo.mysql.com/mysql80-community-release-el8.rpm
RUN microdnf install -y mysql-shell
RUN microdnf install -y glibc-all-langpacks
ENV PYTHONPATH=/function:/python
ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]
```
The Function
The function’s code is defined in func.py. We replace its content with the following code:
```python
import io
import json
import logging
import subprocess
import os
import base64

from fdk import response
from datetime import datetime


def handler(ctx, data: io.BytesIO = None):
    try:
        cfg = ctx.Config()
        obs_bucket = cfg["bucket"]
        obs_namespace = cfg["namespace"]
        oci_fingerprint = cfg["oci_fingerprint"]
        oci_region = cfg["oci_region"]
        oci_user = cfg["oci_user"]
        oci_tenancy = cfg["oci_tenancy"]
        oci_key = cfg["oci_key"]
    except (Exception, ValueError) as ex:
        logging.getLogger().error('ERROR: Missing configuration key: %s', ex)
        raise
    try:
        body = json.loads(data.getvalue())
        mds_host = body.get("mds_host")
        mds_port = body.get("mds_port")
        mds_user = body.get("mds_user")
        mds_pwd = body.get("mds_password")
        mds_name = body.get("mds_name")
        backup_name = mds_name.replace(".", "_")
        backup_name = backup_name.replace(" ", "_")
        backup_name = "{}_{}".format(
            backup_name, datetime.utcnow().strftime("%Y%m%d%H%M"))
        logging.getLogger().info('oci_fingerprint: {}'.format(oci_fingerprint))
    except (Exception, ValueError) as ex:
        logging.getLogger().error('ERROR: Missing parameter: %s', ex)
        raise
    try:
        # MySQL Shell needs an OCI config file and the API key on disk
        with open('/tmp/.oci_config', 'w') as f:
            f.write('[DEFAULT]\n')
            f.write('user={}\n'.format(oci_user))
            f.write('fingerprint={}\n'.format(oci_fingerprint))
            f.write('tenancy={}\n'.format(oci_tenancy))
            f.write('region={}\n'.format(oci_region))
            f.write('key_file=/tmp/key.pem\n')
        with open('/tmp/key.pem', 'w') as g:
            g.write(base64.b64decode(oci_key.encode('ascii')).decode('ascii'))
    except (Exception, ValueError) as ex:
        logging.getLogger().error('ERROR: Problem creating OCI config: %s', ex)
        raise

    logging.getLogger().info("Inside Python MDS logical dump function")
    os.system('export LC_ALL="en_US.UTF-8"')
    my_env = os.environ.copy()
    my_env["MYSQLSH_USER_CONFIG_HOME"] = "/tmp"
    shell_cmd = subprocess.Popen(
        ['/usr/bin/mysqlsh',
         '{}@{}:{}'.format(mds_user, mds_host, mds_port),
         '--force', '--log-file=/tmp/shell.log',
         '--password={}'.format(mds_pwd),
         '--', 'util', 'dump_instance', backup_name,
         '--osBucketName={}'.format(obs_bucket),
         '--osNamespace={}'.format(obs_namespace),
         '--ociConfigFile=/tmp/.oci_config',
         '--compatibility=strip_definers,strip_restricted_grants'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
        stderr=subprocess.PIPE, env=my_env)
    logging.getLogger().info("Subprocess called")
    output, errors = shell_cmd.communicate()
    logging.getLogger().info("Subprocess communicated")
    shell_cmd.wait()
    logging.getLogger().info("Subprocess waited")
    logging.getLogger().info("Output: {}".format(output))
    logging.getLogger().info("Errors: {}".format(errors))

    return response.Response(
        ctx,
        response_data=json.dumps(
            {"message": "MDS Dump in progress: {}".format(backup_name)}),
        headers={"Content-Type": "application/json"}
    )
```
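The backup's name is derived from mds_name: dots and spaces are replaced with underscores and a UTC timestamp is appended, so repeated dumps of the same instance get distinct Object Storage prefixes. As a standalone sketch of that logic (the helper name is mine, not part of the function above):

```python
from datetime import datetime

def make_backup_name(mds_name: str) -> str:
    """Sanitize the instance name and append a UTC timestamp,
    mirroring the backup_name logic of the handler."""
    name = mds_name.replace(".", "_").replace(" ", "_")
    return "{}_{}".format(name, datetime.utcnow().strftime("%Y%m%d%H%M"))

name = make_backup_name("my mds.prod")
# e.g. "my_mds_prod_202303151230" (the timestamp part varies)
```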
This is what we have now in our function directory on Cloud Shell:
```shell
cloudshell:mysqlshell_dump (us-ashburn-1)$ ls
Dockerfile  func.py  func.yaml  requirements.txt
```
And we can deploy the application:
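Deploying from the function's directory can be done with the Fn CLI (a sketch, assuming the application name mds_logical_dump created earlier; the image is built from our custom Dockerfile, pushed to the registry, and deployed):

```shell
# Run from the mysqlshell_dump directory in Cloud Shell
fn -v deploy --app mds_logical_dump
```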
Configuration Variables
We can set the configuration variables directly from Cloud Shell or using OCI Console:
```shell
fn config app mds_logical_dump oci_region "us-ashburn-1"
fn config app mds_logical_dump bucket "lefred-bucket"
fn config app mds_logical_dump namespace "i...j"
fn config app mds_logical_dump oci_fingerprint "58:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:4b"
fn config app mds_logical_dump oci_user "ocid1.user.oc1..aa...vqa"
fn config app mds_logical_dump oci_tenancy "ocid1.tenancy.oc1..aa...2a"
fn config app mds_logical_dump oci_key "LS0tLS1...0tLQ=="
```
For the oci_key in base64, this is what I do to generate the string from the key.pem file:
```shell
$ python
>>> import base64
>>> a="""<PASTE HERE THE MULTILINE KEY>
...
...
... -----END PRIVATE KEY-----"""
>>> base64.b64encode(a.encode('ascii')).decode('ascii')
```
Once the variables have been added using Cloud Shell, we can see them in OCI Console too:
We could also have added them directly from OCI Console.
Invoking the Function
The first time, the best way to invoke the function is to use Cloud Shell again with the Fn Project CLI:
```shell
cloudshell:mysqlshell_dump (us-ashburn-1)$ echo -n '{"mds_host": "10.0.1.15", "mds_user": "admin", "mds_port": "3306", "mds_password": "my password", "bucket": "lefred-bucket", "namespace": "i.....j", "mds_name": "my_mds"}' | fn invoke mds_logical_dump mysqlshell_dump
```
And we can verify in Object Storage the dump in our bucket:
The second possibility is to use OCI’s CLI with the OCID of the function:
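With the OCI CLI, an invocation looks like this sketch (the function OCID and payload values are placeholders; the OCID is shown in the OCI Console or by `fn inspect function mds_logical_dump mysqlshell_dump`):

```shell
oci fn function invoke \
    --function-id ocid1.fnfunc.oc1....xxxx \
    --body '{"mds_host": "10.0.1.15", "mds_port": "3306", "mds_user": "admin", "mds_password": "my password", "mds_name": "my_mds"}' \
    --file -
```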
API Gateway
Now that our function works, we can create an API Gateway. For this example, I will use a public API so a logical dump can be started from anywhere:
We can now create a deployment for our function:
And we specify the function we want to use and the method it supports. As we need to pass JSON information about the instance we want to dump, I use the POST method:
As soon as the deployment is active, we can test it:
We use the endpoint's URL like this:
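A call to the deployment can be sketched with curl (the endpoint hostname and route here are hypothetical; use the endpoint URL shown on the API Gateway deployment page):

```shell
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"mds_host": "10.0.1.15", "mds_port": "3306", "mds_user": "admin", "mds_password": "my password", "mds_name": "my_mds"}' \
     https://xxxx.apigateway.us-ashburn-1.oci.customer-oci.com/dump
```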
Conclusion
Now we can use this API from any external project or home-made solution (like cron jobs) to trigger a MySQL Shell logical dump of our MDS instance to Object Storage.
This is useful when you want a cloud native serverless solution. However, if you have a large database and want to perform regular logical dumps of it, you might not fully benefit from the MySQL Shell Dump & Load utility's speed and parallelism, as the deployed function instance can have a maximum of 2GB of RAM (defined in func.yaml).
In that case, a powerful dedicated compute instance running MySQL Shell is recommended.
Enjoy using MySQL in OCI.