Analyzing queries in MySQL Database Service – Slow Query Log (part 1)

In my previous post, I explained how to use Performance_Schema and Sys to identify candidates for query optimization and to understand the workload on the database.

In this article, we will see how to create an OCI Fn Application that generates a slow query log from our MySQL Database Service instance and stores it in Object Storage.

The creation of the function and its use is similar to the one explained in the previous post about creating a logical dump of a MySQL instance to Object Storage.

The Application

We need to create an application and two functions: one to extract the data in JSON format and one in plain text. We also need to deploy an API Gateway that will allow us to call those functions from anywhere (publicly):

Let’s start by creating the application in OCI console:

We need to use the Public Subnet of our VCN to be able to reach it later from our API Gateway:

After we click on Create, we can see our new Application created:

We then need to follow the instructions displayed on the rest of the page. We use Cloud Shell:


It looks like this:

fdescamp@cloudshell:~ (us-ashburn-1)$ fn update context registry iad.ocir.io/i***********j/lefred
Current context updated registry with iad.ocir.io/i***********j/lefred
fdescamp@cloudshell:~ (us-ashburn-1)$ docker login -u 'idinfdw2eouj/fdescamp' iad.ocir.io
Password: **********
WARNING! Your password will be stored unencrypted in /home/fdescamp/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
fdescamp@cloudshell:~ (us-ashburn-1)$ fn list apps
NAME            ID
slow_query_log  ocid1.fnapp.oc1.iad.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxq

After that in Cloud Shell, we initialize our two new functions:

fdescamp@cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysql_slowlog_txt
Creating function at: ./mysql_slowlog_txt
Function boilerplate generated.
func.yaml created.
fdescamp@cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysql_slowlog
Creating function at: ./mysql_slowlog
Function boilerplate generated.
func.yaml created.

Both functions are initialized; we will start with the one dumping the queries in JSON format.

Function mysql_slowlog

As this is the first function of our application, we define a Dockerfile and the requirements (in requirements.txt) in the function's folder:

fdescamp@cloudshell:~ (us-ashburn-1)$ cd mysql_slowlog
fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ ls
Dockerfile  func.py  func.yaml  requirements.txt

We need to add the following content in the Dockerfile:

FROM fnproject/python:3.9-dev as build-stage
WORKDIR /function
ADD requirements.txt /function/

RUN pip3 install --target /python/  --no-cache --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp*  requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python

ADD . /function/

RUN rm -fr /function/.pip_cache

FROM fnproject/python:3.9
WORKDIR /function

COPY --from=build-stage /python /python
COPY --from=build-stage /function /function

RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn

ENV PYTHONPATH=/function:/python
ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]

The requirements.txt file needs to contain the following lines:

fdk>=0.1.48
oci
mysql-connector-python

We also need to modify the content of the func.yaml file to increase the memory to 2048:

memory: 2048
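
For reference, the rest of the generated func.yaml can stay exactly as produced by fn init. Depending on the fn CLI version, it should look similar to this sketch, with only the memory line changed by us:

schema_version: 20180708
name: mysql_slowlog
version: 0.0.1
runtime: python
build_image: fnproject/python:3.9-dev
run_image: fnproject/python:3.9
entrypoint: /python/bin/fdk /function/func.py handler
memory: 2048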

All the magic of the function resides in the Python file func.py.

Replace the content of func.py with the code from the file linked above.
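
As that file is only linked and not reproduced in this post, here is a minimal sketch of what such a handler could look like. It assumes the application variables configured later in this article (oci_user, oci_tenancy, oci_fingerprint, oci_region, oci_key, namespace, bucket) and the per-call parameters used in the Testing section (mds_host, mds_port, mds_user, mds_password, mds_name); the query against performance_schema is purely illustrative, the real function builds a much richer slow-log structure:

import base64
import datetime
import io
import json

import mysql.connector
import oci
from fdk import response


def handler(ctx, data: io.BytesIO = None):
    cfg = dict(ctx.Config())            # application variables (oci_user, namespace, bucket, ...)
    args = json.loads(data.getvalue())  # per-call variables (mds_host, mds_user, ...)

    # Connect to the MySQL Database Service instance and collect the statement history.
    cnx = mysql.connector.connect(
        host=args["mds_host"],
        port=int(args.get("mds_port", 3306)),
        user=args["mds_user"],
        password=args["mds_password"],
    )
    cur = cnx.cursor(dictionary=True)
    cur.execute(
        "SELECT * FROM performance_schema.events_statements_history_long "
        "WHERE sql_text IS NOT NULL"
    )
    rows = cur.fetchall()
    cnx.close()

    # Build the OCI config from the application variables; the key was stored base64-encoded.
    oci_cfg = {
        "user": cfg["oci_user"],
        "tenancy": cfg["oci_tenancy"],
        "fingerprint": cfg["oci_fingerprint"],
        "region": cfg["oci_region"],
        "key_content": base64.b64decode(cfg["oci_key"]).decode(),
    }
    object_storage = oci.object_storage.ObjectStorageClient(oci_cfg)

    # Upload the collected statements as a JSON file to the configured bucket.
    object_name = "slow_{}_{}.json".format(
        args["mds_name"], datetime.datetime.now().strftime("%Y%m%d%H%M")
    )
    object_storage.put_object(
        cfg["namespace"], cfg["bucket"], object_name,
        json.dumps(rows, default=str).encode()
    )

    return response.Response(
        ctx,
        response_data=json.dumps({"message": "MySQL Slow Log saved: " + object_name}),
        headers={"Content-Type": "application/json"},
    )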

Once done, we can deploy the function:

fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ fn -v deploy --app slow_query_log
Deploying mysql_slowlog to app: slow_query_log
Bumped to version 0.0.1
Using Container engine docker
Building image iad.ocir.io/i**********j/lefred/mysql_slowlog:0.0.1
Dockerfile content
-----------------------------------
FROM fnproject/python:3.9-dev as build-stage
WORKDIR /function
ADD requirements.txt /function/

RUN pip3 install --target /python/  --no-cache --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp*  requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python

ADD . /function/

RUN rm -fr /function/.pip_cache

FROM fnproject/python:3.9
WORKDIR /function

COPY --from=build-stage /python /python
COPY --from=build-stage /function /function

RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn

ENV PYTHONPATH=/function:/python
ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]


-----------------------------------
FN_REGISTRY:  iad.ocir.io/i**********j/lefred
Current Context:  us-ashburn-1
Sending build context to Docker daemon  9.728kB
Step 1/13 : FROM fnproject/python:3.9-dev as build-stage
 ---> 808c3fde4a95
Step 2/13 : WORKDIR /function
 ---> Using cache
 ---> 7953c328cf0e
Step 3/13 : ADD requirements.txt /function/
 ---> Using cache
 ---> 5d44308f3376
Step 4/13 : RUN pip3 install --target /python/  --no-cache --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp*  requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python
 ---> Using cache
 ---> 608ec9527aca
Step 5/13 : ADD . /function/
 ---> ae85dfe7245e
Step 6/13 : RUN rm -fr /function/.pip_cache
 ---> Running in 60421dfa5e4d
Removing intermediate container 60421dfa5e4d
 ---> 06de6b9b1860
Step 7/13 : FROM fnproject/python:3.9
 ---> d6c82f055722
Step 8/13 : WORKDIR /function
 ---> Using cache
 ---> b6bf41dd40e4
Step 9/13 : COPY --from=build-stage /python /python
 ---> Using cache
 ---> c895f3bb74f7
Step 10/13 : COPY --from=build-stage /function /function
 ---> b397ec7769a1
Step 11/13 : RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn
 ---> Running in 5af6a775d055
Removing intermediate container 5af6a775d055
 ---> fac578e4290a
Step 12/13 : ENV PYTHONPATH=/function:/python
 ---> Running in fe0bb2f24d6e
Removing intermediate container fe0bb2f24d6e
 ---> c0460b0ca6f9
Step 13/13 : ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]
 ---> Running in 0ed370d1b391
Removing intermediate container 0ed370d1b391
 ---> 6907b3653dac
Successfully built 6907b3653dac
Successfully tagged iad.ocir.io/i************j/lefred/mysql_slowlog:0.0.1

Parts:  [iad.ocir.io i*************j lefred mysql_slowlog:0.0.1]
Using Container engine docker to push
Pushing iad.ocir.io/i********j/lefred/mysql_slowlog:0.0.1 to docker registry...The push refers to repository [iad.ocir.io/i**********j/lefred/mysql_slowlog]
50019643244c: Pushed 
b2b65f9f6bdd: Pushed 
4ae76999236e: Layer already exists 
9dbf415302a5: Layer already exists 
fcc297df3f46: Layer already exists 
79b7117c006c: Layer already exists 
05dc728e5e49: Layer already exists 
0.0.1: digest: sha256:e0a693993c7470557fac557cba9a2a4d3e828fc2d21789afb7ebe6163f4d4c14 size: 1781
Updating function mysql_slowlog using image iad.ocir.io/i**********j/lefred/mysql_slowlog:0.0.1...

Function mysql_slowlog_txt

Now we go into the ../mysql_slowlog_txt directory and modify the func.yaml file to increase the memory to the same amount as the previous function (2048).

Then we copy the content of this file into func.py.

When done, we can deploy that function too:

fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ cd ../mysql_slowlog_txt/
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ vi func.yaml 
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ vi func.py 
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ fn -v deploy --app slow_query_log
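
The func.py of this second function also comes from the linked file. As an illustration only, the main difference with the JSON version could be a small formatting step that renders each collected row in the classic slow query log layout before uploading the result as a .log file (the row keys below are hypothetical):

# Illustrative sketch: render one collected row in the classic slow query log layout.
def format_slow_log_entry(row):
    return (
        "# Time: {time}\n"
        "# User@Host: {user}\n"
        "# Query_time: {query_time}  Lock_time: {lock_time}"
        "  Rows_sent: {rows_sent}  Rows_examined: {rows_examined}\n"
        "{sql_text};\n"
    ).format(
        time=row.get("time", ""),
        user=row.get("user", ""),
        query_time=row.get("query_time", 0),
        lock_time=row.get("lock_time", 0),
        rows_sent=row.get("rows_sent", 0),
        rows_examined=row.get("rows_examined", 0),
        sql_text=row.get("sql_text", ""),
    )

# The handler then joins the entries and uploads them with a .log extension, e.g.:
#   content = "".join(format_slow_log_entry(r) for r in rows)
#   object_storage.put_object(cfg["namespace"], cfg["bucket"], object_name, content.encode())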

Variables

Our application requires some variables to work. Some will be sent every time the function is called, like which MySQL instance to use, the user’s credentials, which Object Storage bucket to use, … Some will be “hardcoded” so we don’t have to specify them each time (like the tenancy, the OCI user, …).

We use Cloud Shell again to set those that won’t be specified at each call:

fn config app slow_query_log oci_fingerprint "fe:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:3d"
fn config app slow_query_log oci_tenancy 'ocid1.tenancy.oc1..xxxxx'
fn config app slow_query_log oci_user "ocid1.user.oc1..xxxxxx"
fn config app slow_query_log namespace "i********j"
fn config app slow_query_log bucket "lefred-bucket"
fn config app slow_query_log oci_region "us-ashburn-1"

We also need to provide an OCI API key as a string. The content of the string can be generated using the base64 command-line program.
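
For example, assuming the private API key is stored in ~/.oci/oci_api_key.pem (a hypothetical path, use your own key file matching the fingerprint configured above), the string can be produced on a single line like this:

base64 -w 0 ~/.oci/oci_api_key.pem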

And then we add it in Cloud Shell like this:

fn config app slow_query_log oci_key '<THE CONTENT OF THE STRING ABOVE>'

Testing

If the security list is well configured (the Private Subnet accepting connections on port 3306 from the Public Subnet) and we have an Object Storage bucket ready (with the name configured earlier), we can already test our functions directly from Cloud Shell:

fdescamp@cloudshell:~ (us-ashburn-1)$ echo -n '{"mds_host": "10.0.1.127",
 "mds_user": "admin", "mds_port": "3306", "mds_password": "Passw0rd!", 
 "mds_name": "lefred-mysql"}' | fn invoke slow_query_log mysql_slowlog
{"message": "MySQL Slow Log saved: slow_lefred-mysql_202210132114.json"}

fdescamp@cloudshell:~ (us-ashburn-1)$ echo -n '{"mds_host": "10.0.1.127",
 "mds_user": "admin", "mds_port": "3306", "mds_password": "Passw0rd!",
 "mds_name": "lefred-mysql"}' | fn invoke slow_query_log mysql_slowlog_txt
{"message": "MySQL Slow Log saved: slow_lefred-mysql_202210132124.log"}

We can see the files in Object Storage:

In the next article, we will see how to configure the API Gateway to call our application and store the statements on Object Storage.

And finally, we will see the content of those files and how to use them.

Stay tuned!


7 Comments

  1. I wonder, if the logic is Get the Log, Store it to the Cloud, Truncate the Log, would you not be losing a bunch of records?

    Not a big problem for a demo, but for Prod losing an unknown number of records is not a good idea. Especially in this case, when a system under duress may take a longer time to return the log, causing more records to be lost.

    Of course, I understand you need to deal with a bad design. Truncating is not a proper pattern for a log – purging until the timestamp up to which it was processed is better.

  2. Hi PZ, thank you for your comment.

    In fact, it’s not allowed to delete records in Performance_Schema tables, only truncate them.
    The function can be modified to truncate the history table just after collecting the records, before sending them to Object Storage.

    In the last part of this blog series, I will cover the limitations that, as you know, need to be considered when using performance_schema history tables.

    • Yes I know,

      My point is that Performance Schema needs to provide such functionality to make a clean design possible.

      Yes, you can move things closer, but you can’t eliminate the race condition. I dislike such designs as they may become a “footgun” in edge cases.

      • I already modified the workflow; it was already the case in the JSON function and I modified the plain text one.

        Being able to delete in pfs tables or truncate until a timestamp should be a new feature request.

        In many cases this can already provide accurate information to digest the workload. On a busy production system, fetching all queries from the slow query log is also not recommended, is it?

        Cheers.

        • Yes of course,

          This might be another thing to document.

          This is frankly another issue. Even at a “medium” load of 50K QPS (you folks showed MySQL can easily do an order of magnitude more) this will surely choke.

          It would actually be interesting to see at what query rate you are able to make this approach work reliably.

