Analyzing queries in MySQL Database Service

If, like me, you are an experienced MySQL DBA, you have certainly used the slow query log with long_query_time set to 0 to analyze your MySQL workload.
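
For reference, on a server where you manage the configuration yourself, enabling it looks something like this (a minimal sketch; the log file path is just an example):

SQL> SET GLOBAL slow_query_log = 'ON';
SQL> SET GLOBAL long_query_time = 0;
SQL> SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';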

The slow query log is a file that contains all the queries whose execution time is greater than the value of long_query_time. On busy systems, this file can grow huge and use up all the available disk space.

Constantly writing to that file can also add overhead to the server.

For those reasons, the slow query log is not available in MySQL Database Service (HeatWave) in OCI.

Plan B?

As a DBA, what are my options for finding the queries that need to be optimized? As usual, the queries that consume the most time are the ones that need attention.

That can be a single very long query, or a short query executed too many times.

These days, MySQL DBAs use Performance_Schema to analyze the queries executed on their databases.

I already wrote an article on how to use Performance_Schema and Sys. Let's find the query consuming the most execution time (latency) with this rewritten query (using the new format_pico_time() function):

SQL> select schema_name, format_pico_time(total_latency) tot_lat,
            exec_count, format_pico_time(total_latency/exec_count)
            latency_per_call, query_sample_text
     from sys.x$statements_with_runtimes_in_95th_percentile as t1
     join performance_schema.events_statements_summary_by_digest as t2
       on t2.digest=t1.digest 
    where schema_name not in ('performance_schema', 'sys')
 order by (total_latency/exec_count) desc limit 1\G
*************************** 1. row ***************************
      schema_name: employees
          tot_lat: 21.54 s
       exec_count: 4
 latency_per_call: 5.38 s
query_sample_text: select * from salaries where salary > 80000
1 row in set (0.0127 sec)
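
Ranking by latency per call highlights the slowest individual statements. If you are instead after the statements consuming the most cumulative time, the same query can simply be ordered by total_latency (same views, just a different ORDER BY):

SQL> select schema_name, format_pico_time(total_latency) tot_lat,
            exec_count, format_pico_time(total_latency/exec_count)
            latency_per_call, query_sample_text
     from sys.x$statements_with_runtimes_in_95th_percentile as t1
     join performance_schema.events_statements_summary_by_digest as t2
       on t2.digest=t1.digest
    where schema_name not in ('performance_schema', 'sys')
 order by total_latency desc limit 1\G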

The Sys schema also contains a statement analysis view, sys.statement_analysis, packed with information to hunt down bad queries.

Let’s see an example to illustrate all the available information:

SQL> select * from sys.statement_analysis 
     where db not in ('performance_schema', 'sys') limit 1\G
*************************** 1. row ***************************
            query: SELECT `new_table` . `title` , ... `title` ORDER BY `salary` DESC
               db: employees
        full_scan: 
       exec_count: 11
        err_count: 0
       warn_count: 0
    total_latency: 38.96 s
      max_latency: 5.15 s
      avg_latency: 3.54 s
     lock_latency: 33.00 us
      cpu_latency:   0 ps
        rows_sent: 77
    rows_sent_avg: 7
    rows_examined: 13053117
rows_examined_avg: 1186647
    rows_affected: 0
rows_affected_avg: 0
       tmp_tables: 22
  tmp_disk_tables: 11
      rows_sorted: 77
sort_merge_passes: 0
           digest: 922701de9e5c51847f9f7de245b88fef4080b515ba8805082cd90c32830714eb
       first_seen: 2022-10-12 20:45:50.770465
        last_seen: 2022-10-13 11:49:13.140228
1 row in set (0.0022 sec)
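
The full_scan flag and the rows_examined_avg column are particularly handy. Here is a sketch of a query to hunt for full table scans that examine many rows on average (the ordering and LIMIT are arbitrary choices):

SQL> select query, db, exec_count, rows_examined_avg, total_latency
     from sys.statement_analysis
    where db not in ('performance_schema', 'sys')
      and full_scan = '*'
 order by rows_examined_avg desc limit 5\G

Note that in sys.statement_analysis the latency columns are already returned in human-readable form (as in the output above), so there is no need to call format_pico_time() on them.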

Slow Query Log

But as I always say, old habits die hard… and sometimes people want to deal with a slow query log anyway.

For example, to generate something like this:

[Screenshot: MySQL Database Server – queries in Grafana Loki]

Or to digest the queries offline with a tool like pt-query-digest, which most MySQL DBAs have already used.
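
A typical invocation, assuming the slow query log has been retrieved to a local file (slow.log and slow_report.txt are example names):

$ pt-query-digest slow.log > slow_report.txt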

In the next article, I will show you how to use OCI Fn applications to generate a slow query log (in JSON or plain text) from MySQL Database Service and store it in Object Storage.

Stay tuned!

