MySQL and Memory: a love story (part 2)


We saw in the previous post that MySQL likes memory. We also saw how to perform operating system checks and some configuration changes for Swap and NUMA.

Today, we will check what MySQL server can tell us about its memory usage.

Introduced in MySQL 5.7 and enabled by default in MySQL 8.0, Performance_Schema's memory instrumentation gives us a much better overview of what MySQL allocates and why.

Let’s check on our MySQL server using SYS:

Note that there is a bug in how InnoDB Buffer Pool statistics are accounted for in Performance_Schema; it is fixed in MySQL 8.0.13.

The SYS schema provides 5 views exposing memory allocation information:

| Tables_in_sys (memory%)           |
| memory_by_host_by_current_bytes   |
| memory_by_thread_by_current_bytes |
| memory_by_user_by_current_bytes   |
| memory_global_by_current_bytes    |
| memory_global_total               |

It's possible to get an overview by "code area":

SELECT SUBSTRING_INDEX(event_name,'/',2) AS code_area, 
       sys.format_bytes(SUM(current_alloc)) AS current_alloc 
FROM sys.x$memory_global_by_current_bytes 
GROUP BY SUBSTRING_INDEX(event_name,'/',2) 
ORDER BY SUM(current_alloc) DESC;
| code_area                 | current_alloc |
| memory/innodb             | 333.47 MiB    |
| memory/performance_schema | 276.40 MiB    |
| memory/sql                | 28.54 MiB     |
| memory/mysys              | 8.96 MiB      |
| memory/temptable          | 7.00 MiB      |
| memory/mysqld_openssl     | 208.16 KiB    |
| memory/mysqlx             | 31.35 KiB     |
| memory/myisam             | 696 bytes     |
| memory/vio                | 624 bytes     |
| memory/csv                | 88 bytes      |
| memory/blackhole          | 88 bytes      |

Buffer Pool

When using InnoDB, one of the most important components is the InnoDB Buffer Pool. Every time an operation happens on a table (read or write), the page where the records (and indexes) are located is loaded into the Buffer Pool.

This means that if the data you read and write the most has its pages in the Buffer Pool, performance will be better than if pages have to be read from disk. Also, don't forget that when there are no free pages left, older pages must be evicted and, if they were modified, synchronized back to disk (checkpointing). When all the data you need is present in the Buffer Pool, we say that the working set fits in memory. You can have a data set of 3TB (with a lot of historical data that is never queried) but a working set of several GB or even less.

Since MySQL 8.0, if you have a dedicated server for MySQL, you can let MySQL configure the size of the Buffer Pool for you by setting innodb_dedicated_server to ON.
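As a sketch, enabling it could look like the fragment below. The option is not dynamic, so it goes in the server configuration file and takes effect at restart; the section name and file location are the usual defaults, adjust for your setup:

```ini
# my.cnf — on a host dedicated to MySQL, let the server size the
# Buffer Pool (and related InnoDB settings) from the detected RAM
[mysqld]
innodb_dedicated_server = ON
```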

It’s possible to verify how much the InnoDB Buffer Pool is filled with data using the Performance_Schema:

mysql> SELECT CONCAT(FORMAT(A.num * 100.0 / B.num,2),"%") BufferPoolFullPct FROM
	(SELECT variable_value num FROM performance_schema.global_status
	WHERE variable_name = 'Innodb_buffer_pool_pages_data') A,
	(SELECT variable_value num FROM performance_schema.global_status
	WHERE variable_name = 'Innodb_buffer_pool_pages_total') B;
| BufferPoolFullPct |
| 27.73%            |
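The percentage above is simply Innodb_buffer_pool_pages_data divided by Innodb_buffer_pool_pages_total. A minimal Python sketch of the same arithmetic, using hypothetical counter values close to the output above:

```python
def buffer_pool_full_pct(pages_data: int, pages_total: int) -> str:
    """Percentage of Buffer Pool pages currently holding data."""
    return f"{pages_data * 100.0 / pages_total:.2f}%"

# Hypothetical counters: 4437 data pages out of 16000 total pages
print(buffer_pool_full_pct(4437, 16000))  # → 27.73%
```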

As you can see, for the moment the working set fits in memory on this server, as there are still plenty of free pages in the InnoDB Buffer Pool.

Of course it’s also possible to see the memory allocation for the Buffer Pool using the following query:

mysql> select * from sys.memory_global_by_current_bytes 
where event_name = 'memory/innodb/buf_buf_pool'\G
*************************** 1. row ***************************
       event_name: memory/innodb/buf_buf_pool
    current_count: 2
    current_alloc: 262.12 MiB
current_avg_alloc: 131.06 MiB
       high_count: 2
       high_alloc: 262.12 MiB
   high_avg_alloc: 131.06 MiB

If you want to know which schemas or tables are present in the Buffer Pool, query one of these SYS schema views:

  • innodb_buffer_stats_by_schema
  • innodb_buffer_stats_by_table

You can also compare the pages requested by InnoDB with those that had to be read from disk to know whether your working set fits in memory. In this example, I check the requests over 1 minute on my database server:

show global status like 'innodb_buffer_pool_read%s';select sleep(60); show global status like 'innodb_buffer_pool_read%s';
| Variable_name                    | Value   |
| Innodb_buffer_pool_read_requests | 2459014 |
| Innodb_buffer_pool_reads         | 3550    |
2 rows in set (0.0026 sec)
| sleep(60) |
|         0 |
1 row in set (1 min 0.0006 sec)
| Variable_name                    | Value   |
| Innodb_buffer_pool_read_requests | 2465390 |
| Innodb_buffer_pool_reads         | 3550    |
2 rows in set (0.5880 sec)

We can see that during those 60 seconds there were 6,376 read requests, and all of them were served from the Buffer Pool (Innodb_buffer_pool_reads did not increase), which is great!
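The arithmetic behind that conclusion can be sketched as follows: diff the two counter samples and see how many logical read requests missed the Buffer Pool (sample values taken from the output above):

```python
def buffer_pool_window_stats(before: dict, after: dict) -> dict:
    """Delta of two samples: logical read requests vs. physical disk reads."""
    requests = (after["Innodb_buffer_pool_read_requests"]
                - before["Innodb_buffer_pool_read_requests"])
    misses = after["Innodb_buffer_pool_reads"] - before["Innodb_buffer_pool_reads"]
    return {"requests": requests, "misses": misses,
            "hit_ratio": 100.0 * (requests - misses) / requests}

before = {"Innodb_buffer_pool_read_requests": 2459014, "Innodb_buffer_pool_reads": 3550}
after  = {"Innodb_buffer_pool_read_requests": 2465390, "Innodb_buffer_pool_reads": 3550}
print(buffer_pool_window_stats(before, after))
# → {'requests': 6376, 'misses': 0, 'hit_ratio': 100.0}
```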

You can find similar information in the output of SHOW ENGINE INNODB STATUS, on the Buffer pool hit rate line. 1000 / 1000 is the number you want to see there. If you consistently see a lower number, you should consider reducing your working set or increasing your Buffer Pool.

Better indexes, and no duplicate ones, can help reduce the working set. In the sys schema, you can find 2 views that help you target tables with non-optimal indexes (schema_redundant_indexes & schema_unused_indexes). Better queries can also reduce the working set; you can fetch candidates from these two other views: schema_tables_with_full_table_scans & statements_with_full_table_scans.

Temporary Tables

Of course, temporary tables also use memory.

You can track their creation in the global status:

select * from performance_schema.global_status 
where variable_name like '%tmp%tables';
| Created_tmp_disk_tables | 0              |
| Created_tmp_tables      | 4903           |

You can see that some temporary tables were created in memory. You can find the statements creating them by querying the view statements_with_temp_tables.

We can monitor the amount of RAM allocated for temporary tables, as it is reported under the event memory/temptable/physical_ram:

select * from sys.memory_global_by_current_bytes 
where event_name like '%temp%'\G
*************************** 1. row ***************************
       event_name: memory/temptable/physical_ram
    current_count: 7
    current_alloc: 7.00 MiB
current_avg_alloc: 1.00 MiB
       high_count: 10
       high_alloc: 10.00 MiB
   high_avg_alloc: 1.00 MiB

To limit the amount of RAM used by a single temporary table (the number of temporary tables itself is not limited), some configuration variables can be used. Of course, it depends on the temporary table engine used: before MySQL 8.0, only the MEMORY engine was available; in 8.0, the new TempTable engine is available and used by default.

With the MEMORY engine, the maximum size of an in-memory table is limited by the lower of these two variables: max_heap_table_size and tmp_table_size.

Once that size is exceeded (or the table contains column types the engine does not support), the temporary table is written to disk.
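A tiny sketch of how the two limits combine for the MEMORY engine, with hypothetical variable values in bytes:

```python
def memory_engine_limit(max_heap_table_size: int, tmp_table_size: int) -> int:
    """MEMORY engine: an in-memory temp table may not exceed the lower of the two."""
    return min(max_heap_table_size, tmp_table_size)

# Hypothetical settings: 64 MiB max_heap_table_size, 16 MiB tmp_table_size
print(memory_engine_limit(64 * 1024**2, 16 * 1024**2))  # → 16777216 (16 MiB)
```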

For the TempTable engine, which supports VARCHAR and VARBINARY columns, as well as other binary large object types (as of MySQL 8.0.13), the variable temptable_max_ram limits the amount of RAM the engine can use before spilling to disk.

The engines used for internal temporary tables are defined by these variables:

select * from performance_schema.global_variables 
where variable_name like 'internal_tmp%';
| VARIABLE_NAME                    | VARIABLE_VALUE |
| internal_tmp_disk_storage_engine | InnoDB         |
| internal_tmp_mem_storage_engine  | TempTable      |

A small tip: it's possible to see temporary tables that were just deleted on disk from the OS using lsof:

# lsof -p $(pidof mysqld) | grep -i del
mysqld  17275 mysql    5u   REG                8,1      4183   131170 /tmp/ibQdvz8e (deleted)
mysqld  17275 mysql    6u   REG                8,1         0   131180 /tmp/ibO15nJp (deleted)
mysqld  17275 mysql    7u   REG                8,1         0   131181 /tmp/ibqDRckA (deleted)
mysqld  17275 mysql    8u   REG                8,1         0   131182 /tmp/ibcb3V3Z (deleted)
mysqld  17275 mysql   12u   REG                8,1         0   131183 /tmp/ibzlww6g (deleted)

We can see that only one temp table contains some records, as it is a bit larger than 4KB.


And finally, there are caches and buffers that use some amount of memory. Some are global and easy to identify, like key_buffer_size. What is important to know is how much memory a single user can consume, and whether max_user_connections should be reduced.

For example, every session will have its own read_buffer_size, read_rnd_buffer_size, sort_buffer_size, join_buffer_size, thread_stack, max_allowed_packet, net_buffer_length, ...
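To get a feeling for why oversizing these buffers matters, here is a rough worst-case sketch: each of them can potentially be allocated per session (some even several times per query, e.g. one join buffer per join), so multiplying by the connection count gives a pessimistic upper bound. The buffer values below are hypothetical:

```python
def worst_case_session_memory(buffers: dict, max_connections: int) -> int:
    """Pessimistic upper bound: every session allocates every buffer once."""
    return sum(buffers.values()) * max_connections

# Hypothetical per-session buffer sizes, in bytes
session_buffers = {
    "read_buffer_size":     128 * 1024,
    "read_rnd_buffer_size": 256 * 1024,
    "sort_buffer_size":     256 * 1024,
    "join_buffer_size":     256 * 1024,
    "thread_stack":         1024 * 1024,
}

# 200 concurrent connections
print(worst_case_session_memory(session_buffers, 200))
# → 393216000 bytes (~375 MiB) on top of the global buffers
```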

The following statement is also very interesting to get an idea of the memory allocated per user. That's also why I encourage having a different user per application.

select * from sys.memory_by_user_by_current_bytes where user in ('lefred','root')\G
*************************** 1. row ***************************
              user: lefred
current_count_used: 19402
 current_allocated: 69.05 MiB
 current_avg_alloc: 3.64 KiB
 current_max_alloc: 42.06 MiB
   total_allocated: 3.78 GiB
*************************** 2. row ***************************
              user: root
current_count_used: 649
 current_allocated: 2.78 MiB
 current_avg_alloc: 4.38 KiB
 current_max_alloc: 1.10 MiB
   total_allocated: 244.42 MiB

I hope you now understand a bit better how MySQL handles memory, why it's not recommended to oversize session buffers, and that a new engine is available for internal temporary tables (TempTable).

Don’t hesitate to share your tips too 😉
