MySQL Group Replication and logical backup


Taking a logical backup of a member of a Group Replication cluster is not straightforward.

Currently (5.7.17, 5.7.18 or 8.0.0), if you want to use mysqldump to take a logical backup of your dataset, you need to lock all the tables on the member you are taking the dump from. Indeed, a single transaction can't be used because savepoints are not compatible with Group Replication:

[root@mysql3 ~]# mysqldump -p  --single-transaction --all-databases --triggers \
                      --routines --events >dump.sql
Enter password:
mysqldump: Couldn't execute 'SAVEPOINT sp': The MySQL server is running with the 
--transaction-write-set-extraction!=OFF option so it cannot execute this 
statement (1290)

So we need to use:

[root@mysql3 ~]# mysqldump -p  --lock-all-tables --all-databases --triggers \
                      --routines --events >dump.sql
Enter password:

This can have a negative effect on the performance of the whole group: the member holding all the table locks may start reporting statistics that reach the flow control threshold. Currently there is no way to tell the group to ignore the statistics of a given node.
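If you want to see whether the locked member is close to triggering flow control, you can inspect the Group Replication flow control settings and the member statistics. A sketch (variable and column names are the standard ones in recent MySQL versions; check the documentation for your release):

```sql
-- Show the flow control mode and the applier/certifier thresholds on this member
SHOW GLOBAL VARIABLES LIKE 'group_replication_flow_control%';

-- Check the size of each member's queues, which flow control compares
-- against those thresholds
SELECT MEMBER_ID, COUNT_TRANSACTIONS_IN_QUEUE
  FROM performance_schema.replication_group_member_stats;
```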

The replication development team was of course aware of this problem (reported in bug 81494) and decided to support savepoints with group replication too.

Anibal blogged yesterday about this new improvement.

So with MySQL 8.0.1, we can now take a logical backup using a single transaction:

[root@mysql1 ~]# mysql -p -e "select @@version";
Enter password:
+---------------+
| @@version     |
+---------------+
| 8.0.1-dmr-log |
+---------------+
[root@mysql1 ~]# mysqldump -p  --single-transaction --all-databases --triggers \
                 --routines --events >dump.sql
Enter password:

Wooohoo, it works! Good job, replication team! Savepoints can now be considered as NOT being a limitation anymore for MySQL Group Replication!

3 thoughts on “MySQL Group Replication and logical backup”

  1. Hi Daniel,
    I have a small doubt about replication. On the slave server I am getting the error below:

    Slave SQL: Error ‘Duplicate entry ‘914166’ for key
    ‘PRIMARY” on query. Default database: ‘zo_dev_20121216’. Query: ‘UPDATE

    How can I fix this error permanently?

    1) Is setting the slave skip error parameter to 1062 a fix?
    2) Or can I make the slave read-only and revoke SUPER privileges from users on the slave? Will that fix my issue?
    3) Or is there any other permanent solution for this?

    1. Hi Karthick,

      I would first find out where the problem comes from. I would certainly never skip errors in the replication stream as you suggest in point 1.

      If you suspect that somebody is writing directly to the slave, you can parse the slave's binary logs to find when and by whom that record was written.
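      A quick sketch of that binlog search (the binlog file name is a placeholder; 914166 is the duplicate key value from the error):

      ```shell
      # Decode row events in the slave's binary log and look for
      # writes involving the conflicting key value
      mysqlbinlog --verbose --base64-output=decode-rows mysql-bin.000042 \
        | grep -B 5 "914166"
      ```

      Running the slave with log_slave_updates disabled will only show local writes there, which makes the culprit easier to spot.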
