Migrate Ownership of your stored routines, views and triggers for MySQL in 2021

Yesterday my friend and ex-colleague Sveta published a blog post on how to migrate ownership of stored routines, views and triggers in MySQL.

I usually agree a lot with her, and even though I agree with what she wrote, there is one single point I would like to comment on: use MySQL Shell!

In the blog post, Sveta recommends the use of mysqlpump, which I don't. For any logical dump & load operation, I really recommend MySQL Shell's Dump & Load Utility! It's much faster, it has many nice options like hidden Primary Key creation, and it is compatible with OCI.
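
For example (a minimal sketch, not taken from Sveta's post: the schema name and target path are just placeholders), a dump prepared for MySQL Database Service on OCI can combine the ocimds check with the create_invisible_pks compatibility flag that adds invisible Primary Keys to tables lacking one:

JS  util.dumpSchemas(['mydb'], '/tmp/mydb_dump',
        {ocimds: true, compatibility: ["create_invisible_pks"]})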

And of course, MySQL Shell also provides a solution for the problem exposed in the blog post.

Let’s see MySQL Shell in action with the exact same example:

First, using my own user (fred), I create a database and a view:

mysql> create database definers;
mysql> use definers;
mysql> CREATE VIEW large_tables AS 
       SELECT * FROM information_schema.tables 
       WHERE DATA_LENGTH > 100000000;
mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME 
       FROM information_schema.views 
       WHERE TABLE_NAME='large_tables';
 +---------+--------------+--------------+
 | DEFINER | TABLE_SCHEMA | TABLE_NAME   |
 +---------+--------------+--------------+
 | fred@%  | definers     | large_tables |
 +---------+--------------+--------------+

Now I create the production user and, for obvious security reasons, I assign a password to it:

mysql> CREATE USER production identified by '********';
mysql> GRANT ALL ON definers.* TO production@'%';
mysql> GRANT SESSION_VARIABLES_ADMIN ON *.* TO production@'%';
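
If you want to double-check the privileges of the new user before the migration (this step is not part of the original example), a simple SHOW GRANTS does the job:

mysql> SHOW GRANTS FOR production@'%';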

We can now dump the definers schema using MySQL Shell, connecting as fred:

[fred@imac ~] $ mysqlsh fred@localhost
MySQL   localhost:33060+   2021-07-08 07:41:47  
JS  util.dumpSchemas(['definers'], '/tmp/dump')
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing DDL for schema definers
Writing DDL for view definers.large_tables
?% (0 rows / ?), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Duration: 00:00:00s                                                     
Schemas dumped: 1                                                       
Tables dumped: 0                                                        
Uncompressed data size: 0 bytes                                         
Compressed data size: 0 bytes                                           
Compression ratio: 0.0                                                  
Rows written: 0                                                         
Bytes written: 0 bytes                                                  
Average uncompressed throughput: 0.00 B/s                               
Average compressed throughput: 0.00 B/s 
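
The dump is not a single file but a directory containing the metadata, DDL and (compressed) data files. A quick listing shows what was generated (the exact file names depend on the MySQL Shell version):

[fred@imac ~] $ ls -l /tmp/dump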

And now I will load it using the production user. If you load it on the same server, as I do here, the view needs to be removed first.
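
As I am reloading on the very same instance, I first drop the existing view (still connected as fred):

mysql> DROP VIEW definers.large_tables;

With the view removed, I connect as production and run the load: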

[fred@imac ~] $ mysqlsh production@localhost
MySQL  localhost:33060+   2021-07-08 07:42:53 
JS  util.loadDump('/tmp/dump')
Loading DDL and Data from '/tmp/dump' using 4 threads.
Opening dump…
Target is MySQL 8.0.25. Dump was produced from MySQL 8.0.25
Checking for pre-existing objects…
Executing common preamble SQL
Executing DDL script for schema definers
[Worker002] Executing DDL script for definers.large_tables (placeholder for view)
Executing DDL script for view definers.large_tables
ERROR: Error executing DDL script for view definers.large_tables: MySQL Error 1227 (42000): Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation: /*!50001 CREATE ALGORITHM=UNDEFINED DEFINER=fred@% SQL SECURITY DEFINER VIEW large_tables AS select information_schema.tables.TABLE_CATALOG AS TABLE_CATALOG,information_schema.tables.TABLE_SCHEMA AS TABLE_SCHEMA,information_schema.tables.TABLE_NAME AS TABLE_NAME,information_schema.tables.TABLE_TYPE AS TABLE_TYPE,information_schema.tables.ENGINE AS ENGINE,information_schema.tables.VERSION AS VERSION,information_schema.tables.ROW_FORMAT AS ROW_FORMAT,information_schema.tables.TABLE_ROWS AS TABLE_ROWS,information_schema.tables.AVG_ROW_LENGTH AS AVG_ROW_LENGTH,information_schema.tables.DATA_LENGTH AS DATA_LENGTH,information_schema.tables.MAX_DATA_LENGTH AS MAX_DATA_LENGTH,information_schema.tables.INDEX_LENGTH AS INDEX_LENGTH,information_schema.tables.DATA_FREE AS DATA_FREE,information_schema.tables.AUTO_INCREMENT AS AUTO_INCREMENT,information_schema.tables.CREATE_TIME AS CREATE_TIME,information_schema.tables.UPDATE_TIME AS UPDATE_TIME,information_schema.tables.CHECK_TIME AS CHECK_TIME,information_schema.tables.TABLE_COLLATION AS TABLE_COLLATION,information_schema.tables.CHECKSUM AS CHECKSUM,information_schema.tables.CREATE_OPTIONS AS CREATE_OPTIONS,information_schema.tables.TABLE_COMMENT AS TABLE_COMMENT from information_schema.TABLES tables where (information_schema.tables.DATA_LENGTH > 100000000) */
Util.loadDump: Access denied; you need (at least one of) the SUPER or SET_USER_ID privilege(s) for this operation (MYSQLSH 1227)

As you can see, it failed in exactly the same way as with mysqldump, and this is because, by default, the strip_definers compatibility option is not enabled.
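
A small note before re-dumping: if I remember correctly, the dump utility refuses to write into an existing directory that is not empty, so I first remove the previous dump:

[fred@imac ~] $ rm -rf /tmp/dump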

Now we will do the dump again enabling the option this time:

JS  util.dumpSchemas(['definers'], '/tmp/dump', {compatibility:["strip_definers"]})
Acquiring global read lock
Global read lock acquired
Gathering information - done
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing DDL for schema definers
Writing DDL for view definers.large_tables
NOTE: View definers.large_tables had definer clause removed and SQL SECURITY characteristic set to INVOKER
 ?% (0 rows / ?), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Duration: 00:00:00s                                                     
Schemas dumped: 1                                                       
Tables dumped: 0                                                        
Uncompressed data size: 0 bytes                                         
Compressed data size: 0 bytes                                           
Compression ratio: 0.0                                                  
Rows written: 0                                                         
Bytes written: 0 bytes                                                  
Average uncompressed throughput: 0.00 B/s                               
Average compressed throughput: 0.00 B/s  

We can see that the load of this new dump now works as expected:

JS  util.loadDump('/tmp/dump')
Loading DDL and Data from '/tmp/dump' using 4 threads.
Opening dump…
Target is MySQL 8.0.25. Dump was produced from MySQL 8.0.25
Checking for pre-existing objects…
Executing common preamble SQL
Executing DDL script for schema definers
[Worker002] Executing DDL script for definers.large_tables (placeholder for view)
Executing DDL script for view definers.large_tables
Executing common postamble SQL               
No data loaded.
0 warnings were reported during the load.

And if we now check the definer of the loaded view:

mysql> SELECT DEFINER, TABLE_SCHEMA, TABLE_NAME 
       FROM information_schema.views 
       WHERE TABLE_NAME='large_tables';
 +--------------+--------------+--------------+
 | DEFINER      | TABLE_SCHEMA | TABLE_NAME   |
 +--------------+--------------+--------------+
 | production@% | definers     | large_tables |
 +--------------+--------------+--------------+
 1 row in set (0.00 sec)
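
This example schema only contains a view, but the same kind of verification works for stored routines and triggers, whose DEFINER clause is also removed by strip_definers. For example:

mysql> SELECT DEFINER, ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
       FROM information_schema.routines
       WHERE ROUTINE_SCHEMA='definers';
mysql> SELECT DEFINER, TRIGGER_SCHEMA, TRIGGER_NAME
       FROM information_schema.triggers
       WHERE TRIGGER_SCHEMA='definers';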

As you can see, the MySQL Shell Dump & Load Utility is the way to go for any logical dump and load: faster and more powerful!

Thank you for using MySQL !

