In this final part of the MySQL Shell Configuration Management series, we explore how to automate performance tuning, backups and recovery using MySQL Shell. This guide introduces a complete system that captures not just data, but also configuration and performance context — enabling intelligent restores and optimized deployments across diverse environments.
We’re going to focus on a commonly overlooked aspect of database lifecycle management (DLM): creating versioned, performance-aware database copies for development and testing.
These copies aren’t just backups — they capture configuration and performance context, so that developers and testers can spin up environments with confidence, even on unfamiliar or scaled-down hardware. To illustrate the point, we’ll share scripts that can provide a foundation for a system that captures not just your data but the entire context of your database environment, including configuration snapshots and performance baselines.
It’s the Recovery process that best displays the advantage of a DLM approach that preserves the whole database context. Instead of the naïve optimism of restoring a backup with its original configuration, we’ll show how to build recovery procedures that adapt intelligently to the target environment.
Imagine, for example, restoring a database from a server with 32GB of RAM to one with 64GB. Our system will automatically adjust buffer pool sizes, connection limits, and other parameters to match the capabilities of the new environment.
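The core of that adjustment is simple proportional scaling with a safety cap. As a rough sketch (a hypothetical helper, not one of the series' scripts — the 75% cap mirrors the heuristic used later in `intelligentRestore()`):

```javascript
// Hypothetical sketch: scale a buffer pool recommendation to the target host's RAM.
// Keeps the same fraction of total memory the source server used, but never
// exceeds 75% of the target's RAM. Names here are illustrative.
function scaleBufferPoolGB(sourceBufferPoolGB, sourceRamGB, targetRamGB) {
    // Fraction of memory the source dedicated to the buffer pool...
    var fraction = sourceBufferPoolGB / sourceRamGB;
    // ...applied to the target, capped at 75% of its total memory.
    return Math.min(fraction * targetRamGB, targetRamGB * 0.75);
}

// Moving a 24 GB buffer pool from a 32 GB host to a 64 GB host:
console.log(scaleBufferPoolGB(24, 32, 64)); // 48
```

The full restore script below refines this further by also considering the actual on-disk size of the databases being restored.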
Finally, we’ll bring everything together into an automation script that runs continuously, monitoring performance, making adjustments, generating reports, and maintaining backups. By the end of this article, you’ll have a complete automation system that can manage your MySQL servers from initial configuration through ongoing optimization, backup, and recovery.
As well as providing resilience and accurate recovery, this will make it far easier to create reliable new instances of the database for development work in a variety of server environments. It also brings together in a single system everything we’ve learned throughout this series on MySQL Configuration Management.
Every script used in this article can be found in this GitHub repo.
Automating Backups with Configuration Tracking in MySQL Shell
Backups are an important part of database management. Let’s look at a script that not only backs up your database but also records your configuration and performance data alongside the backup. When we later restore or copy the database, this extra information helps us both to calculate suitable configuration parameters and to test the result, so we can be sure the installation performs as we expect.
The function begins by creating a timestamp-based directory name for the backup, announcing the start of the backup process, creating the backup directory, and using MySQL Shell’s built-in `util.dumpInstance()` function to create a consistent backup of all databases:
```javascript
function backupWithConfig() {
    var session = shell.getSession();
    var timestamp = new Date().toISOString().replace(/[:.]/g, '-');
    var backupDir = '/tmp/mysql_backup_' + timestamp;

    print("Starting backup with configuration tracking...");
    print("Backup directory: " + backupDir);

    // Create backup directory
    shell.run('mkdir -p ' + backupDir);

    // Backup the database using util.dumpInstance
    util.dumpInstance(backupDir + '/data', {
        consistent: true,
        compression: "gzip"
    });
```
The `consistent: true` option ensures that the backup is transactionally consistent, which is essential for InnoDB tables.
Next, the function creates a file to store the current MySQL configuration, defines a list of important configuration parameters to export, writes a header with the timestamp, queries MySQL for the current value of each parameter, and writes each parameter and its value to the file. This configuration snapshot is invaluable for troubleshooting or when restoring to a different server as it provides a reference point for the optimal configuration at the time of the backup.
```javascript
    // Export current configuration
    var configFile = backupDir + '/mysql_config.txt';
    var configFileHandle = shell.openFile(configFile, 'w');

    var configParams = [
        'innodb_buffer_pool_size',
        'innodb_buffer_pool_instances',
        'innodb_log_file_size',
        'innodb_flush_log_at_trx_commit',
        'innodb_flush_method',
        'innodb_io_capacity',
        'innodb_io_capacity_max',
        'table_open_cache',
        'table_definition_cache',
        'max_connections',
        'thread_cache_size',
        'join_buffer_size',
        'sort_buffer_size',
        'read_buffer_size',
        'read_rnd_buffer_size',
        'key_buffer_size'
    ];

    configFileHandle.write("MySQL Configuration Backup - " + timestamp + "\n");
    configFileHandle.write("=".repeat(50) + "\n\n");

    for (var i = 0; i < configParams.length; i++) {
        var param = configParams[i];
        var result = session.runSql("SELECT @@GLOBAL." + param);
        var value = result.fetchOne()[0];
        configFileHandle.write(param + " = " + value + "\n");
    }

    configFileHandle.close();
```
The function then creates a CSV file to store historical performance metrics, writes a header row with column names, queries the monitoring database for all metrics, and writes each row of metrics to the CSV file. Exporting this historical data allows you to analyze performance trends even if the original monitoring database is lost or corrupted.
```javascript
    // Export performance metrics history
    var metricsFile = backupDir + '/performance_metrics.csv';
    var metricsFileHandle = shell.openFile(metricsFile, 'w');

    metricsFileHandle.write("timestamp,buffer_pool_usage,threads_connected,threads_running,slow_queries,table_open_cache_hits,table_open_cache_misses\n");

    var metricsResult = session.runSql(`
        SELECT
            timestamp,
            buffer_pool_usage,
            threads_connected,
            threads_running,
            slow_queries,
            table_open_cache_hits,
            table_open_cache_misses
        FROM performance_monitoring.mysql_metrics
        ORDER BY timestamp
    `);

    var row;
    while ((row = metricsResult.fetchOne())) {
        metricsFileHandle.write(
            row[0] + "," + row[1] + "," + row[2] + "," +
            row[3] + "," + row[4] + "," + row[5] + "," + row[6] + "\n"
        );
    }

    metricsFileHandle.close();
```
Next, the function creates another CSV file to store configuration change history, exports all configuration changes from the monitoring database, creates yet another CSV file to store database size information, queries the information_schema to calculate the size of each database, and writes the database names and sizes to the CSV file. The database size information is particularly useful for capacity planning and for determining appropriate buffer pool sizes during restore operations.
```javascript
    // Export configuration changes history
    var changesFile = backupDir + '/config_changes.csv';
    var changesFileHandle = shell.openFile(changesFile, 'w');

    changesFileHandle.write("timestamp,parameter_name,old_value,new_value,change_reason\n");

    var changesResult = session.runSql(`
        SELECT
            timestamp,
            parameter_name,
            old_value,
            new_value,
            change_reason
        FROM performance_monitoring.config_changes
        ORDER BY timestamp
    `);

    while ((row = changesResult.fetchOne())) {
        changesFileHandle.write(
            row[0] + "," + row[1] + "," + row[2] + "," +
            row[3] + "," + row[4].replace(",", ";") + "\n"
        );
    }

    changesFileHandle.close();

    // Export database size information
    var sizeFile = backupDir + '/database_sizes.csv';
    var sizeFileHandle = shell.openFile(sizeFile, 'w');

    sizeFileHandle.write("database_name,size_mb\n");

    var sizeResult = session.runSql(`
        SELECT
            table_schema AS database_name,
            ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
        FROM information_schema.tables
        GROUP BY table_schema
        ORDER BY size_mb DESC
    `);

    while ((row = sizeResult.fetchOne())) {
        sizeFileHandle.write(row[0] + "," + row[1] + "\n");
    }

    sizeFileHandle.close();
```
Finally, the function reports the successful completion of the backup, lists all the files that were created, and returns the backup directory path for potential further processing. This backup includes not just the database data but also sufficient configuration and performance information to help with routine troubleshooting and optimization.
```javascript
    // Summary
    print("Backup completed successfully!");
    print("Database data: " + backupDir + '/data');
    print("Configuration: " + backupDir + '/mysql_config.txt');
    print("Performance metrics: " + backupDir + '/performance_metrics.csv');
    print("Configuration changes: " + backupDir + '/config_changes.csv');
    print("Database sizes: " + backupDir + '/database_sizes.csv');

    return backupDir;
}

backupWithConfig();
```
This enriched backup forms the basis for intelligent restores and meaningful performance comparisons.
Recovery with Configuration Analysis
When restoring a database, it’s not always appropriate to use the same configuration as before. Let’s look at a script that restores a database and adjusts its configuration to the current environment.
This function begins by announcing the start of the intelligent restore process. It then opens and reads the configuration file from the backup, parses it to extract parameter names and values, and displays key configuration values from the backup. This analysis provides a baseline for comparison with the current server’s capabilities.
```javascript
function intelligentRestore(backupDir) {
    var session = shell.getSession(); // needed later when applying configuration

    print("Starting intelligent restore from: " + backupDir);

    // Read backup configuration file
    var configFile = backupDir + '/mysql_config.txt';
    var configContent = shell.readFile(configFile);
    var configLines = configContent.split('\n');
    var backupConfig = {};

    for (var i = 0; i < configLines.length; i++) {
        var line = configLines[i].trim();
        if (line.indexOf(' = ') > 0) {
            var parts = line.split(' = ');
            backupConfig[parts[0]] = parts[1];
        }
    }

    print("Backup configuration analyzed:");
    print("- Buffer pool size: " + (parseInt(backupConfig.innodb_buffer_pool_size) / (1024 * 1024 * 1024)).toFixed(2) + " GB");
    print("- Max connections: " + backupConfig.max_connections);
    print("- Table open cache: " + backupConfig.table_open_cache);
```
Next, the function reads and parses the database size information from the backup, calculates the total size of all databases, determines the current server’s capabilities, and displays the total database size and server capabilities. Understanding both the database size and server capabilities is essential for calculating appropriate configuration values.
```javascript
    // Analyze database size
    var sizeFile = backupDir + '/database_sizes.csv';
    var sizeContent = shell.readFile(sizeFile);
    var sizeLines = sizeContent.split('\n');
    var totalDatabaseSizeMB = 0;

    for (var i = 1; i < sizeLines.length; i++) {
        var line = sizeLines[i].trim();
        if (line) {
            var parts = line.split(',');
            totalDatabaseSizeMB += parseFloat(parts[1]);
        }
    }

    print("Total database size: " + totalDatabaseSizeMB.toFixed(2) + " MB (" + (totalDatabaseSizeMB / 1024).toFixed(2) + " GB)");

    // Simulated server capabilities
    var currentServerMemoryGB = 16;
    var currentServerCores = 8;
    var currentServerDiskType = "SSD";

    print("\nCurrent server capabilities:");
    print("- Memory: " + currentServerMemoryGB + " GB");
    print("- CPU cores: " + currentServerCores);
    print("- Disk type: " + currentServerDiskType);
```
Here, it calculates optimal MySQL configuration settings based on the server’s capabilities and database size, and recommends a buffer pool size large enough to hold the entire database but capped at 75% of total memory. It also determines the number of buffer pool instances, suitable I/O capacity based on disk type, and max connections using both memory and CPU-based heuristics.
Additionally, it sets values for thread cache size (25% of max connections) and table open cache (based on estimated number of tables). These values are printed out for the user to review before proceeding.
```javascript
    // Recommend configuration
    var recommendedConfig = {};
    var databaseSizeGB = totalDatabaseSizeMB / 1024;

    var recommendedBufferPoolGB = Math.min(
        currentServerMemoryGB * 0.75,
        Math.max(databaseSizeGB * 1.2, 1)
    );
    recommendedConfig.innodb_buffer_pool_size = Math.round(recommendedBufferPoolGB * 1024 * 1024 * 1024);
    recommendedConfig.innodb_buffer_pool_instances = Math.min(Math.max(Math.round(recommendedBufferPoolGB), 1), 8);

    if (currentServerDiskType === "SSD") {
        recommendedConfig.innodb_io_capacity = 2000;
        recommendedConfig.innodb_io_capacity_max = 4000;
    } else {
        recommendedConfig.innodb_io_capacity = 200;
        recommendedConfig.innodb_io_capacity_max = 400;
    }

    var connectionMemoryMB = 10;
    var maxConnectionsFromMemory = Math.floor((currentServerMemoryGB * 1024 * 0.2) / connectionMemoryMB);
    var maxConnectionsFromCPU = currentServerCores * 25;
    recommendedConfig.max_connections = Math.min(maxConnectionsFromMemory, maxConnectionsFromCPU);

    recommendedConfig.thread_cache_size = Math.round(recommendedConfig.max_connections * 0.25);

    var tableEstimate = totalDatabaseSizeMB / 10;
    recommendedConfig.table_open_cache = Math.min(Math.max(Math.round(tableEstimate * 1.5), 400), 2000);

    print("\nRecommended configuration for this server:");
    for (var param in recommendedConfig) {
        print("- " + param + ": " + recommendedConfig[param]);
    }
```
Here, the script compares the recommended configuration with the one used in the backup. It prints a side-by-side comparison table showing each parameter, its value in the backup (if available), the newly calculated value, and the reason for any changes. If a parameter is missing in the backup, it’s marked as such. If the value hasn’t changed, it’s labeled “Same.” Otherwise, the script states that the change was made to align with the current hardware. This step helps users understand how and why the configuration is being tailored for the new environment.
```javascript
    // Comparison table
    print("\nComparison with backup configuration:");
    print("Parameter                | Backup Value | Recommended Value | Reason");
    print("-------------------------|--------------|-------------------|------------------");

    var compareParams = [
        { name: "innodb_buffer_pool_size", format: function(v) { return (parseInt(v) / (1024 * 1024 * 1024)).toFixed(2) + " GB"; } },
        { name: "innodb_buffer_pool_instances", format: String },
        { name: "innodb_io_capacity", format: String },
        { name: "innodb_io_capacity_max", format: String },
        { name: "max_connections", format: String },
        { name: "thread_cache_size", format: String },
        { name: "table_open_cache", format: String }
    ];

    function getReasonForChange(param, oldVal, newVal) {
        if (oldVal === "N/A") return "Missing in backup";
        if (String(oldVal) === String(newVal)) return "Same";
        return "Adjusted for hardware";
    }

    for (var i = 0; i < compareParams.length; i++) {
        var p = compareParams[i];
        var name = p.name;
        var oldVal = backupConfig[name] || "N/A";
        var newVal = recommendedConfig[name];
        var reason = getReasonForChange(name, oldVal, newVal);
        // Avoid formatting "N/A" as a number (which would print "NaN GB")
        var oldDisplay = (oldVal === "N/A") ? "N/A" : p.format(oldVal);
        print(name.padEnd(24) + " | " + oldDisplay.padEnd(12) + " | " + p.format(newVal).padEnd(17) + " | " + reason);
    }
```
Before making any changes or restoring data, the script pauses to get user confirmation. If the user does not explicitly respond with “yes,” the restore process is cancelled. This safeguard ensures the user has full control and can opt out if the recommended changes don’t look suitable. Once the user approves, the script uses `util.loadDump()` to restore the database from the backup directory: the `ignoreVersion` option tolerates MySQL version mismatches between source and target, and `resetProgress` restarts the load from scratch rather than resuming a previous attempt. This method imports all the schema and data files stored during the logical backup, effectively recreating the databases on the new server.
```javascript
    // Confirmation
    var answer = shell.prompt("\nDo you want to proceed with the restore using these settings? (yes/no): ");
    if (answer.toLowerCase() !== "yes") {
        print("Restore cancelled.");
        return false;
    }

    // Restore data
    print("\nRestoring database data...");
    util.loadDump(backupDir + "/data", {
        ignoreVersion: true,
        resetProgress: true
    });
```
Finally, the script applies the recommended configuration settings to the MySQL server using `SET GLOBAL`. Each change is applied one at a time, and the script prints a confirmation message for every parameter successfully set. After this step, the server is both restored and tuned based on current hardware, ensuring optimal performance for the workload.
```javascript
    // Apply config
    print("\nApplying recommended configuration...");
    for (var param in recommendedConfig) {
        session.runSql("SET GLOBAL " + param + " = ?", [recommendedConfig[param]]);
        print("Set " + param + " to " + recommendedConfig[param]);
    }

    print("\nRestore and reconfiguration complete.");
    return true;
}
```
With this configuration analysis complete, we’re no longer restoring a snapshot of the past — we’re provisioning an environment tuned for the present, ready for real testing and confident deployment.
Creating an Automation Script for MySQL Shell
Now that we’ve created individual functions for various aspects of MySQL performance tuning, we can combine them all by creating an automation script that ties everything together.
By leveraging automation through MySQL Shell, you’ll spend less time on routine database maintenance and more time doing what matters. Your database will be more reliable, more performant, and easier to manage, even as it scales to handle larger workloads.
Before You Begin
This code is a foundation for automating MySQL performance tuning. Before it can be run, you will need to fill in the parts that are currently just placeholders. These placeholders are there to show where you should add your code, because every MySQL setup is unique. What you want to monitor, how you want to tune your database, the backups you want to take, and where you want to store those backups will depend on your specific server environment and your personal preferences.
Regarding the backup folder paths and files: when the script mentions something like ‘/path/to/your/backup’, this is just a placeholder for the actual folder on your machine where you want to save or load backups. You need to change this to a folder path that exists on your system. For example, on Windows it might look like C:\\Users\\YourName\\MySQLBackups\\, while on macOS or Linux it might be something like /home/username/mysql_backups/. If you don’t update this path, the script won’t find the files it needs, and backups won’t save correctly.
To prepare your computer or server for this automation, make sure that the folders you want to use for backups or loading configuration files already exist. The script does not create folders automatically, so this step is essential. Also, if your script needs to read files like configuration files or backup files, those files must be in the exact locations you specify in the script. Make sure to update the script to point precisely to those folder paths.
The Key Components of the Automation System
The key components of our automation system that I’ve covered in this series of articles are:
- `setupMonitoringDatabase()`: Creates the necessary database and tables for storing metrics and configuration changes
- `collectMySQLMetrics()`: Gathers performance metrics from MySQL and stores them in the database
- `setupPrometheusEndpoint()`: Exposes MySQL metrics in a format that Prometheus can scrape
- `adjustBufferPoolSize()`: Analyzes buffer pool usage and adjusts its size accordingly
- `analyzePerformanceTrends()`: Generates reports of performance trends over time
- `backupWithConfig()`: Creates backups that include configurations
- `intelligentRestore()`: Restores a database and adapts its configuration to the current environment
- `automatePerformanceTuning()`: Ties everything together into a comprehensive automation solution
Building the Automation Script
This function begins by setting default intervals for monitoring (5 minutes), tuning (1 hour), reporting (1 day), and backup (1 week), but allows these to be customized through a configuration object. It then ensures the monitoring database exists by calling our `setupMonitoringDatabase()` function.
```javascript
function automatePerformanceTuning(config) {
    // Default configuration
    config = config || {};
    config.monitoringInterval = config.monitoringInterval || 300;    // 5 minutes
    config.tuningInterval = config.tuningInterval || 3600;           // 1 hour
    config.reportingInterval = config.reportingInterval || 86400;    // 1 day
    config.backupInterval = config.backupInterval || 604800;         // 1 week

    print("Starting comprehensive MySQL performance automation...");
    print("Monitoring interval: " + config.monitoringInterval + " seconds");
    print("Tuning interval: " + config.tuningInterval + " seconds");
    print("Reporting interval: " + config.reportingInterval + " seconds");
    print("Backup interval: " + config.backupInterval + " seconds");

    // Ensure monitoring database exists
    print("\nSetting up monitoring database...");
    if (!setupMonitoringDatabase()) {
        print("Failed to set up monitoring database. Exiting automation.");
        return;
    }
```
The function initializes timers to track when each task was last performed, and then enters a continuous loop. It checks if it is time to collect metrics based on the monitoring interval and calls our `collectMySQLMetrics()` function if it is.
```javascript
    // Initialize timers
    var lastMonitoringTime = 0;
    var lastTuningTime = 0;
    var lastReportingTime = 0;
    var lastBackupTime = 0;

    // Main automation loop
    print("\nAutomation running. Press Ctrl+C to stop.");
    while (true) {
        var currentTime = Math.floor(Date.now() / 1000);

        // Collect metrics
        if (currentTime - lastMonitoringTime >= config.monitoringInterval) {
            print("\n" + new Date().toISOString() + " - Collecting performance metrics...");
            collectMySQLMetrics();
            lastMonitoringTime = currentTime;
        }
```
The function then checks if it’s time to perform tuning based on the tuning interval. If it is, it calls our `adjustBufferPoolSize()` and `adjustTableOpenCache()` functions. If any configuration changes are made, it immediately collects new metrics to measure the impact of those changes.
```javascript
        // Perform tuning
        if (currentTime - lastTuningTime >= config.tuningInterval) {
            print("\n" + new Date().toISOString() + " - Performing performance tuning...");
            var bufferPoolChanged = adjustBufferPoolSize();
            var tableCacheChanged = adjustTableOpenCache();

            if (bufferPoolChanged || tableCacheChanged) {
                print("Configuration changes were made. Collecting new metrics...");
                collectMySQLMetrics();
            }

            lastTuningTime = currentTime;
        }
```
Finally, on each pass through the loop, the function makes two further interval checks and performs the corresponding action for each:
- If it’s time to generate a report based on the reporting interval, it calls our `analyzePerformanceTrends()` function.
- If it’s time to perform a backup based on the backup interval, it calls our `backupWithConfig()` function.
The function then sleeps briefly to avoid high CPU usage. This allows the automation script to provide a solid foundation for keeping your MySQL server optimized around the clock.
```javascript
        // Generate reports
        if (currentTime - lastReportingTime >= config.reportingInterval) {
            print("\n" + new Date().toISOString() + " - Generating performance report...");
            analyzePerformanceTrends(7); // Past 7 days
            lastReportingTime = currentTime;
        }

        // Perform backup
        if (currentTime - lastBackupTime >= config.backupInterval) {
            print("\n" + new Date().toISOString() + " - Performing backup with configuration tracking...");
            backupWithConfig();
            lastBackupTime = currentTime;
        }

        // Sleep for 1 second to avoid high CPU usage
        shell.wait(1);
    }
}
```
You can start the automation with default settings:
```javascript
// Start automation with default settings
automatePerformanceTuning();
```

Or customize the intervals:

```javascript
// Start automation with custom intervals
automatePerformanceTuning({
    monitoringInterval: 60,     // Collect metrics every minute
    tuningInterval: 1800,       // Tune every 30 minutes
    reportingInterval: 43200,   // Generate reports every 12 hours
    backupInterval: 259200      // Backup every 3 days
});
```
When executed, the automation script will produce continuous output as it collects metrics, performs tuning, generates reports, and creates backups.
Recommended Articles on MySQL Performance
If you’re interested in learning more about MySQL memory management, performance tuning, and best practices for optimizing the InnoDB buffer pool, the following resources are great places to dive deeper:
Conclusion and Next Steps
I hope that the techniques that I’ve explained in this series serve you well as you build and maintain high-performance database environments that can scale with your growing needs.
The series has taken you from the basics of MySQL Shell all the way to automation techniques. The foundation we’ve built is extensible and adaptable: you can add new metrics, implement additional optimization algorithms, integrate with other tools in your infrastructure, or customize the decision-making logic to match your specific requirements.
In this final article, we’ve created a complete automation system for monitoring, analyzing, and optimizing MySQL performance using MySQL Shell. This system collects performance metrics at regular intervals, stores them in a dedicated database, analyzes them to identify optimization opportunities, automatically adjusts MySQL configuration based on actual usage patterns, and provides tools for analyzing performance trends.
By implementing this automation system, you can ensure that your MySQL server remains optimized even as conditions change. The system continuously monitors performance, makes data-driven decisions about configuration changes, and provides valuable insights into performance trends over time. By writing scripts like the ones we’ve demonstrated, you can automate nearly every aspect of MySQL performance tuning and management. This automation not only saves time but also ensures that your MySQL servers remain optimized even as conditions change.
Remember that this is just a starting point. You can extend the system to monitor and optimize additional MySQL parameters, integrate with external monitoring tools like Prometheus and Grafana, or add more analysis and decision-making logic. Of course, automation isn’t something you set once and forget. As your environment evolves, your scripts will need to evolve too. The examples we’ve covered are meant to be a starting point – a solid foundation you can build on to create a performance tuning system made to your specific needs.