MySQL Shell Basic Configuration Management (Part 6 – Backups and Recovery Procedures)

In this final part of the MySQL Shell Configuration Management series, we explore how to automate performance tuning, backups and recovery using MySQL Shell. This guide introduces a complete system that captures not just data, but also configuration and performance context — enabling intelligent restores and optimized deployments across diverse environments.

We’re going to focus on a commonly overlooked aspect of database lifecycle management (DLM): creating versioned, performance-aware database copies for development and testing.

These copies aren’t just backups — they capture configuration and performance context, so that developers and testers can spin up environments with confidence, even on unfamiliar or scaled-down hardware. To illustrate the point, we’ll share scripts that can provide a foundation for a system that captures not just your data but the entire context of your database environment, including configuration snapshots and performance baselines.

It’s the recovery process that best demonstrates the advantage of a DLM approach that preserves the whole database context. Instead of the naïve optimism of restoring a backup with its original configuration, we’ll show how to build recovery procedures that adapt intelligently to the target environment.

Imagine, for example, restoring a database from a server with 32GB of RAM to one with 64GB. Our system will automatically adjust buffer pool sizes, connection limits, and other parameters to match the capabilities of the new environment.

Finally, we’ll bring everything together into an automation script that runs continuously, monitoring performance, making adjustments, generating reports, and maintaining backups. By the end of this article, you’ll have a complete automation system that can manage your MySQL servers from initial configuration through ongoing optimization, backup, and recovery.

As well as providing resilience and accurate recovery, this will make it far easier to create reliable new instances of the database for development work in a variety of server environments. It also brings together in a single system everything we’ve learned throughout this series on MySQL Configuration Management.

Every script used in this article can be found in this GitHub repo.

Automating Backups with Configuration Tracking in MySQL Shell

Backups are an important part of database management. Let’s look at a script that not only backs up your database but also records your configuration and performance data alongside the backup. When we restore or copy the database, we will then have not only more information with which to calculate the configuration parameters, but also the means to test the result and make sure we have an installation that performs as we expect.

The function begins by creating a timestamp-based directory name for the backup, announcing the start of the backup process, creating the backup directory, and using MySQL Shell’s built-in `util.dumpInstance()` function to create a consistent backup of all databases:
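The article’s scripts run inside MySQL Shell; as an illustration, here is a minimal Python-mode sketch of that step. The `dump_instance` callable is injected so the logic can be exercised outside a live session — inside MySQL Shell you would pass `util.dumpInstance` — and the directory-naming scheme is an assumption:

```python
import os
from datetime import datetime

def make_backup_dir(base_path, now=None):
    """Build a timestamp-based directory name, e.g. backup_2024-05-01_12-30-00."""
    now = now or datetime.now()
    return os.path.join(base_path, "backup_" + now.strftime("%Y-%m-%d_%H-%M-%S"))

def backup_with_config(base_path, dump_instance, now=None):
    """Create the backup directory and run a consistent dump of all schemas.

    `dump_instance` is injected to keep the sketch testable; in MySQL
    Shell it would be util.dumpInstance.
    """
    backup_dir = make_backup_dir(base_path, now)
    print("Starting backup to", backup_dir)
    os.makedirs(backup_dir, exist_ok=True)
    # Transactionally consistent dump of all databases.
    dump_instance(backup_dir, {"consistent": True, "threads": 4})
    return backup_dir
```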

The `consistent: true` option ensures that the backup is transactionally consistent, which is essential for InnoDB tables.

Next, the function creates a file to store the current MySQL configuration, defines a list of important configuration parameters to export, writes a header with the timestamp, queries MySQL for the current value of each parameter, and writes each parameter and its value to the file. This configuration snapshot is invaluable for troubleshooting or when restoring to a different server as it provides a reference point for the optimal configuration at the time of the backup.
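A sketch of the snapshot writer, with the variable lookup injected as a callable (in MySQL Shell it would wrap a `SHOW GLOBAL VARIABLES` query via `session.runSql`); the parameter list and file format here are illustrative assumptions:

```python
KEY_PARAMETERS = [
    "innodb_buffer_pool_size", "innodb_buffer_pool_instances",
    "innodb_io_capacity", "max_connections",
    "thread_cache_size", "table_open_cache",
]

def export_config_snapshot(path, get_variable, timestamp):
    """Write each important parameter and its current value to a snapshot file.

    `get_variable` is injected; in MySQL Shell it would query
    SHOW GLOBAL VARIABLES for the named parameter.
    """
    with open(path, "w") as f:
        # Header records when the snapshot was taken.
        f.write("# MySQL configuration snapshot taken %s\n" % timestamp)
        for name in KEY_PARAMETERS:
            f.write("%s = %s\n" % (name, get_variable(name)))
```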

The function then creates a CSV file to store historical performance metrics, writes a header row with column names, queries the monitoring database for all metrics, and writes each row of metrics to the CSV file. Exporting this historical data allows you to analyze performance trends even if the original monitoring database is lost or corrupted.
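The CSV export itself can be sketched with Python’s standard `csv` module; the column names are assumptions, and the rows would come from a query against the monitoring database:

```python
import csv

METRIC_COLUMNS = ["collected_at", "metric_name", "metric_value"]

def export_metrics_csv(path, rows):
    """Dump historical metric rows (already fetched from the monitoring
    database) into a CSV file with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(METRIC_COLUMNS)
        writer.writerows(rows)
```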

Next, the function creates another CSV file to store configuration change history, exports all configuration changes from the monitoring database, creates yet another CSV file to store database size information, queries the information_schema to calculate the size of each database, and writes the database names and sizes to the CSV file. The database size information is particularly useful for capacity planning and for determining appropriate buffer pool sizes during restore operations.
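The database-size step can be sketched as a standard `information_schema` aggregation plus a CSV writer; the exact query and column names here are illustrative, not the repo’s verbatim code:

```python
import csv

# data_length + index_length approximates each schema's on-disk footprint.
DB_SIZE_SQL = """
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema
"""

def export_database_sizes(path, rows):
    """Write (schema, size_mb) rows - as returned by DB_SIZE_SQL - to CSV."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["database", "size_mb"])
        w.writerows(rows)
```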

Finally, the function reports the successful completion of the backup, lists all the files that were created, and returns the backup directory path for potential further processing. This backup includes not just the database data but also sufficient configuration and performance information to help with routine troubleshooting and optimization.

This enriched backup forms the basis for intelligent restores and meaningful performance comparisons.

Recovery with Configuration Analysis

When restoring a database, it’s not always appropriate to use the same configuration as before. Let’s look at a script that restores a database and adjusts its configuration to the current environment.

This function begins by announcing the start of the intelligent restore process. It then opens and reads the configuration file from the backup, parses the file to extract parameter names and values, and displays key configuration values from the backup. This analysis provides a baseline for comparison with the current server’s capabilities.
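Parsing the snapshot is straightforward; here is a small sketch that assumes the `name = value` format used when the snapshot was written:

```python
def parse_config_snapshot(text):
    """Parse 'name = value' lines from a snapshot file into a dict,
    skipping comment and blank lines."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        config[name.strip()] = value.strip()
    return config
```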

Next, the function reads and parses the database size information from the backup, calculates the total size of all databases, determines the current server’s capabilities, and displays the total database size and server capabilities. Understanding both the database size and server capabilities is essential for calculating appropriate configuration values.

Next, the function calculates optimal MySQL configuration settings based on the server’s capabilities and database size, recommending a buffer pool size large enough to hold the entire database but capped at 75% of total memory. It also determines the number of buffer pool instances, a suitable I/O capacity based on disk type, and max connections using both memory- and CPU-based heuristics.

Additionally, it sets values for thread cache size (25% of max connections) and table open cache (based on estimated number of tables). These values are printed out for the user to review before proceeding.
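The calculation can be sketched as a pure function of the server’s capabilities. The overall shape (75% memory cap, 25% thread cache ratio) follows the description above, but the remaining ratios and caps are illustrative assumptions, not fixed MySQL rules:

```python
def recommend_config(total_memory_gb, cpu_cores, db_size_gb,
                     ssd=True, estimated_tables=2000):
    """Heuristic configuration recommendations for the target server."""
    # Buffer pool: large enough for the data set, capped at 75% of RAM.
    buffer_pool_gb = min(db_size_gb, total_memory_gb * 0.75)
    # One instance per GB of buffer pool, between 1 and 8.
    instances = max(1, min(8, int(buffer_pool_gb)))
    # SSDs can sustain far more IOPS than spinning disks.
    io_capacity = 2000 if ssd else 200
    # Max connections: bounded by both memory- and CPU-based heuristics.
    max_connections = min(int(total_memory_gb * 20), cpu_cores * 50)
    return {
        "innodb_buffer_pool_size": int(buffer_pool_gb * 1024 ** 3),
        "innodb_buffer_pool_instances": instances,
        "innodb_io_capacity": io_capacity,
        "max_connections": max_connections,
        "thread_cache_size": max(8, max_connections // 4),  # 25% of max conns
        "table_open_cache": max(400, estimated_tables * 2),
    }
```

For example, restoring a 20GB database onto a 64GB, 16-core server yields a 20GB buffer pool (well under the 75% cap) rather than blindly reusing the source server’s setting.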

Next, the script compares the recommended configuration with the one used in the backup. It prints a side-by-side comparison table showing each parameter, its value in the backup (if available), the newly calculated value, and the reason for any changes. If a parameter is missing in the backup, it’s marked as such. If the value hasn’t changed, it’s labeled “Same.” Otherwise, the script states that the change was made to align with the current hardware. This step helps users understand how and why the configuration is being tailored for the new environment.
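The comparison logic reduces to a small pure function; the reason labels below mirror the description, though the exact wording in the repo’s script may differ:

```python
def compare_configs(backup_config, recommended):
    """Build side-by-side comparison rows: (parameter, backup value,
    recommended value, reason for any difference)."""
    rows = []
    for name, new_value in recommended.items():
        old_value = backup_config.get(name)
        if old_value is None:
            reason = "Not present in backup"
        elif str(old_value) == str(new_value):
            reason = "Same"
        else:
            reason = "Adjusted for current hardware"
        rows.append((name, old_value, new_value, reason))
    return rows
```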

Before making any changes or restoring data, the script pauses to get user confirmation. If the user does not explicitly respond with “yes,” the restore process is canceled. This safeguard ensures the user has full control and can opt out if the recommended changes don’t look suitable. Once the user approves, the script uses `util.loadDump()` to restore the database from the backup directory, ignoring MySQL version mismatches and resetting any previous load progress. This method handles importing all schema and data files stored during the logical backup, effectively recreating the databases on the new server.

Finally, the script applies the recommended configuration settings to the MySQL server using `SET GLOBAL`. Each change is applied one at a time, and the script prints a confirmation message for every parameter successfully set. After this step, the server is both restored and tuned based on current hardware, ensuring optimal performance for the workload.
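This final step can be sketched as follows; `run_sql` is injected so the logic is testable, and inside MySQL Shell it would wrap `session.runSql`:

```python
def apply_config(settings, run_sql):
    """Apply each recommended setting with SET GLOBAL, one at a time,
    printing a confirmation for each parameter.

    Only dynamic variables can be set this way - static ones need a
    my.cnf edit and a server restart instead.
    """
    applied = []
    for name, value in settings.items():
        statement = "SET GLOBAL %s = %s" % (name, value)
        run_sql(statement)
        print("Applied:", statement)
        applied.append(name)
    return applied
```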

With this configuration analysis complete, we’re no longer restoring a snapshot of the past — we’re provisioning an environment tuned for the present, ready for real testing and confident deployment.

Creating an Automation Script for MySQL Shell

Now that we’ve created individual functions for various aspects of MySQL performance tuning, we can combine them all by creating an automation script that ties everything together.

By leveraging automation through MySQL Shell, you’ll spend less time on routine database maintenance and more time doing what matters. Your database will be more reliable, more performant, and easier to manage, even as it scales to handle larger workloads.

Before You Begin

This code is a foundation for automating MySQL performance tuning. Before it can be run, you will need to fill in the parts that are currently just placeholders. These placeholders are there to show where you should add your code, because every MySQL setup is unique. What you want to monitor, how you want to tune your database, the backups you want to take, and where you want to store those backups will depend on your specific server environment and your personal preferences.

Regarding the backup folder paths and files: when the script mentions something like ‘/path/to/your/backup’, this is just a placeholder for the actual folder location on your machine where you want to save or load backups. You need to change this to a folder path that exists on your system. For example, on Windows it might look like C:\Users\YourName\MySQLBackups\, while on macOS or Linux it might be something like /home/username/mysql_backups/. If you don’t update this path, the script won’t find the files it needs, and backups won’t save correctly.

To prepare your computer or server for this automation, make sure that the folders you want to use for backups or loading configuration files already exist. The script does not create folders automatically, so this step is essential. Also, if your script needs to read files like configuration files or backup files, those files must be in the exact locations you specify in the script. Make sure to update the script to point precisely to those folder paths.

The Key Components of the Automation System

The key components of our automation system that I’ve covered in this series of articles are:

  1. `setupMonitoringDatabase()`: Creates the necessary database and tables for storing metrics and configuration changes
  2. `collectMySQLMetrics()`: Gathers performance metrics from MySQL and stores them in the database
  3. `setupPrometheusEndpoint()`: Exposes MySQL metrics in a format that Prometheus can scrape
  4. `adjustBufferPoolSize()`: Analyzes buffer pool usage and adjusts its size accordingly
  5. `analyzePerformanceTrends()`: Generates reports of performance trends over time
  6. `backupWithConfig()`: Creates backups that include configurations
  7. `intelligentRestore()`: Restores a database and adapts its configuration to the current environment
  8. `automatePerformanceTuning()`: Ties everything together into a comprehensive automation solution

Building the Automation Script

This function begins by setting default intervals for monitoring (5 minutes), tuning (1 hour), reporting (1 day), and backup (1 week), but allows these to be customized through a configuration object. It then ensures the monitoring database exists by calling our `setupMonitoringDatabase()` function.

The function initializes timers to track when each task was last performed, and then enters a continuous loop. It checks if it is time to collect metrics based on the monitoring interval and calls our `collectMySQLMetrics()` function if it is.

The function then checks if it’s time to perform tuning based on the tuning interval. If it is, it calls our `adjustBufferPoolSize()` and `adjustTableOpenCache()` functions. If any configuration changes are made, it immediately collects new metrics to measure the impact of those changes.

Finally, on each pass through the loop, the function performs two further checks and carries out whichever of these key actions is due:

  • If it’s time to generate a report based on the reporting interval, it calls our `analyzePerformanceTrends()` function.
  • If it’s time to perform a backup based on the backup interval, it calls our `backupWithConfig()` function.

The function then sleeps briefly to avoid high CPU usage. This allows the automation script to provide a solid foundation for keeping your MySQL server optimized around the clock.
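The scheduling logic described above can be sketched as an interval-driven loop. The task names, default intervals, and injectable `now`/`sleep`/`iterations` parameters are assumptions made so the sketch can be exercised without a live server (the real script runs with the defaults, forever):

```python
import time

DEFAULT_INTERVALS = {
    "monitor": 300,      # collect metrics every 5 minutes
    "tune": 3600,        # tune once an hour
    "report": 86400,     # report daily
    "backup": 604800,    # back up weekly
}

def run_automation(tasks, intervals=None, now=time.time,
                   sleep=time.sleep, iterations=None):
    """Run each task whenever its interval has elapsed.

    `tasks` maps a task name ("monitor", "tune", ...) to a callable,
    e.g. collectMySQLMetrics or backupWithConfig.
    """
    intervals = {**DEFAULT_INTERVALS, **(intervals or {})}
    last_run = {name: 0.0 for name in intervals}
    count = 0
    while iterations is None or count < iterations:
        current = now()
        for name, interval in intervals.items():
            if name in tasks and current - last_run[name] >= interval:
                tasks[name]()
                last_run[name] = current
        sleep(1)  # brief pause to avoid spinning the CPU
        count += 1
```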

You can start the automation with default settings:
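For example, the intervals can be expressed as a simple configuration object; the key names below are illustrative — match them to whatever your `automatePerformanceTuning()` implementation expects:

```python
# Customizable intervals, in seconds.
automation_config = {
    "monitoringInterval": 300,     # collect metrics every 5 minutes
    "tuningInterval": 3600,        # tune once an hour
    "reportingInterval": 86400,    # report daily
    "backupInterval": 604800,      # back up weekly
}

# Inside MySQL Shell you would then call, e.g.:
# automatePerformanceTuning(automation_config)
```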

When executed, the automation script will produce continuous output as it collects metrics, performs tuning, generates reports, and creates backups.

If you’re interested in learning more about MySQL memory management, performance tuning, and best practices for optimizing the InnoDB buffer pool, the MySQL Reference Manual is a great place to dive deeper.

Conclusion and Next Steps

I hope that the techniques that I’ve explained in this series serve you well as you build and maintain high-performance database environments that can scale with your growing needs.

The series has taken you from the basics of MySQL Shell all the way to automation techniques. The foundation we’ve built is extensible and adaptable: you can add new metrics, implement additional optimization algorithms, integrate with other tools in your infrastructure, or customize the decision-making logic to match your specific requirements.

In this final article, we’ve created a complete automation system for monitoring, analyzing, and optimizing MySQL performance using MySQL Shell. This system collects performance metrics at regular intervals, stores them in a dedicated database, analyzes them to identify optimization opportunities, automatically adjusts MySQL configuration based on actual usage patterns, and provides tools for analyzing performance trends.

By implementing this automation system, you can ensure that your MySQL server remains optimized even as conditions change. The system continuously monitors performance, makes data-driven decisions about configuration changes, and provides valuable insights into performance trends over time. By writing scripts like the ones we’ve demonstrated, you can automate nearly every aspect of MySQL performance tuning and management, saving time while keeping your servers in good shape.

Remember that this is just a starting point. You can extend the system to monitor and optimize additional MySQL parameters, integrate with external monitoring tools like Prometheus and Grafana, or add more analysis and decision-making logic. Of course, automation isn’t something you set once and forget. As your environment evolves, your scripts will need to evolve too. The examples we’ve covered are meant to be a starting point – a solid foundation you can build on to create a performance tuning system made to your specific needs.

About the author

Chisom Kanu

I am a software developer and technical writer dedicated to producing clear and concise documentation. I also enjoy solving problems, reading, and learning.