What are the biggest challenges in monitoring SQL Server, and what should you expect from a third-party tool?
Redgate has just published the 2019 State of SQL Server Monitoring Report, so I thought I'd read it to find out whether its results chime with my own experience talking to large, distributed enterprises about the monitoring challenges they face.
This is the second edition of the report, based on a survey of 820 database professionals from around the world, so I was also interested to see what trends are emerging.
This is an in-depth report and its 24 pages cover everything from how organizations currently monitor their SQL Server instances right through to what respondents thought would be the biggest challenge over the next 12 months.
The highlights for me were:
- Migrations are a major challenge for 2019 – Last year, security and compliance was right up there as the big concern, which isn't really surprising. This year, the end of support for SQL Server 2008 R2, along with increasing interest in the cloud, has seen migrations move to the top spot.
- Estates continue to grow – Organizations with fewer than 10 servers now make up 33% of the market, compared to 37% last year, and those with 10-49 servers have also fallen by an average of 4%. In contrast, there has been an increase of around 4% in those with estates of 50 servers or more – estates which in some cases now exceed 1,000 servers.
- Adoption of cloud technologies is increasing – Cloud usage is on the up, with 44% of organizations now using SQL Server on VMs in Azure, Amazon and the Google Cloud at least some of the time, 38% using Azure SQL Database, and 23% using Azure Managed Instances.
- Monitoring is key to managing large estates – 43% of respondents to the survey now use a third-party monitoring tool, a figure which rises to 60% in organizations with 50 SQL Servers or more.
This resonates with what I’ve been hearing when talking to customers, particularly the bigger ones where their SQL Server estates are increasing in size and complexity. A lot of them are telling me that they’re now looking into how they can use the cloud for some projects while leaving legacy databases on on-premises SQL Server, for example.
And many of them, like those in the report, are considering a third-party monitoring tool because it's becoming increasingly difficult to monitor and manage their growing estates. So what should you expect from such a tool?
A global overview of your whole estate
This has to be top of the list for me. As SQL Server estates grow and become more complex, a global overview on one central web-based interface provides a handy way to check the status of every server in seconds, not hours. It's also worth finding out whether the tool offers a way of grouping servers, so that you can, for example, bring together those holding business-critical or sensitive data.
Monitoring servers wherever they are
Connected to the growth of estates is their changing nature, with servers in different data centers, or on different networks with distinct security protocols. Any third-party monitoring tool should be able to handle this, monitoring on-premises servers, servers running on VMware, and Azure-based servers at the same time, on the same screen.
Fast, low-impact installation
A good third-party monitoring tool should be up and running in a matter of days, and should limit data collection to lightweight, efficient SQL operations, using low-overhead frameworks such as Extended Events.
The installation should not require agents on each monitored SQL Server instance. Instead, it should minimize the exposed "surface area" to reduce risk, and perform all data processing on a separate server. Finally, it should be easy to view the actions the tool itself takes to capture the monitoring data.
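To make the "lightweight collection" point concrete, here is a minimal sketch of the kind of server-side Extended Events session a monitoring tool might create. The session name, the one-second threshold, and the ring buffer target are illustrative assumptions, not taken from any particular tool.

```sql
-- Illustrative Extended Events session: capture only statements that
-- run longer than one second, keeping overhead on the monitored
-- server low. Names and thresholds here are hypothetical.
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE duration > 1000000  -- duration is measured in microseconds
)
ADD TARGET package0.ring_buffer  -- in-memory target; no extra disk I/O
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```

Because Extended Events fires inside the engine and filters events before they are dispatched, this style of collection is far cheaper than agent-based polling or tracing everything and filtering later.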
Focused metrics and customizable alerts
The solution will need a focused set of performance metrics that highlights the queries having the biggest impact, as well as customizable alerts for the operational and performance issues that matter most to you and your business.
A nice-to-have is alerts that can be grouped, to avoid the alert overload that is common when first introducing a monitoring tool. If you're deploying changes frequently, also look for a solution that marks on the performance timeline when deployments were made and which database they were made to.
Baselines and trending information
When monitoring SQL Server, it’s incredibly useful to have baselines to provide a quick guide to understanding the significance of events like performance spikes and whether they need attention. Trends are also valuable because they can show, for example, the probable point in the future when new resources will be required, which is essential for effective planning.
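As a simple illustration of what a baseline might look like in practice, the query below computes an average and standard deviation of CPU usage per hour of day over the previous two weeks. The table name and columns (dbo.metric_samples, sample_time, cpu_percent) are hypothetical stand-ins for whatever a monitoring tool actually stores.

```sql
-- A hedged sketch of a per-hour CPU baseline, assuming a hypothetical
-- dbo.metric_samples table populated by a monitoring tool.
SELECT
    DATEPART(hour, sample_time) AS hour_of_day,
    AVG(cpu_percent)            AS baseline_cpu,
    STDEV(cpu_percent)          AS cpu_stdev  -- spikes well beyond the
                                              -- stdev band merit a look
FROM dbo.metric_samples
WHERE sample_time >= DATEADD(day, -14, SYSUTCDATETIME())
GROUP BY DATEPART(hour, sample_time)
ORDER BY hour_of_day;
```

Comparing tonight's 02:00 spike against the 02:00 baseline, rather than against a flat threshold, is what tells you whether it is the nightly index rebuild or something that genuinely needs attention.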
Easy sharing of information
Finally, I'd recommend looking for a solution that lets you share information easily. If your estate can only be monitored from a single desktop, the value of the insights it provides is limited. I've seen enterprises, for example, where developers and managers can see the global overview screen as they work, so they know immediately if a deployment made that morning is causing a performance issue.
If you're looking for information on SQL Server monitoring, the State of SQL Server Monitoring Report is a great place to start. It includes insights into the issues caused by migrations, getting to grips with growing estates, which tooling is in use, and staying on top of multiple database systems.