1 November 2017

Preparing your data platform for peak period sales

Guest post

This is a guest post from Coeo. Europe's most trusted analytics and data management expert, Coeo is the number one provider of database strategy in the Retail, Financial services and Gaming industries, and delivers technology strategy and support for businesses who need to get the most from their data.

The Coeo team hold more Microsoft certifications than any other data platform specialist in Europe and are passionate about sharing their knowledge and expertise to help customers become industry leaders.

In the last few years, Black Friday and Cyber Monday have become synonymous with both getting a good bargain and causing retail pandemonium. However, times are changing. News items about people queueing outside shops at 4am have been replaced with stories about how much consumer spending has moved online. Quite some achievement for a couple of days still relatively new to the British retail calendar.

Everyone plans to proactively maintain their data platform, but the reality for some organizations is often "If it ain't broke, don't touch it." We all know that tinkering can be bad, but we also know that proactive maintenance is good, often essential. So if your business is facing a date like Black Friday (it's coming up fast and falls on 24 November this year), how can you prepare your data platform for the increase in traffic?

Testing high availability

Although it’s often confused with a full disaster recovery test, it’s just as important to know that a routine failover cluster or Availability Group failover from an active to an inactive node works as expected. In stable production environments, database servers can go months without being failed over – increasing the chances of resources being missing after a failover. It doesn’t need to happen often, but manually failing over a production workload during a period of planned downtime helps confirm that the storage, networking, logins, and scheduled tasks the database server needs are still available.
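One simple way to catch resources going missing after a failover is to keep a checklist of the logins, shares, and scheduled tasks the server depends on, then diff it against what’s actually present on the newly active node. A minimal sketch of that idea in Python – all the resource names here are invented examples:

```python
# Hypothetical checklist of resources the database server depends on.
expected = {
    "logins": {"app_service", "report_reader"},
    "scheduled_tasks": {"nightly_backup", "index_maintenance"},
}

# What was actually found on the node after the manual failover (example data).
found = {
    "logins": {"app_service"},
    "scheduled_tasks": {"nightly_backup", "index_maintenance"},
}

def failover_gaps(expected, found):
    """Return resources expected on the active node but missing after failover."""
    return {kind: sorted(names - found.get(kind, set()))
            for kind, names in expected.items()
            if names - found.get(kind, set())}

print(failover_gaps(expected, found))  # e.g. {'logins': ['report_reader']}
```

Running a check like this as part of the planned failover test turns "did anything go missing?" from a gut feeling into a concrete, repeatable answer.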

Testing data quality

Knowing that the data in reports is accurate is always important but finding out it’s wrong during a peak period is probably the worst possible situation. It’s crucial, then, to test the quality of the data that reports – whether existing or ad-hoc – provide well in advance. Do the totals they provide match what would be calculated manually? Is all the expected data appearing on reports? If reports are only run once a quarter or even once a year, it’s common to find changes in source data not being reflected in reports – and data being missed.
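The "do the totals match a manual calculation?" question above can be automated: recompute the totals directly from the source rows and flag any region where the report disagrees. A small illustrative sketch – the sales rows and report figures are made up for the example:

```python
# Example sales rows (source data) and report totals to validate against them.
sales = [
    {"region": "UK", "amount": 120.0},
    {"region": "UK", "amount": 80.0},
    {"region": "DE", "amount": 50.0},
]

# Totals as shown on the report being tested (hypothetical figures).
report_totals = {"UK": 200.0, "DE": 45.0}

def check_report_totals(rows, report_totals):
    """Recompute totals from source rows and flag mismatches per region."""
    recomputed = {}
    for row in rows:
        recomputed[row["region"]] = recomputed.get(row["region"], 0.0) + row["amount"]
    return {region: (report_totals.get(region), recomputed[region])
            for region in recomputed
            if abs(report_totals.get(region, 0.0) - recomputed[region]) > 0.005}

print(check_report_totals(sales, report_totals))  # DE disagrees: report 45.0 vs source 50.0
```

Scheduling a comparison like this well before the peak period gives time to chase down why a source change isn’t reflected in the report.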

Checking network stability

Peak periods can see the amount of network traffic a database server handles increase significantly, often caused by more application traffic or users running larger ad-hoc reports. These bulkier workloads can amplify the effect of unreliable network connectivity between database servers, application servers, and end users – often seen as disconnected sessions or connections timing out. A check of network card statistics for dropped packets and connectivity errors, along with scans of error logs for network warnings, can help find any significant issues.
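As a sketch of that check, interface counters sampled at two points in time give a drop rate that’s easy to trend and alert on. The counter names and figures below are illustrative (on Linux they might come from `/proc/net/dev`; Windows exposes similar statistics through performance counters):

```python
# Two snapshots of network-interface counters (example numbers).
before = {"rx_packets": 1_000_000, "rx_dropped": 120, "rx_errors": 3}
after  = {"rx_packets": 1_250_000, "rx_dropped": 470, "rx_errors": 5}

def drop_rate(before, after):
    """Fraction of received packets dropped between the two snapshots."""
    packets = after["rx_packets"] - before["rx_packets"]
    dropped = after["rx_dropped"] - before["rx_dropped"]
    return dropped / packets if packets else 0.0

rate = drop_rate(before, after)
print(f"dropped {rate:.2%} of packets")  # 350 dropped out of 250,000 received
if rate > 0.001:  # flag anything above 0.1% for investigation
    print("warning: drop rate worth investigating before the peak period")
```

A sustained rise in this rate during load testing is exactly the kind of early warning that’s much cheaper to act on in October than on Black Friday itself.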

Server patching

While everyone knows that patching services is important, it’s typically one of the most delayed pieces of proactive maintenance. A large backlog of outstanding patches not only means more downtime to apply them, but more time to first test them. If that’s the situation your database server is in, then it’s probably worth scheduling several downtime windows rather than one large one to patch Windows and SQL Server separately. It’s also important to schedule this for as soon as possible in case of any incompatibilities or error messages which need to be investigated. Finally, it’s also good practice to patch inactive failover cluster nodes first, and then patch the active nodes several days later.

Baselining performance

Knowing how hard a server is working is important, but not as important as knowing how effective its performance is, by relating server performance statistics to business productivity metrics. If a database server is running at 80% utilization, but supporting 1,200 POS transactions a minute, that might be nothing to worry about. Whereas 80% utilization for only 12 transactions a minute may well be the sign of a serious bottleneck.
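The comparison above boils down to a single ratio: how much utilization each unit of business throughput costs. A tiny Python sketch using the figures from the paragraph:

```python
def cost_per_transaction(cpu_percent, transactions_per_minute):
    """CPU percentage points consumed per transaction per minute --
    lower means the server is delivering more useful work per unit of load."""
    return cpu_percent / transactions_per_minute

healthy = cost_per_transaction(80, 1200)   # ~0.067 points per txn/min
worrying = cost_per_transaction(80, 12)    # ~6.7 points per txn/min

print(healthy, worrying)
```

The same 80% utilization costs a hundred times more per transaction in the second case. Capturing a baseline of this ratio during normal trading makes it obvious when the relationship drifts ahead of, or during, a peak period.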

If you need assistance with any of your preparations for your peak period,
Coeo can help, delivering retail data and analytics solutions through our
experienced SQL Server consultants. Find out more about us.


