{"id":98447,"date":"2023-11-06T00:31:13","date_gmt":"2023-11-06T00:31:13","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=98447"},"modified":"2023-10-19T20:51:13","modified_gmt":"2023-10-19T20:51:13","slug":"moving-sql-server-to-the-cloud-modernizing-stack-overflow-for-teams","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/database-administration-sql-server\/moving-sql-server-to-the-cloud-modernizing-stack-overflow-for-teams\/","title":{"rendered":"Moving SQL Server To The Cloud: Modernizing Stack Overflow for Teams"},"content":{"rendered":"<p>Earlier this year, we migrated the entire <a href=\"https:\/\/stackoverflow.co\/teams\/\" target=\"_blank\" rel=\"noopener\">Stack Overflow for Teams<\/a> platform to Azure. This was a lengthy endeavour and <a href=\"https:\/\/stackoverflow.com\/users\/997973\/wouter-de-kort\" target=\"_blank\" rel=\"noopener\">Wouter de Kort<\/a>, one of our core engineers, wrote about multiple technical aspects of the project in these posts:<\/p>\n<ul>\n<li><a href=\"https:\/\/stackoverflow.blog\/2023\/08\/30\/journey-to-the-cloud-part-i-migrating-stack-overflow-teams-to-azure\/\" target=\"_blank\" rel=\"noopener\">Journey to the cloud part I: Migrating Stack Overflow Teams to Azure<\/a><\/li>\n<li><a href=\"https:\/\/stackoverflow.blog\/2023\/09\/05\/journey-to-the-cloud-part-ii-migrating-stack-overflow-for-teams-to-azure\/\" target=\"_blank\" rel=\"noopener\">Journey to the cloud part II: Migrating Stack Overflow for Teams to Azure<\/a><\/li>\n<\/ul>\n<p>In this post, I\u2019ll share a little more detail about the SQL Server portions of the migration, what we\u2019ve done since, and how we\u2019ve approached modernizing our environment while migrating to Azure. 
I&#8217;ll talk about a few key choices we made and trade-offs between simplicity and risk in case they are helpful as you face similar decisions.<\/p>\n<h4>Background<\/h4>\n<p>In our New York (actually, New Jersey) and Colorado data centers, the databases supporting Teams ran on four physical servers, two in each data center. The infra consisted of a Windows Server Failover Cluster (we\u2019ll call it <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">NYCHCL01<\/code>), and four servers (<code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">NY-CHSQL01<\/code>\/<code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">02<\/code> and <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">CO-CHSQL01<\/code>\/<code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">02<\/code>), hosting 103 databases in a single availability group. Primary was always on one of the NY nodes, with a sync secondary on the other NY node and two async secondaries in Colorado:<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-1.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-1.png\" alt=\"What Teams looked like in the data center\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">What Teams looked like in the data center<\/em><\/p>\n<h4>But we wanted out of the data center<\/h4>\n<p>In order to migrate to the cloud, we built a mirrored environment in Azure: two Azure VMs in East US and two Azure VMs in West US. 
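Throughout steps like these, it helps to confirm each replica's current role and commit mode from the primary. A sketch of such a check follows; the AG name [TeamsAG] is an assumption for illustration, not a name from this environment:

```sql
-- Sketch: list each replica's role and sync mode for one AG.
-- The AG name [TeamsAG] is assumed, not taken from the post.
SELECT ar.replica_server_name,
       ars.role_desc,                    -- PRIMARY / SECONDARY
       ar.availability_mode_desc,        -- SYNCHRONOUS_COMMIT / ASYNCHRONOUS_COMMIT
       ars.synchronization_health_desc
FROM sys.availability_replicas AS ar
JOIN sys.dm_hadr_availability_replica_states AS ars
  ON ar.replica_id = ars.replica_id
JOIN sys.availability_groups AS ag
  ON ag.group_id = ar.group_id
WHERE ag.name = N'TeamsAG';
```

Running this before and after each topology change gives a quick sanity check that every replica is in the state you expect.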
These servers joined the same cluster in the data center, and ran the same version of the operating system (Windows Server 2016) and SQL Server (2019).<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-2.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-2.png\" alt=\"A new mirrored environment in Azure\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">A new mirrored environment in Azure<\/em><\/p>\n<p>We went with <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/azure-sql\/virtual-machines\/windows\/sql-server-on-azure-vm-iaas-what-is-overview?view=azuresql\" target=\"_blank\" rel=\"noopener\">Azure VMs running &#8220;on-prem&#8221; SQL Server<\/a>, over PaaS offerings like <a href=\"https:\/\/learn.microsoft.com\/en-us\/azure\/azure-sql\/managed-instance\/sql-managed-instance-paas-overview?view=azuresql\" target=\"_blank\" rel=\"noopener\">Azure SQL Managed Instance (MI)<\/a>, for a few reasons:<\/p>\n<ul>\n<li>A rule I tend to strictly follow is to <strong>change as few things as possible<\/strong>. In this case, we felt that sticking with the exact same engine and version would make for a more stable experience.<\/li>\n<li>When possible, we want to make sure <strong>a migration is reversible<\/strong>. Since the source systems were still running SQL Server 2019, and we didn&#8217;t want to upgrade them <em>before<\/em> the migration, we couldn&#8217;t take advantage of newer features that would allow failing over and back between MI and SQL Server 2022.<\/li>\n<li>We already exceeded MI&#8217;s <strong>hard limit of 100 databases<\/strong>. 
Fitting into a managed instance would mean breaking it up so not all databases were on the same instance &#8211; not insurmountable, but not something we&#8217;ve ever done with this system, and we didn&#8217;t want to disrupt the timeline trying it out.<\/li>\n<li>When analyzing and forecasting costs, we just couldn&#8217;t find a sweet spot in the <strong>price\/performance ratio<\/strong> that made sense &#8211; for the same power, a VM running SQL Server is currently the most economical choice for us. Even if it means we still have to manage some of the maintenance (e.g., OS patching and cumulative updates). We&#8217;ll continue to watch MI pricing over time and see if it comes down into an orbit that makes it attractive.<\/li>\n<\/ul>\n<p>Once built and configured, we joined these new nodes to the AG, making them all async secondaries, and removed the Colorado nodes from the AG:<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-3.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-3.png\" alt=\"Getting Colorado out of the picture\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Getting Colorado out of the picture<\/em><\/p>\n<p>We didn\u2019t want to stay in this mode for long since that is a lot of secondaries to maintain. 
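Joining the new Azure nodes and later dropping the Colorado nodes maps to ALTER AVAILABILITY GROUP statements along these lines. This is only a sketch: the AG name, endpoint URL, and automatic seeding are assumptions for illustration, not details confirmed by the post:

```sql
-- Sketch only: AG name, endpoint URL, and seeding mode are assumptions.
-- On the primary: register a new Azure node as an async secondary.
ALTER AVAILABILITY GROUP [TeamsAG]
    ADD REPLICA ON N'TM-E-SQL01'
    WITH (ENDPOINT_URL      = N'TCP://TM-E-SQL01.example.internal:5022',
          AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
          FAILOVER_MODE     = MANUAL,
          SEEDING_MODE      = AUTOMATIC);

-- On the new node itself: join it to the AG (and, for automatic
-- seeding, grant the AG permission to create the databases).
ALTER AVAILABILITY GROUP [TeamsAG] JOIN;
ALTER AVAILABILITY GROUP [TeamsAG] GRANT CREATE ANY DATABASE;

-- Back on the primary: drop a Colorado secondary once it's no longer needed.
ALTER AVAILABILITY GROUP [TeamsAG] REMOVE REPLICA ON N'CO-CHSQL01';
```

With 103 databases, seeding each new replica by backup/restore rather than automatic seeding is equally plausible; the ADD/REMOVE REPLICA shape is the same either way.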
We quickly made <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL01<\/code> synchronous and the other NY secondary async.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-4.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-4.png\" alt=\"Briefly using sync mode to Azure\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Briefly using sync mode to Azure<\/em><\/p>\n<p>Making <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL01<\/code> synchronous and the other NY secondary async let us fail over to Azure (during a maintenance window), making <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL01<\/code> primary and <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL02<\/code> a sync secondary. 
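Those commit-mode changes and the planned failover correspond to statements like the following (a sketch; the AG name and the demoted NY replica name are assumptions):

```sql
-- Sketch: names are illustrative. On the primary, promote the Azure
-- node to synchronous commit so it can become a failover target.
ALTER AVAILABILITY GROUP [TeamsAG]
    MODIFY REPLICA ON N'TM-E-SQL01'
    WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

-- Demote the other NY secondary to async.
ALTER AVAILABILITY GROUP [TeamsAG]
    MODIFY REPLICA ON N'NY-CHSQL02'
    WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

-- During the window, on the synchronized target (TM-E-SQL01),
-- perform the planned, zero-data-loss failover:
ALTER AVAILABILITY GROUP [TeamsAG] FAILOVER;
```

A planned (non-forced) failover requires the target to be a synchronized, synchronous-commit secondary, which is why the mode change has to precede the failover.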
This was not a point of no return, since we could fail back to the data center if we needed to, but we gradually cut remaining ties with the data center by removing the NY secondaries:<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-5.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-5.png\" alt=\"Saying goodbye to the data center\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Saying goodbye to the data center<\/em><\/p>\n<p>Now we had the AG fully in Azure, with zero customer impact other than the announced maintenance window.<\/p>\n<div style=\"padding-left: 24px; margin: auto 0 24px 8px; border-left: 3px solid #aaa; color: #888;\">\n<p><b style=\"color: #666;\">Some notes<\/b><\/p>\n<ul>\n<li>I\u2019ve intentionally left a lot of the complexity out, as this was more than a simple AG failover. The operation required coordinating moving the application to Azure at the same time, since we didn\u2019t want to have the app running in the data center talking to databases in Azure or vice-versa.<\/li>\n<li>The failover itself should have been a tiny blip, measured in seconds, which wouldn\u2019t even require a maintenance window. I\u2019m glad we did plan for a window because the migration wasn\u2019t as smooth as it should have been (the failover due to network constraints, and other pieces due to various application-side issues). Wouter talked about some of that in <a href=\"https:\/\/stackoverflow.blog\/2023\/09\/05\/journey-to-the-cloud-part-ii-migrating-stack-overflow-for-teams-to-azure\/\" target=\"_blank\" rel=\"noopener\">the second part<\/a> of his blog series.<\/li>\n<\/ul>\n<\/div>\n<h3>We thought we were done<\/h3>\n<p>During and after that migration, we came across some sporadic and not-so-well-documented issues with cross-region network performance. 
We\u2019re talking about transfer speeds that were 60X slower at times &#8211; observed while backing up, restoring, or copying files between east and west nodes. While we could mitigate some of this by using Azure storage exclusively, this would cause double effort for some operations. In addition, there was valid concern that this unreliable network could cause broader latency for log transport and could even jeopardize successful failovers between regions. We also theorize that it contributed to some of the struggles we faced on migration day.<\/p>\n<p>Several colleagues ran boatloads of tests using <a href=\"https:\/\/iperf.fr\/\" target=\"_blank\" rel=\"noopener\">iPerf<\/a> and other tools. We discovered that newer generation VM images running Windows Server 2019, while not completely immune to the issue, were much less likely to experience drastic fluctuations in transfer performance than our older gen images running Windows Server 2016. We also believed (but couldn\u2019t explicitly prove) that the old cluster\u2019s ties to the data center might contribute to the issue, since we could reproduce sluggishness or failures on those servers when performing operations that involve domain controllers (e.g., creating server-level logins or creating computer objects) &#8211; issues that never occur on identically configured servers that aren\u2019t part of that cluster.<\/p>\n<h4>The new plan<\/h4>\n<p>We made a plan to ditch the old cluster and get off of Windows Server 2016 completely. This puts us in a much better position to have reliable cross-region failovers, helps us clean up some tech debt, and paves the way for upgrading to SQL Server 2022. Since we can&#8217;t just take new maintenance windows on the fly, we would have to do this with minimal downtime. For me, this means <em>no data movement<\/em> (e.g. manual backup \/ restore of all 103 databases). 
We also wanted to do this in a simple way, which for me means <em>no <a href=\"https:\/\/learn.microsoft.com\/en-us\/sql\/database-engine\/availability-groups\/windows\/distributed-availability-groups?view=sql-server-ver16\" target=\"_blank\" rel=\"noopener\">distributed availability groups<\/a><\/em>. So how would we move an AG to a new cluster with minimal downtime, no data movement, and without using a distributed AG?<\/p>\n<p>We started by evicting the NY and CO nodes from the old cluster. Then we created a new cluster in Azure (let\u2019s call it <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">AZTMCL01<\/code>), and four new VMs all running Windows Server 2019 (we&#8217;ll call them <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL03<\/code>\/<code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">04<\/code> and <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-W-SQL03<\/code>\/<code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">04<\/code>). The two 03 nodes ran SQL Server 2019, and were added as nodes to the existing cluster. 
The two 04 nodes ran SQL Server 2022, and were added as nodes to the new cluster.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-6.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-6.png\" alt=\"A new cluster has entered the chat\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">A new cluster has entered the chat<\/em><\/p>\n<p>Next, we removed the west 01\/02 nodes from the AG, and joined the new 03 nodes as async secondaries.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-7.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-7.png\" alt=\"Losing the west 01\/02 secondaries\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Losing the west 01\/02 secondaries<\/em><\/p>\n<p>Then we made <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL03<\/code> a sync secondary, and kicked <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL02<\/code> out of the AG.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-8.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-8.png\" alt=\"Losing one more Windows Server 2016 node\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Losing one more Windows Server 2016 node<\/em><\/p>\n<p>After that, we failed over to <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 
3px;\">TM-E-SQL03<\/code>, made <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-W-SQL03<\/code> a sync secondary temporarily, and removed <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL01<\/code> from the AG.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-9.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-9.png\" alt=\"And then there were two\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">And then there were two<\/em><\/p>\n<h4>The trickiest part<\/h4>\n<p>Next up, how would we actually move the 03 nodes to the new cluster? As mentioned before, we didn\u2019t want to use distributed AGs to additional nodes already in the new cluster, though this would probably be a common and logical suggestion. Instead, we developed a plan to use a short maintenance window and simply move the existing nodes out of the old cluster and into the new cluster. Now, that sounds simple, but there are a lot of steps, and we can\u2019t get there while the AG is up and running, so we\u2019d have to perform the following before and during the maintenance window:<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-checklst.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 100%; min-width: 320px; border: 2px solid #333;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-checklst.png\" alt=\"Checklist with guesstimates for duration\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Checklist with guesstimates for duration<\/em><\/p>\n<p>There is some risk there, of course, and a few points of no (or at least cumbersome) return. 
If anything went wrong while the AG was offline or while the primary was the single point of failure, we\u2019d have to resort to the west node (or a full restore). And if the west node couldn\u2019t join successfully, we\u2019d have to seed the AG there from scratch, and would have a single point of failure until <em>that<\/em> finished. This is why we take a round of backups before the window and a round of log backups immediately after putting the app into read-only mode.<\/p>\n<p>Spoiler: nothing went wrong. The transition was smooth, and the app was in read-only mode for a grand total of 25 minutes, with the offline portion lasting just 9 minutes (most of this time waiting for AD\/DNS). Could we have avoided those 9 minutes of downtime? Sure. We could have deployed connection string changes to point to an explicit node instead of the listener then deployed another change to set it back. Then the only downtime would have been two brief service restarts. But this is a lot of additional work and pipeline orchestration to elevate the end-user experience &#8211; during an announced maintenance window &#8211; from 9 minutes of \u201coffline for maintenance\u201d to 2 minutes of &#8220;offline for maintenance&#8221; and 7-8 minutes of \u201cyou can read, but you can\u2019t write.\u201d<\/p>\n<p>Once we were in a happy state, we could end the maintenance window and turn the app back on:<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X.png\" alt=\"All Windows Server 2019 now\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">All Windows Server 2019 now<\/em><\/p>\n<h4>Now, on to SQL Server 2022<\/h4>\n<p>With the migration to the new cluster out of the way, we turned our attention to upgrading the environment to SQL 
Server 2022. This time, we could perform a rolling upgrade without a maintenance window and with just a minor failover blip, similar to when we perform patching. We disabled read-only routing for the duration of these operations, knowing that would mean increased workload on the primary.<\/p>\n<p>First, we added the 04 nodes as secondaries, but &#8211; being a newer version of SQL Server &#8211; they were not readable.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X2.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X2.png\" alt=\"First, we made the 2022 nodes secondaries\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">First, we made the 2022 nodes secondaries<\/em><\/p>\n<p>Next, we failed over to <code style=\"font-family: consolas; background: #e4e4e4; padding: 3px 4px 1px 4px; border-radius: 3px;\">TM-E-SQL04<\/code> as primary, which made the 03 nodes unreadable. This transition was the only downtime for customers, and the only point of no return. The most any customer might have been affected was 42 seconds &#8211; the longest any database took to come fully online. Even that database wasn&#8217;t wholly user-facing; it&#8217;s more of a background scheduler.<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X3.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X3.png\" alt=\"Then, we failed over to a 2022 node\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Then, we failed over to a 2022 node<\/em><\/p>\n<p>This is another state we didn&#8217;t want to be in for long. 
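Disabling read-only routing for the duration of the rolling upgrade, as mentioned above, is a small change per replica; a sketch (AG and replica names assumed):

```sql
-- Sketch: clear the routing list each replica would use when it is
-- primary, so read-intent connections stay on the primary instead of
-- being routed to a secondary that may be mid-upgrade.
ALTER AVAILABILITY GROUP [TeamsAG]
    MODIFY REPLICA ON N'TM-E-SQL03'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = NONE));
-- ...repeated for each replica in the AG, and restored afterward.
```

The trade-off is exactly the one noted above: all read-intent traffic lands on the primary until routing is restored.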
Not only were the lower-version nodes unreadable, but the AG could no longer sync to them. This meant the primary couldn&#8217;t truncate its transaction log until the secondaries were all brought up to the same version (or removed from the AG). For expediency, we upgraded the 03 nodes to SQL Server 2022 in place; this isn\u2019t my favorite approach, but it sure is simpler than building yet more VMs and working those into the mix:<\/p>\n<p><a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X4.png\" target=\"_blank\" rel=\"noopener\"><img decoding=\"async\" style=\"width: 95%; min-width: 320px;\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2023\/09\/migr-X4.png\" alt=\"Finally, we upgraded the 2019 nodes\" \/><\/a><br \/>\n<em style=\"color: #777; font-size: 0.875rem;\">Finally, we upgraded the 2019 nodes<\/em><\/p>\n<p>If those upgrades had needed more time, then to avoid undue duress on the primary, we would have just removed those nodes from the AG, and added them back when they (or their replacements) were ready.<\/p>\n<p>At this point, all four nodes are running on Windows Server 2019 and SQL Server 2022, and everything has been smooth so far. Hopefully there is some valuable information here that can help you in your next migration or upgrade.<\/p>\n<p>Next on the list: taking advantage of some of those SQL Server 2022 features, and following similar steps to modernize the public Q &amp; A platform.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Earlier this year, we migrated the entire Stack Overflow for Teams platform to Azure. 
This was a lengthy endeavour and Wouter de Kort, one of our core engineers, wrote about multiple technical aspects of the project in these posts: Journey to the cloud part I: Migrating Stack Overflow Teams to Azure Journey to the cloud&#8230;&hellip;<\/p>\n","protected":false},"author":341115,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143527,53],"tags":[4309,5812],"coauthors":[158980],"class_list":["post-98447","post","type-post","status-publish","format-standard","hentry","category-database-administration-sql-server","category-featured","tag-migration","tag-upgrades"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/98447","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/341115"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=98447"}],"version-history":[{"count":55,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/98447\/revisions"}],"predecessor-version":[{"id":98761,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/98447\/revisions\/98761"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=98447"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=98447"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=98447"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/co
authors?post=98447"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}