{"id":83781,"date":"2019-04-09T14:55:03","date_gmt":"2019-04-09T14:55:03","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/?p=83781"},"modified":"2022-04-24T15:41:54","modified_gmt":"2022-04-24T15:41:54","slug":"processing-data-using-azure-data-lake-azure-data-analytics-and-microsofts-big-data-language-u-sql","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/cloud\/azure\/processing-data-using-azure-data-lake-azure-data-analytics-and-microsofts-big-data-language-u-sql\/","title":{"rendered":"Processing Data Using Azure Data Lake, Azure Data Analytics, and Microsoft\u2019s Big Data Language U-SQL"},"content":{"rendered":"<p>Azure Data Lake Store is an extensive repository in Azure Cloud which can be thought of as a store of varied forms of data such as structured, unstructured, and semi-structured data. There are different ways this data can be loaded to the data store. You can use Azure Data Factory, Azure UI, using languages such as C# or Java SDKs, etc. Once the data is uploaded to Data Lake, you can use U-SQL scripts to process that data.<\/p>\n<p>When working on huge datasets, you might need more processing power and space. Azure Data lake provides an easy and simplified approach to improve development. The Azure Data Lake efficiently manages the data in HDFS (Hadoop Distributed File System). As you might be aware, HDFS brings in a lot of other benefits such as replication, scalability, and durability. This makes Azure Data Lake Store a very beneficial option when you have a huge amount of data being generated in your organization.<\/p>\n<p>I\u2019ll first explain what U-SQL is and how you can use this powerful big data query language to process the data.<\/p>\n<h2>U-SQL<\/h2>\n<p>The data being stored needs to be processed and analyzed to understand the current and future statistics by various departments within the organization. 
One such solution is U-SQL, Microsoft\u2019s big data query language, which unifies the benefits of T-SQL and C#. The use of C# types makes it even more powerful and easy to write. This allows the developer to conceptualize the way data will be processed right at the time of writing the query. You won\u2019t need expertise in T-SQL or C# to write these queries; a fundamental understanding of both languages is enough to move forward.<\/p>\n<p>You can visualize how the ETL process works and, once you have that picture in mind, you will be able to write U-SQL scripts easily. U-SQL helps you extract and transform data into the required form. You write scripts to perform these operations and get the results back in the shape you want. U-SQL supports extracting values from various types of files, such as txt and csv, through the concept of extractors, and you can write your own extractors for other file types. Just as extractors are used to read data in, there are outputters to write data out. The built-in extractor for tab-separated files is <em>Extractors.Tsv<\/em>, and for comma-separated files it is <em>Extractors.Csv<\/em>. Similarly, you can use <em>Outputters.Tsv<\/em> and <em>Outputters.Csv<\/em> depending on the format you want your output to have.<\/p>\n<p>There are two ways to run U-SQL scripts: locally or in the Azure Cloud. When running locally, the data read and written by the scripts lives on your local computer. If you use cloud execution instead, the script runs in the Azure Cloud, which means you are consuming, and paying for, Azure compute and storage resources. You may want to choose local execution over cloud execution, especially during development, as it doesn\u2019t cost you anything. 
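<\/p>\n<p>To make the extractor and outputter pairing concrete, here is a minimal sketch of a read-and-write round trip. The file names and columns are hypothetical, not part of the sample used later in this article:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">\/\/Hypothetical example: read a tab-separated file and write it back out as csv\r\n@rows = EXTRACT Id int,\r\n                Name string\r\nFROM \"\/SomeFolder\/SomeInput.tsv\"\r\nUSING Extractors.Tsv();\r\n\r\nOUTPUT @rows TO \"\/SomeFolder\/SomeOutput.csv\"\r\nUSING Outputters.Csv();<\/pre>\n<p>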
In this article, I am going to demonstrate both the local execution path and Azure Data Lake execution. The first section of the article will show how to use Visual Studio to write and execute U-SQL scripts, and in the later section, you will see how to run the same U-SQL job using Azure Data Analytics and Azure Data Lake.<\/p>\n<h2>Creating and Executing the U-SQL Script Using Visual Studio<\/h2>\n<p>The first thing you should do is set up your environment, making sure that the proper workload is in place. Run the Visual Studio Installer. Navigate to the <em>Workloads<\/em> tab, <em>Data<\/em> <em>storage<\/em> <em>and<\/em> <em>processing<\/em> section. Select <em>Azure Data Lake and Stream Analytics Tools<\/em>. If this has not been installed, install it now.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1275\" height=\"710\" class=\"wp-image-83783\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/c-users-spande-appdata-local-microsoft-windows-in.jpeg\" alt=\"C:\\Users\\spande\\AppData\\Local\\Microsoft\\Windows\\INetCache\\Content.Word\\AZ1E.JPG\" \/><\/p>\n<p>After you have installed the <em>Azure Data Lake and Stream Analytics Tools<\/em> workload, you can create a new project in Visual Studio by selecting the <em>U-SQL<\/em> option from the <em>Azure Data Lake<\/em> section.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"938\" height=\"649\" class=\"wp-image-83784\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-4.png\" \/><\/p>\n<p>You might have noticed that there are various project options available here that give you the flexibility to create unit test projects or class library projects. The class library projects are used when you want to create your own U-SQL objects in C#, like custom outputters or extractors. 
For the scope of this article, select the U-SQL Project and start writing your first U-SQL script.<\/p>\n<p>Before moving ahead and getting your hands dirty writing the U-SQL script, first, download the <a href=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/EmployeeInput.zip\">sample csv file<\/a>. This is how the csv file looks:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-83785\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-5.png\" width=\"433\" height=\"204\" \/><\/p>\n<p>To feed the data to your script, you could hard-code a physical path in your U-SQL script, but this is not a good idea if you want to deploy the script to the Azure Cloud later. Because paths that exist in your local file system won\u2019t exist in the Azure Data Lake account, a hard-coded path would eventually cause your script to fail. So instead of a physical path, use relative paths, which resolve correctly both locally and in Azure Data Lake. 
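<\/p>\n<p>U-SQL also lets you declare paths once as variables and reuse them, which makes them easy to change when you move between environments. This is a minimal sketch; the variable names are my own:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">\/\/Declare relative paths once, then reuse them in EXTRACT and OUTPUT statements\r\nDECLARE @inputPath string = \"\/InputFiles\/EmployeeInput.csv\";\r\nDECLARE @outputPath string = \"\/OutputFiles\/AverageSalaryResults.csv\";<\/pre>\n<p>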
While running the script locally, the data file needs to be copied to an Azure Data Lake catalogue, which you can find by navigating to the Azure Data Lake toolbar and clicking the options\/settings item on the menu.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1035\" height=\"81\" class=\"wp-image-83786\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-6.png\" \/><\/p>\n<p>The file needs to be copied to the location mentioned in the Data root folder path, highlighted below.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"743\" height=\"432\" class=\"wp-image-83787\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/c-users-spande-appdata-local-microsoft-windows-in-1.jpeg\" alt=\"C:\\Users\\spande\\AppData\\Local\\Microsoft\\Windows\\INetCache\\Content.Word\\AZ4E.JPG\" \/><\/p>\n<p>After you navigate to the <em>USQLDataRoot<\/em> folder, create a folder called <em>InputFiles<\/em> and copy the <em>EmployeeInput.csv<\/em> file there. It can then be referenced by the U-SQL script.<\/p>\n<p>Another thing that you might notice here is that your U-SQL file comes with a C# code-behind file, <em>Script.usql.cs<\/em>, which can be used to add custom functions for use in your scripts.<\/p>\n<p>The next step is to rename the <em>Script.usql<\/em> file to <em>TestUsql.usql<\/em>. 
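<\/p>\n<p>As a hypothetical illustration of the code-behind mechanism (the namespace, class, and method names here are my own, not generated by the project), a custom function added to <em>Script.usql.cs<\/em> could look like the following and would then be callable from the script as <code>TestUsqlApp.Helpers.Normalize(State)<\/code>:<\/p>\n<pre class=\"lang:c-sharp theme:ssms2012-simple-talk\">using System;\r\n\r\nnamespace TestUsqlApp\r\n{\r\n    public static class Helpers\r\n    {\r\n        \/\/Trim and upper-case a value before it is compared or aggregated\r\n        public static string Normalize(string value)\r\n        {\r\n            return string.IsNullOrEmpty(value) ? value : value.Trim().ToUpper();\r\n        }\r\n    }\r\n}<\/pre>\n<p>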
Here is the code for the script:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">\/\/Extract values from input \r\n@employeedetails = EXTRACT EmployeeID int,\r\n                      EmployeeName string,\r\n                      State string,\r\n                      Salary decimal,\r\n                      JoiningYear int,\r\n                      Title string\r\nFROM \"\/InputFiles\/EmployeeInput.csv\"\r\n\/\/Use in-built CSV extractor to read the CSV file and skip the first row\r\nUSING Extractors.Csv(skipFirstNRows:1);\r\n\/\/Query for calculating average on Salary field\r\n@average = SELECT State,\r\nAVG(Salary) AS AverageSalary  \r\nFROM @employeedetails\r\nGROUP BY State;\r\n\/\/specify output file and write the headers to the output file\r\nOUTPUT @average TO \"\/OutputFiles\/AverageSalaryResults.csv\"\r\nUSING Outputters.Csv(outputHeader:true);<\/pre>\n<h2>Understanding the U-SQL Script<\/h2>\n<p>The first step is to extract the data from the csv file and then perform the required transformations on it. After these transformations are complete, the results are written to the output file. For the extraction, U-SQL provides various extractors depending on the file type. Here, the input file is in csv format, so a csv extractor is required. Each extracted column is declared with a matching C# type, depending on the type of its data elements.<\/p>\n<p>For collecting the extracted values, the script uses a row-set variable <code>@employeedetails<\/code>. 
You might notice that the naming convention for variable declaration is similar to the T-SQL naming convention, with the <code>@<\/code> sign.<\/p>\n<p>The row-set variable <code>@employeedetails<\/code> will store the extracted values:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">@employeedetails = EXTRACT EmployeeID int,\r\n                      EmployeeName string,\r\n                      State string,\r\n                      Salary decimal,\r\n                      JoiningYear int,\r\n                      Title string\r\nFROM \"\/InputFiles\/EmployeeInput.csv\"<\/pre>\n<p>A CSV extractor is used, and because the column headers are present on the first row of the file, you set <code>skipFirstNRows<\/code> to 1 so that the extractor skips them.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">USING Extractors.Csv(skipFirstNRows:1); <\/pre>\n<p>For the transformation, the <code>AVG<\/code> function calculates the average salary per state. The results are collected into the <code>@average<\/code> variable.<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">@average = SELECT State,\r\nAVG(Salary) AS AverageSalary  \r\nFROM @employeedetails\r\nGROUP BY State;<\/pre>\n<p>The contents are written out to <em>AverageSalaryResults.csv<\/em>, which will be created in the <em>OutputFiles<\/em> folder:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">OUTPUT @average TO \"\/OutputFiles\/AverageSalaryResults.csv\"<\/pre>\n<p><code>Outputters.Csv<\/code> is specified with <code>outputHeader:true<\/code> so that the column headers are written to the output file:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">USING Outputters.Csv(outputHeader:true);<\/pre>\n<h2>Running the Script<\/h2>\n<p>Normally, for running any solution, you use the Start button in the Visual Studio toolbar, but, in this case, you will use the U-SQL toolbar to run the U-SQL script. 
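<\/p>\n<p>Because U-SQL expressions are C# expressions, you can also fold C# method calls directly into a query. As a hypothetical variation on the script above (not part of the sample), this selects upper-cased names for employees who joined after 2015:<\/p>\n<pre class=\"lang:tsql theme:ssms2012-simple-talk\">\/\/Hypothetical variation: C# string and comparison expressions inside a SELECT\r\n@recent = SELECT EmployeeName.ToUpper() AS EmployeeName,\r\n          State,\r\n          Salary\r\nFROM @employeedetails\r\nWHERE JoiningYear > 2015;<\/pre>\n<p>Submit the script from the U-SQL toolbar. 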
By doing this, you will see the step-by-step execution of the script.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"624\" height=\"466\" class=\"wp-image-83788\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/c-users-suhas-downloads-image-png.png\" alt=\"C:\\Users\\suhas\\Downloads\\image.png\" \/><\/p>\n<p>After the script runs, you will see the compile summary shown below.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"451\" height=\"269\" class=\"wp-image-83789\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-7.png\" \/><\/p>\n<p>By running with the <em>Submit<\/em> button, you will see detailed information about the job. For instance, the compile view will show you how the data has been processed from step 1 to step n. It also shows compile-time details and provides easy navigation back to the script.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1632\" height=\"902\" class=\"wp-image-83790\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-8.png\" \/><\/p>\n<p>As mentioned, the output data should be written to the file <em>\/OutputFiles\/AverageSalaryResults.csv<\/em>. You can now go and check the same root directory where you created the <em>InputFiles<\/em> folder. 
You can also access the file from the local run results window by right-clicking the output result path and selecting the <em>Preview<\/em> option to view the file.<\/p>\n<p>As you can see, the resulting file, <em>AverageSalaryResults.csv<\/em>, contains the per-state average salary as shown in the image below.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-83791\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-9.png\" width=\"281\" height=\"204\" \/><\/p>\n<h2>Create and Run U-SQL Scripts Using Azure Data Lake Analytics Account<\/h2>\n<p>Running the scripts locally is often a good option since local execution doesn\u2019t cost you anything in Azure resources. In a real-life scenario, however, there are many situations where you will want to use the Azure Portal to execute the U-SQL script. In the next section, I will show you how to execute a U-SQL script with an Azure Cloud account.<\/p>\n<p>You will load your input file into Azure Data Lake Storage and then run a U-SQL script job using an Azure Data Lake Analytics account.<\/p>\n<p>First, log in to the <a href=\"https:\/\/portal.azure.com\/\">Azure Portal<\/a> and select <em>All services<\/em> from the left options panel. Now choose the <em>Analytics<\/em> options. Select <em>Data Lake Storage Gen1<\/em> and create a new <em>Data Lake Storage<\/em> account.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1598\" height=\"778\" class=\"wp-image-83792\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-10.png\" \/><\/p>\n<p>Configure your new Data Lake Storage, giving it a <em>Name<\/em>, your <em>Subscription<\/em>, and a <em>Resource<\/em> <em>Group<\/em>. Note that the name must be unique across Azure. 
You might want to create a new Resource Group so that cleaning up the resources is easy when you are done experimenting.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"629\" height=\"569\" class=\"wp-image-83793\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-11.png\" \/><\/p>\n<p>Once the Data Lake Storage has been deployed, you\u2019ll see it in the list.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1269\" height=\"289\" class=\"wp-image-83794\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-12.png\" \/><\/p>\n<p>Now that the storage is created, you will learn how to create an Azure Data Lake Analytics account. From the <em>Analytics<\/em> menu, select the <em>Data Lake Analytics<\/em> option.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1617\" height=\"755\" class=\"wp-image-83795\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-13.png\" \/><\/p>\n<p>Now add a new <em>Data<\/em> <em>Analytics<\/em> <em>account<\/em> and select the <em>Data<\/em> <em>Lake<\/em> <em>Storage<\/em> that you just created.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"624\" height=\"666\" class=\"wp-image-83796\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/c-users-suhas-downloads-image-1-png.png\" alt=\"C:\\Users\\suhas\\Downloads\\image (1).png\" \/><\/p>\n<p>Once completed, you will notice your new account has been added to the Data Analytics section.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1218\" height=\"292\" class=\"wp-image-83797\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-14.png\" \/><\/p>\n<p>&nbsp;<\/p>\n<h2>Uploading the Input File to Azure Data Lake Storage<\/h2>\n<p>Navigate back to the Azure Data Lake Storage that you just created and click on the <em>Data<\/em> <em>explorer<\/em> option. 
You will notice that there are two default folders (<em>catalog<\/em> and <em>system<\/em>) already present in the storage account. Add a new folder named <em>InputFiles<\/em> for the upload so that the experiment files are kept separate from the system files.<\/p>\n<p><strong><img loading=\"lazy\" decoding=\"async\" width=\"1562\" height=\"840\" class=\"wp-image-83798\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-15.png\" \/><\/strong><\/p>\n<p>After creating the new folder, click <em>Upload<\/em>, select the file, and click <em>Add selected files<\/em>.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1698\" height=\"399\" class=\"wp-image-83799\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-16.png\" \/><\/p>\n<p>The next step is to create your first job with the Data Analytics account.<\/p>\n<h2>Adding a New Job to Azure Data Analytics<\/h2>\n<p>After you navigate to the Azure Data Analytics account and create a new job, be sure to name the job so that you can keep track of it in the future.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1506\" height=\"525\" class=\"wp-image-83800\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-17.png\" \/><\/p>\n<p>For demonstration purposes, copy the same U-SQL script that you used in the previous section of the article. Make sure you use the correct paths for the input and output files.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1006\" height=\"697\" class=\"wp-image-83801\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-18.png\" \/><\/p>\n<p>Once everything is ready, click on <em>Submit<\/em>. You will see the status of the job on the left-hand side of the screen. 
This section contains all the details that you might want to know, such as the estimated cost, efficiency, and timestamps for all the steps involved in the process. It will take a few seconds to process the results. After every step has successfully executed, you will start seeing the Job graph. Yay! The job has run successfully, and you can see that everything has turned green.<\/p>\n<p><em><img loading=\"lazy\" decoding=\"async\" width=\"1401\" height=\"863\" class=\"wp-image-83802\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-19.png\" \/><\/em><\/p>\n<p>The job graph shows the details of the job: where the input file is transformed, and that the results are written to the AverageSalaryResults.csv file. The easiest way to access the file is by clicking AverageSalaryResults.csv in the graph.<\/p>\n<p>You can now download the file and edit it as per your business requirements. Various options are available, such as granting access to people who need to download the file, which is very helpful when you need to share your results within your organization.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"916\" height=\"384\" class=\"wp-image-83803\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/word-image-20.png\" \/><\/p>\n<p>Whenever you need to access the output file in the future, you can simply navigate to the Azure Data Lake Storage and then access the file in the OutputFiles folder that was created by the U-SQL job.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"624\" height=\"120\" class=\"wp-image-83804\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/c-users-suhas-downloads-unnamed-png.png\" alt=\"C:\\Users\\suhas\\Downloads\\unnamed.png\" \/><\/p>\n<p>If you created a new Resource Group when you set everything up, you can delete the resources after you are done experimenting. 
For this, navigate to the Resource Group you created and click the <em>Delete resource group<\/em> option. Make sure you have downloaded a copy of the input and output files before you delete the resource group.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"624\" height=\"307\" class=\"wp-image-83805\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/2019\/04\/c-users-suhas-downloads-image-3-png.png\" alt=\"C:\\Users\\suhas\\Downloads\\image (3).png\" \/><\/p>\n<h2>Summary<\/h2>\n<p>In this article, I showed how you can conveniently write U-SQL scripts using SQL and C# language types and constructs. ETL is made easy with these approaches to writing a U-SQL script and running it locally or in the Azure Cloud. You have seen how you can build your script locally using the Visual Studio setup, which is free of cost as it does not require any cloud resources. Alternatively, you can write and run the U-SQL script on an Azure Data Analytics account, which works on a pay-per-use basis, charging for the resources used while running the job and for the storage consumed.<\/p>\n<h2>References<\/h2>\n<p><a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/data-lake-analytics\/data-lake-analytics-u-sql-get-started\">https:\/\/docs.microsoft.com\/en-us\/azure\/data-lake-analytics\/data-lake-analytics-u-sql-get-started<\/a><\/p>\n<p><a href=\"https:\/\/azure.microsoft.com\/en-us\/solutions\/data-lake\/\">https:\/\/azure.microsoft.com\/en-us\/solutions\/data-lake\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Data analytics has become one of the most powerful domains in the world of data science. An enormous amount of data is being generated by each organization in every sector. Computer science has found solutions to store and process this data in a smart way through a distributed file system. One such example is Azure Data Lake. 
It uses the Hadoop Distributed File System, and to perform analytics on this data, Azure Data Lake storage is integrated with Azure Data Analytics Service and HDInsight. In this article, Suhas Pande will explain how to store data using Azure Data Lake and how to perform data analysis on it using U-SQL, a big data SQL and C# language. &hellip;<\/p>\n","protected":false},"author":320462,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[137091],"tags":[],"coauthors":[60465],"class_list":["post-83781","post","type-post","status-publish","format-standard","hentry","category-azure"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/83781","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/320462"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=83781"}],"version-history":[{"count":10,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/83781\/revisions"}],"predecessor-version":[{"id":83819,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/83781\/revisions\/83819"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=83781"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=83781"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=83781"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/c
oauthors?post=83781"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}