{"id":1114,"date":"2011-03-29T00:00:00","date_gmt":"2011-03-29T00:00:00","guid":{"rendered":"https:\/\/test.simple-talk.com\/uncategorized\/using-sql-server-integration-services-to-bulk-load-data\/"},"modified":"2021-05-17T18:33:21","modified_gmt":"2021-05-17T18:33:21","slug":"using-sql-server-integration-services-to-bulk-load-data","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/sql-server\/bi-sql-server\/using-sql-server-integration-services-to-bulk-load-data\/","title":{"rendered":"Using SQL Server Integration Services to Bulk Load Data"},"content":{"rendered":"<div id=\"pretty\">\n<p class=\"start\">In previous articles, I discussed ways in which you can use the bcp utility and the Transact-SQL statements BULK INSERT and INSERT&#8230;SELECT (with the OPENROWSET function) to bulk load external data into a SQL Server database. Another effective, and indeed the most flexible, method you can use to bulk load data is SQL Server Integration Services (SSIS). SSIS can read from a variety of data sources, data can be easily transformed in memory, and you can bulk load the data without needing to stage it. Because SSIS runs as a process separate from the database engine, many of the CPU-intensive operations can be performed without taxing the database engine, and you can run SSIS on a separate computer. As a result, you can easily scale out your bulk load operations in order to achieve extremely high throughput.<\/p>\n<p>SSIS provides several task and destination components that facilitate bulk load operations:<\/p>\n<ul>\n<li><b>SQL Server <\/b>destination  <\/li>\n<li><b>OLE DB <\/b>destination  <\/li>\n<li><b>BULK INSERT <\/b>task<\/li>\n<\/ul>\n<p>In this article, I provide an overview of each of these components and show you how they work. 
To demonstrate the components, I first created the following three tables in the AdventureWorks2008 database:<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">IF OBJECT_ID('Employees1', 'U') IS NOT NULL  DROP TABLE dbo.Employees1;\nCREATE TABLE dbo.Employees1\n(\n&#160; EmployeeID INT NOT NULL,\n&#160; FirstName NVARCHAR(50) NOT NULL,\n&#160; LastName NVARCHAR(50) NOT NULL,\n&#160; JobTitle NVARCHAR(50) NOT NULL,\n&#160; City NVARCHAR(30) NOT NULL,\n&#160; StateProvince NVARCHAR(50) NOT NULL,\n&#160; CountryRegion NVARCHAR(50) NOT NULL,\n&#160; CONSTRAINT PK_Employees1 PRIMARY KEY CLUSTERED (EmployeeID ASC)\n);\n\nIF OBJECT_ID('Employees2', 'U') IS NOT NULL\nDROP TABLE dbo.Employees2;\nCREATE TABLE dbo.Employees2\n(\n&#160; EmployeeID INT NOT NULL,\n&#160; FirstName NVARCHAR(50) NOT NULL,\n&#160; LastName NVARCHAR(50) NOT NULL,\n&#160; JobTitle NVARCHAR(50) NOT NULL,\n&#160; City NVARCHAR(30) NOT NULL,\n&#160; StateProvince NVARCHAR(50) NOT NULL,\n&#160; CountryRegion NVARCHAR(50) NOT NULL,\n&#160; CONSTRAINT PK_Employees2 PRIMARY KEY CLUSTERED (EmployeeID ASC)\n);\n\nIF OBJECT_ID('Employees3', 'U') IS NOT NULL\nDROP TABLE dbo.Employees3;\nCREATE TABLE dbo.Employees3\n(\n&#160; EmployeeID INT NOT NULL,\n&#160; FirstName NVARCHAR(50) NOT NULL,\n&#160; LastName NVARCHAR(50) NOT NULL,\n&#160; JobTitle NVARCHAR(50) NOT NULL,\n&#160; City NVARCHAR(30) NOT NULL,\n&#160; StateProvince NVARCHAR(50) NOT NULL,\n&#160; CountryRegion NVARCHAR(50) NOT NULL,\n&#160; CONSTRAINT PK_Employees3 PRIMARY KEY CLUSTERED (EmployeeID ASC)\n);\n<\/pre>\n<p>The tables are identical except for their names and the names of the primary key constraints. 
After I added the tables to the AdventureWorks2008 database (on a local instance of SQL Server 2008), I ran the following bcp command to create a text file in a local folder:<\/p>\n<pre>bcp \"SELECT BusinessEntityID, FirstName, LastName, JobTitle, City, StateProvinceName, CountryRegionName FROM AdventureWorks2008.HumanResources.vEmployee ORDER BY BusinessEntityID\" queryout C:\\Data\\EmployeeData.csv -c -t, -S localhost\\SqlSrv2008 -T<\/pre>\n<p>The bcp command retrieves data from the vEmployee view in the AdventureWorks2008 database and saves it to the EmployeeData.csv file in the folder C:\\Data. The data is saved as character data and uses a comma-delimited format. I use the text file as the source data in order to demonstrate the three SSIS components.<\/p>\n<p>I next created an SSIS package named BulkLoadPkg.dtsx and added the following two connection managers:<\/p>\n<ul>\n<li><b>OLE DB<\/b>. Connects to the AdventureWorks2008 database on the local instance of SQL Server 2008. I named this connection manager <b>AdventureWorks2008<\/b>.  <\/li>\n<li><b>Flat File<\/b>. Connects to the EmployeeData.csv file in the C:\\Data folder. I named this connection manager <b>EmployeeData<\/b>.<\/li>\n<\/ul>\n<p>After I added the connection managers, I added three <b>Sequence<\/b> containers to the control flow, one for each bulk insert operation. Each operation is associated with one of the tables I created above. For example, the first Sequence container will contain the components necessary to bulk load data into the Employees1 table.<\/p>\n<p>To each container I added an <b>Execute SQL<\/b> task that includes a TRUNCATE TABLE statement. The statement truncates the table associated with that bulk load operation. This allows me to execute the container or package multiple times in order to test different configurations, without having to be concerned about primary key violations. 
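<\/p>\n<p>For example, the statement in the first container&#8217;s <b>Execute SQL<\/b> task is simply the following (the tasks in the other two containers are identical except for the table name):<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">TRUNCATE TABLE dbo.Employees1;\n<\/pre>\n<p>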
I then added to each of the first two containers a <b>Data Flow<\/b> task, and to the third container I added a <b>Bulk Insert<\/b> task. Figure 1 shows the control flow of the BulkLoadPkg.dtsx package. Notice that I connected the precedence constraint from each <b>Execute SQL<\/b> task to the <b>Data Flow<\/b> or <b>Bulk Insert<\/b> task.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob1.jpg\" alt=\"1265-Bob1.jpg\" \/><\/p>\n<p class=\"caption\">Figure 1: Control flow showing three options for bulk loading data<\/p>\n<p>After I created the basic package, I configured the <b>Data Flow <\/b>task and <b>Bulk Insert<\/b> task components, which I describe in the following sections. You can download the <a href=\"http:\/\/www.simple-talk.com\/content\/file.ashx?file=4929\">completed package<\/a> from the speech-bubble at the head of this article. In the meantime, you can find details about how to create an SSIS package, configure the control flow, set up the <b>Execute SQL <\/b>task, or add tasks and containers in SQL Server Books Online. Now let&#8217;s look at how to work with the components necessary to bulk load the data.<\/p>\n<h1>SQL Server Destination<\/h1>\n<p>The first SSIS component that we&#8217;ll look at is the <b>SQL Server<\/b> destination. You should consider using this component within the data flow if you must first transform or convert the data and bulk load that data into a local instance of SQL Server. You cannot use the <b>SQL Server<\/b> destination to connect to a remote instance of SQL Server.<\/p>\n<p>To demonstrate how the <b>SQL Server<\/b> destination works, I added a <b>Flat File <\/b>source and <b>Data Conversion <\/b>transformation to the data flow. The <b>Flat File <\/b>source uses the <b>Flat File<\/b> connection manager <b>EmployeeData<\/b> to connect to the EmployeeData.csv file. 
<\/p>\n<p>The <b>Data Conversion<\/b> transformation converts the first column of the source data to a four-byte signed integer (an SSIS data type) and renames the output column to BusinessEntityID (to match the source column in the vEmployee view). The transformation converts the other columns to the Unicode string data type and again renames the columns to match the column names in the view. In addition, I&#8217;ve set the length to 50 for all the string columns except City, which I&#8217;ve set to 30. The <b>Data Conversion Transformation Editor<\/b> should now look similar to what is shown in Figure 2.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob2.jpg\" alt=\"1265-Bob2.jpg\" \/><\/p>\n<p class=\"caption\">Figure 2: Data Conversion Transformation Editor<\/p>\n<p>After I configured the <b>Data Conversion<\/b> transformation, I added a <b>SQL Server <\/b>destination to the data flow. The data flow should now look similar to the data flow in Figure 3.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob3.jpg\" alt=\"1265-Bob3.jpg\" \/><\/p>\n<p class=\"caption\">Figure 3: Data flow that uses the SQL Server destination component to load data<\/p>\n<p>Now I can configure the <b>SQL Server <\/b>destination. To do so, I double-clicked the component to launch the <b>SQL Destination <\/b>editor, which opens on the <b>Connection Manager <\/b>screen. I then selected the <b>OLE DB<\/b> connection manager I created when I first set up the SSIS package (<b>AdventureWorks2008<\/b>). Next, I selected <b>Employees1 <\/b>as the destination table. 
Figure 4 shows the <b>Connection Manager <\/b>screen after it&#8217;s been configured.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob4.jpg\" alt=\"1265-Bob4.jpg\" \/><\/p>\n<p class=\"caption\">Figure 4: Connection Manager screen of the SQL Server Destination editor<\/p>\n<p>Next, I want to ensure that my source columns properly sync up with my destination columns. I do this on the <b>Mappings<\/b> screen of the <b>SQL Destination <\/b>editor and map the columns I output in the <b>Data Conversion<\/b> transformation with the columns in the Employees1 target table, as shown in Figure 5.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob5.jpg\" alt=\"1265-Bob5.jpg\" \/><\/p>\n<p class=\"caption\">Figure 5: Mappings screen of the SQL Server Destination editor<\/p>\n<p>Notice that I mapped the BusinessEntityID source column to the EmployeeID destination column. All other column names should match between the source and destination.<\/p>\n<p>After you ensure that the mapping is correct, you can configure the bulk load options, which you do on the <b>Advanced<\/b> screen of the <b>SQL Destination <\/b>editor, shown in Figure 6. On this screen, you can specify such options as whether to maintain the source identity values, apply a table-level lock during a bulk load operation, or retain null values.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob6.jpg\" alt=\"1265-Bob6.jpg\" \/><\/p>\n<p class=\"caption\">Figure 6: Advanced screen of the SQL Server Destination editor<\/p>\n<p>As you can see, for this bulk load operation, I am choosing to retain identity and null values and to apply a table-level lock on the destination table during the bulk load operation. 
In addition, I&#8217;m not checking constraints or firing triggers during the operation. I&#8217;m also specifying that the data is ordered according to the EmployeeID column. Because I sorted the data (based on the ID) when I exported the data to the CSV file, I can now use the <b>Order columns<\/b> option to specify that sort order. This works just like the ORDER option of the BULK INSERT statement.<\/p>\n<p>You might have noticed that the <b>Advanced<\/b> screen does not include any options related to batch sizes. SSIS handles batch sizes differently from the other bulk-loading methods. By default, SSIS creates one batch per pipeline buffer and commits that batch when it flushes the buffer. You can override this behavior by modifying the <b>Maximum Insert Commit Size <\/b>property in the <b>SQL Server Destination<\/b> advanced editor. You access the editor by right-clicking the component and then clicking <b>Show Advanced Editor<\/b>. On the <b>Component Properties<\/b> tab, modify the property with the desired setting:<\/p>\n<ul>\n<li>A setting of 0 means the entire data set is committed in one large batch. This is the same as the BULK INSERT option of BATCHSIZE = 0.  <\/li>\n<li>A setting less than the buffer size but greater than 0 means that the rows are committed whenever the number is reached and also at the end of each buffer.  <\/li>\n<li>A setting greater than the buffer size is ignored. 
(The only way to work with batch sizes larger than the current buffer size is to modify the buffer size itself, which is done in the data flow properties.)<\/li>\n<\/ul>\n<p>For a complete description of how to configure the <b>SQL Server<\/b> destination, see the topic &#8220;SQL Server Destination&#8221; in SQL Server Books Online.<\/p>\n<h1>OLE DB Destination<\/h1>\n<p>The <b>OLE DB <\/b>destination is similar to the <b>SQL Server <\/b>destination except that your destination is not limited to a local instance of SQL Server (and you can connect to OLE DB target data sources other than SQL Server). One advantage of using this component is that you can run SSIS on a computer other than where the target table is located, which lets you more easily scale out your SSIS solution.<\/p>\n<p>To demonstrate how the <b>OLE DB <\/b>destination works, I set up a data flow similar to the one I set up for the <b>SQL Server <\/b>destination. As you can see in Figure 7, I&#8217;ve added a <b>Flat File <\/b>source and <b>Data Conversion<\/b> transformation, configured just as you saw above.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob7.jpg\" alt=\"1265-Bob7.jpg\" \/><\/p>\n<p class=\"caption\">Figure 7: Data flow that uses the OLE DB Destination component to load data<\/p>\n<p>After I added and configured the <b>Data Conversion<\/b> transformation, I added an <b>OLE DB <\/b>destination, opened the <b>OLE DB Destination<\/b> editor, and configured the settings on the <b>Connection Manager<\/b> screen, as shown in Figure 8.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob8.jpg\" alt=\"1265-Bob8.jpg\" \/><\/p>\n<p class=\"caption\">Figure 8: Connection Manager screen of the OLE DB Destination editor<\/p>\n<p>As you can see, I specified the <b>AdventureWorks2008<\/b> connection manager and 
<b>Employees2<\/b> as the target table. Notice also that in the <b>Data access mode<\/b> drop-down list, I selected the option <b>Table or view &#8211; fast load<\/b>. The <b>OLE DB <\/b>destination supports two fast-load options: the one I&#8217;ve selected, and <b>Table name or view name variable &#8211; fast load<\/b>, which lets you specify the name of the table or view within a variable. You must specify a fast-load option for data to be bulk inserted into the destination.<\/p>\n<p>When you select one of the fast-load options, you&#8217;re provided with options related to bulk loading the data, such as whether to maintain the identity or null values or whether to implement a table-level lock. Notice in Figure 8 that you can also specify the number of rows per batch, without having to access the advanced settings. As with the <b>SQL Server <\/b>destination, the rows per batch are tied to the SSIS buffer.<\/p>\n<div class=\"note\">\n<p class=\"note\"><b>NOTE:<\/b> The <b>OLE DB Destination<\/b> editor does not include an <b>Advanced<\/b> screen like the <b>SQL Destination<\/b> editor, but it does include an <b>Error Output<\/b> screen that lets you specify error handling options, something not available in the <b>SQL Destination<\/b> editor.<\/p>\n<\/div>\n<p>I next used the <b>Mappings<\/b> screen to ensure that my source columns properly sync up with my destination columns, as I did with the <b>SQL Server<\/b> destination. Figure 9 shows the mappings as they appear in the <b>OLE DB Destination<\/b> editor.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob9.jpg\" alt=\"1265-Bob9.jpg\" \/><\/p>\n<p class=\"caption\">Figure 9: Mappings screen of the OLE DB Destination editor<\/p>\n<p>Not all properties related to bulk loading are available through the <b>OLE DB Destination<\/b> editor. 
For instance, if you want to specify a sort order, as I did for the <b>SQL Server <\/b>destination, you must use the advanced editor. To access the editor, right-click the component and click <b>Show Advanced Editor<\/b>, and then select the <b>Component Properties<\/b> tab, shown in Figure 10.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob10.jpg\" alt=\"1265-Bob10.jpg\" \/><\/p>\n<p class=\"caption\">Figure 10: Advanced Editor for the OLE DB Destination<\/p>\n<p>Notice that the <b>FastLoadOptions<\/b> property setting is <b>TABLOCK, ORDER(EmployeeID ASC)<\/b>. The <b>TABLOCK <\/b>argument was added when I selected the <b>Table lock<\/b> option on the <b>Connection Manager<\/b> screen of the <b>OLE DB Destination <\/b>editor. However, I added the <b>ORDER <\/b>argument, along with the name of the column and the sort order (ASC). Also note that I used a comma to separate the <b>TABLOCK <\/b>argument from the <b>ORDER<\/b> argument. You can add other arguments as well. For a complete description of how to configure the <b>OLE DB<\/b> destination, see the topic &#8220;OLE DB Destination&#8221; in SQL Server Books Online.<\/p>\n<h1>Bulk Insert Task<\/h1>\n<p>Of those SSIS components related to bulk loading, the simplest to implement is the <b>Bulk Insert<\/b> task. What makes it so easy is the fact that you do not have to define a data flow. You define both the source and destination within the task itself. However, you can use the <b>Bulk Insert<\/b> task only for data that can be directly imported from the source text file. In other words, the data must not require any conversions or transformations and cannot originate from a source other than a text file.<\/p>\n<p>As you&#8217;ll recall from Figure 1, I added the <b>Bulk Insert<\/b> task to the third <b>Sequence<\/b> container, right after the <b>Execute SQL<\/b> task. 
To configure the task, double-click it to launch the <b>Bulk Insert Task<\/b> editor, which opens on the <b>General <\/b>screen, as shown in Figure 11.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob11.jpg\" alt=\"1265-Bob11.jpg\" \/><\/p>\n<p class=\"caption\">Figure 11: General screen of the Bulk Insert Task editor<\/p>\n<p>On the <b>General<\/b> screen, you simply provide a name and description for the task. It&#8217;s on the <b>Connection<\/b> screen, shown in Figure 12, that you specify how to connect to both the source and destination.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob12.jpg\" alt=\"1265-Bob12.jpg\" \/><\/p>\n<p class=\"caption\">Figure 12: Connection screen of the Bulk Insert Task editor<\/p>\n<p>As the figure indicates, I specified <b>AdventureWorks2008 <\/b>as the <b>OLE DB <\/b>connection manager. I also specified the Employees3 table as the target table.<\/p>\n<p>In the <b>Format<\/b> section of the <b>Connection <\/b>screen, I select the <b>Specify<\/b> option, which indicates that I will specify the format myself, rather than use a format file. If I wanted to use a format file, I would have selected the <b>Use File<\/b> option and then specified the format file to use. When you select the <b>Specify<\/b> option, you must also specify the row delimiter and column delimiter. In this case, I selected <b>{CR}{LF}<\/b> and <b>comma {,}<\/b>, respectively. These settings match how the source CSV file was created.<\/p>\n<p>Finally, in the <b>Source Connection<\/b> section of the <b>Connection<\/b> screen, I specify the name of the <b>Flat File<\/b> connection manager I created when I set up the package (<b>EmployeeData<\/b>). Note, however, that the <b>Bulk Insert<\/b> task editor uses the connection manager only to locate the source file. 
The task ignores other options you might have configured in the connection manager, which is why you must specify the row and column delimiters within the task.<\/p>\n<p>After I configured the <b>Connection<\/b> screen of the <b>Bulk Insert Task<\/b> editor, I selected the <b>Options<\/b> screen, as shown in Figure 13. The screen lets you configure the options related to your bulk load operation.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob13.jpg\" alt=\"1265-Bob13.jpg\" \/><\/p>\n<p class=\"caption\">Figure 13: Options screen of the Bulk Insert Task editor<\/p>\n<p>Notice that for the <b>DataFileType<\/b> property I selected <b>char<\/b> (character) because that&#8217;s how the source file was created. I also specified the EmployeeID column in the <b>SortedData<\/b> property because the source data was sorted by ID. I left most of the other properties with their default values. However, for the <b>Options<\/b> property, I selected specific bulk load options. To do so, I clicked the down arrow associated with the property to open a box of options, shown in Figure 14.<\/p>\n<p class=\"illustration\"><img decoding=\"async\" src=\"https:\/\/www.red-gate.com\/simple-talk\/wp-content\/uploads\/imported\/1265-Bob14.jpg\" alt=\"1265-Bob14.jpg\" \/><\/p>\n<p class=\"caption\">Figure 14: Selecting load options in the Bulk Insert Task editor<\/p>\n<p>As you can see, you can choose whether to fire triggers, check constraints, maintain null or identity values, or apply a table-level lock during the bulk load operation. The options you select are then listed in the <b>Options <\/b>box, with the options themselves separated by commas.<\/p>\n<p>Once you&#8217;ve configured your options, you&#8217;re ready to bulk load your data. 
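<\/p>\n<p>Under the covers, the <b>Bulk Insert<\/b> task issues a BULK INSERT statement built from these settings. Although the exact statement the task generates may differ, the configuration described here corresponds roughly to the following Transact-SQL (this sketch assumes the options selected in Figure 14 include retaining identity and null values and taking a table-level lock):<\/p>\n<pre class=\"theme:ssms2012 lang:tsql\">BULK INSERT AdventureWorks2008.dbo.Employees3\nFROM 'C:\\\\Data\\\\EmployeeData.csv'\nWITH\n(\n&#160; DATAFILETYPE = 'char',\n&#160; FIELDTERMINATOR = ',',\n&#160; ROWTERMINATOR = '\\\\n',  -- interpreted as {CR}{LF} for character data\n&#160; ORDER (EmployeeID ASC),\n&#160; KEEPIDENTITY,\n&#160; KEEPNULLS,\n&#160; TABLOCK\n);\n<\/pre>\n<p>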
For a complete description of how to configure the <b>Bulk Insert<\/b> task, see the topic &#8220;Bulk Insert Task&#8221; in SQL Server Books Online.<\/p>\n<h1>Bulk Inserting Data into a SQL Server Database<\/h1>\n<p>Clearly, the three SSIS components available for bulk loading data into a SQL Server database offer a great deal of flexibility in terms of loading the data and scaling out your solution. If you&#8217;re copying data out of a text file and that data does not need to be converted or transformed in any way, the <b>Bulk Insert <\/b>task is the simplest solution. However, you should use the <b>SQL Server<\/b> destination or <b>OLE DB <\/b>destination if you must perform any conversions or transformations or if you&#8217;re retrieving data from a source other than a text file. As for which of the two to choose, if you&#8217;re loading the data into a local instance of SQL Server and scaling out is not a consideration, you can probably stick with the <b>SQL Server <\/b>destination. On the other hand, if you want the ability to scale out your solution or you must load data into a remote instance of SQL Server, use the <b>OLE DB <\/b>destination. Keep in mind, however, that if your requirements are such that more than one scenario will work, you should consider testing them all and determining from there which solution is the most effective. You might find that simpler is not always better, or vice versa.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The most flexible way to bulk-load data into SQL Server is to use SSIS. It can also be the fastest and most scalable way of doing so. There are three different SSIS components that can be used to do this, so which do you choose? 
As always, Rob Sheldon is here to explain the basics.<\/p>\n<p>&hellip;<\/p>\n","protected":false},"author":221841,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143528],"tags":[4242,4379,5132,4150,5368,4151,4306],"coauthors":[],"class_list":["post-1114","post","type-post","status-publish","format-standard","hentry","category-bi-sql-server","tag-basics","tag-reporting-services","tag-rob-sheldon","tag-sql","tag-sql-integration-services","tag-sql-server","tag-ssis"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/1114","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/221841"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=1114"}],"version-history":[{"count":6,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/1114\/revisions"}],"predecessor-version":[{"id":91023,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/1114\/revisions\/91023"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=1114"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=1114"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=1114"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=1114"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}