{"id":73132,"date":"2016-02-17T10:40:03","date_gmt":"2016-02-17T10:40:03","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/uncategorized\/massive-deletes-part-2\/"},"modified":"2021-07-14T13:07:17","modified_gmt":"2021-07-14T13:07:17","slug":"massive-deletes-part-2","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/oracle-databases\/massive-deletes-part-2\/","title":{"rendered":"Massive Deletes &#8211; Part 2"},"content":{"rendered":"<p>In <a href=\"https:\/\/allthingsoracle.com\/massive-deletes-part-1\/\" target=\"_blank\">Part 1<\/a> of this short series I provided an informal description of a couple of scenarios where we might do a large-scale delete from a table. Without a concrete example though, it can be hard to imagine how the nature of the data deleted and the access paths available could affect the impact a large delete operation could have on your system, so I\u2019m going to spend most of this article talking about a couple of tests on a generated data set. The article may seem a bit long but quite a lot of the space will be taken up by tabular reports.<\/p>\n<h2>Sample Data<\/h2>\n<p>With the ever-increasing power and scale of hardware it becomes harder and harder to agree on what we might mean by a \u201clarge table\u201d or \u201cmassive delete\u201d \u2013 to one person 1 million rows might seem large, to another 100 million might seem fairly ordinary. I\u2019m going to go with a compromise 10 million rows representing an investment system that has been growing by 1M rows per year for 10 years and has reached a segment size of 1.6GB. This table is, of course, just one of several making up the full system and there will at some point be related data to worry about but, for the present, we\u2019re going to isolate just this table and consider only the table itself and the 4 indexes on it. 
Here's the code to generate the data set:

```sql
execute dbms_random.seed(0)

create table t1 (
        id              not null,
        date_open,      date_closed,
        deal_type,      client_ref,
        small_vc,       padding
)
nologging
as
with generator as (
        select  /*+ materialize cardinality(1e4) */
                rownum  id
        from    dual
        connect by
                rownum <= 1e4
)
select
        1e4 * (g1.id - 1) + g2.id                       id,
        trunc(
                add_months(sysdate, - 120) +
                        (1e4 * (g1.id - 1) + g2.id) * 3652 / 1e7
        )                                               date_open,
        trunc(
                add_months(
                        add_months(sysdate, - 120) +
                                (1e4 * (g1.id - 1) + g2.id) * 3652 / 1e7,
                        12 * trunc(dbms_random.value(1,6))
                )
        )                                               date_closed,
        cast(dbms_random.string('U',1) as varchar2(1))  deal_type,
        cast(dbms_random.string('U',4) as varchar2(4))  client_ref,
        lpad(1e4 * (g1.id - 1) + g2.id,10)              small_vc,
        rpad('x',100,'x')                               padding
from
        generator       g1,
        generator       g2
where
        g1.id <= 1e3
and     g2.id <= 1e4
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

alter table t1 add constraint t1_pk primary key(id) using index nologging;
create index t1_dt_open on t1(date_open) nologging;
create index t1_dt_closed on t1(date_closed) nologging;
create index t1_client on t1(client_ref) nologging;
```

It's not immediately obvious, but the code generates 10 million rows; the **date_open** starts 120 months (10 years, 3,652 days) in the past, and the arithmetic used to increment the value means the most recent entry falls on the current date. The **date_closed** is an integer number of years between 1 and 5 (inclusive) added to the **date_open** (the table is a simple-minded model of a system recording fixed-term investments).
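As a quick sanity check on the generated data (a sketch only – the exact boundary dates depend on when you run the script), you could confirm the 10-year spread of **date_open** and the 1-to-5-year terms before going any further:

```sql
select
        min(date_open)                                          first_open,
        max(date_open)                                          last_open,
        round(min(months_between(date_closed, date_open)) / 12) min_term_years,
        round(max(months_between(date_closed, date_open)) / 12) max_term_years
from
        t1
;
```

You would expect *first_open* to be close to `add_months(sysdate, -120)`, *last_open* to be (roughly) today, and the terms to run from 1 to 5 years.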
The **deal_type** is a randomly generated single uppercase character – producing 26 distinct values in roughly equal volumes of data; the **client_ref** is randomly generated as a fixed-length string of 4 uppercase characters, giving about half a million combinations and roughly 20 rows per combination.

> As a side note – I've generated the data set without using **rownum** anywhere in the high-volume select; this would make it possible for me to use parallel execution to generate the data more quickly (both the "level" and the "rownum" pseudo-columns limit Oracle's ability to use parallel execution). In this case, though, because I want the **id** column to model a sequentially generated value being stored in order of arrival, I'm running the code serially.

### Scale

On my laptop, running 12.1.0.2 on a Linux 5 VM, I got the following times for creating the data, gathering stats and creating the indexes:

```
Table creation: 7:06.40
Stats gathered: 0:10.54
PK added:       0:10.94
Index created:  0:10.79 (date_open)
Index created:  0:12.17 (date_closed)
Index created:  0:13.65 (client_ref)
```

This, of course, is where we have to start asking questions about realism and how different systems might behave. The virtual machine had 4GB of memory allocated (of which 1.6GB was set aside for the **memory_target**), and a nominal 2 CPUs running at 2.8GHz from a quad-core CPU – but possibly the most significant detail is that the machine has 1TB of solid-state disc, so it doesn't lose much time on physical I/O. The database was configured with 3 redo log groups of 200MB each (to encourage some delays from log file checkpoint and log file switch waits); the logs were duplexed, but the instance wasn't running in archivelog mode.
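To cross-check the scale figures on your own system, you can ask the data dictionary for the segment sizes (the object names match the creation script above):

```sql
select
        segment_name,
        blocks,
        round(bytes / 1024 / 1024)      size_mb
from
        user_segments
where
        segment_name in ('T1', 'T1_PK', 'T1_DT_OPEN', 'T1_DT_CLOSED', 'T1_CLIENT')
order by
        segment_name
;
```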
After stats collection the table block count was roughly 204,000 blocks, with 49 rows per block in most blocks; the PK and client_ref indexes had about 22,000 leaf blocks each, and the two date indexes about 26,500 leaf blocks each.

### Quality

When trying to use models like this it's important to question how close to reality they get: so how many flaws can you spot in what I've done so far? First of all, the **id** column is too perfect – the ids appear in perfect sequence in the table, while in real life, with concurrent single-row inserts, there would be a little "jitter", with ranges of consecutive values scattered over a small number of blocks; this is probably not terribly important. More significantly, I've created the indexes after inserting all the data, which means the indexes are physically as perfect as they could be (with 10% free space per leaf block). I really ought to have created the table and indexes empty, then run several concurrent scripts doing single-row inserts using a sequence to generate the ids – but the last time I did that sort of thing the run time jumped by a factor of 40 (that's around 5 hours per run, and I want to drop and recreate this data set several times).
Again, that's not really terribly significant, though I ought to remember that on a live system the average free space in index leaf blocks might be closer to 30% at any moment, with significant variation from block to block, and on a production system I might want to check [the state of the index leaf blocks](https://jonathanlewis.wordpress.com/index-efficiency-3/) of the date-based indexes from time to time – the **date_open** index in particular.

### Scenarios

Despite the fact that any timing is highly dependent on machine configuration and resources, and that the model is over-simplified, we can still get some interesting information from a few basic tests. Let's start with a few scenarios relating to business-based decisions:

- (a) Delete all deals that closed more than 5 years ago
- (b) Delete all the deals where the **client_ref** starts with 'A' – 'E'
- (c) Delete all deals that opened more than 5 years ago

We could imagine that option (a) is a basic archiving requirement – maybe the data has been copied to another table before deletion. Option (b) perhaps tells us that the **client_ref** has been (ab)used to encode some significant classification in the first letter of the reference, and we're splitting the data into two processing sets. Option (c) could be part of a process aimed at partitioning the data by **date_open** (though I'm not sure that it looks like a good way to go about partitioning in this case). Before doing anything that's likely to be expensive to an Oracle database it's always a good idea to see if you can visualize what Oracle will have to do, what the execution steps will be, and where the workload will appear.
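In SQL terms the three scenarios correspond to deletes of the following shape. These are broad-brush sketches only – a production job would also have to think about commit frequency, related tables and index maintenance, which is exactly what the rest of this series examines:

```sql
-- (a) archiving: deals that closed more than 5 years ago
delete from t1 where date_closed < add_months(sysdate, -60);

-- (b) classification split: client_ref starting with 'A' - 'E'
delete from t1 where substr(client_ref,1,1) < 'F';

-- (c) age-based: deals that opened more than 5 years ago
delete from t1 where date_open < add_months(sysdate, -60);
```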
Are these scenarios all the same and, if not, how do they differ? If you don't know your data and the impact of your delete, you can always ask the database – for example:

```sql
select
        rows_in_block,
        count(*)                                     blocks,
        rows_in_block * count(*)                     row_count,
        sum(count(*)) over (order by rows_in_block)                 running_blocks,
        sum(rows_in_block * count(*)) over (order by rows_in_block) running_rows
from
        (
        select
                dbms_rowid.rowid_relative_fno(rowid),
                dbms_rowid.rowid_block_number(rowid),
                count(*)                                rows_in_block
        from
                t1
--
--      where   date_open >= add_months(sysdate, -60)
--      where   date_open <  add_months(sysdate, -60)
--
--      where   date_closed >= add_months(sysdate, -60)
--      where   date_closed <  add_months(sysdate, -60)
--
--      where   substr(client_ref,1,1)  >= 'F'
--      where   substr(client_ref,1,1)  <  'F'
--
        group by
                dbms_rowid.rowid_relative_fno(rowid),
                dbms_rowid.rowid_block_number(rowid)
        )
group by
        rows_in_block
order by
        rows_in_block
;
```
The former could give you some idea of how many different blocks you will have to modify and how much work that will take, the latter would help you understand how the system might behave afterwards. With a little bit of SQL*Plus formatting here\u2019s the output after creating the data:<\/p>\n<pre>                                              Blocks           Rows\r\nRows per block   Blocks         Rows   Running total   Running total\r\n-------------- -------- ------------   -------------   -------------\r\n            27        1           27               1              27\r\n            49  203,877    9,989,973         203,878       9,990,000\r\n            50      200       10,000         204,078      10,000,000\r\n               --------\r\nsum             204,078\r\n<\/pre>\n<p>And here\u2019s the output showing what the <strong><em>surviving<\/em><\/strong> data would look like if we delete all rows that opened more than 5 years ago (i.e. use the predicate <em>date_open &gt;= add_months(sysdate, -60)<\/em>).<\/p>\n<pre>                                              Blocks           Rows\r\nRows per block   Blocks           Rows Running total  Running total\r\n-------------- -------- -------------- ------------- --------------\r\n            27        1             27             1             27\r\n            42        1             42             2             69\r\n            49  102,014      4,998,686       102,016      4,998,755\r\n               --------\r\nsum             102,016\r\n<\/pre>\n<p>That\u2019s rather nice \u2013 roughly speaking we\u2019ve emptied out half the blocks in the table and left the other half untouched. 
If we tried a "shrink space" now we would simply be copying the rows from the second half of the table to the first half – we'd generate a *huge* volume of undo and redo, but the clustering factor (or, more specifically, the **avg_data_blocks_per_key** representation of the **clustering_factor**) of any of the indexes would probably be pretty much unchanged. Alternatively, if we decided to leave the empty space as it is, any new data would simply start filling the empty space very efficiently (almost as if it were using newly allocated extents) from the start of the table – and again we'd see the clustering factor (i.e. **avg_data_blocks_per_key**) of the indexes pretty much unchanged.

Compare this with the consequences of deleting all the rows that closed more than 5 years ago (i.e. what is left if we use the predicate `date_closed >= add_months(sysdate, -60)`) – the report is rather longer:

```
                                              Blocks           Rows
Rows per block   Blocks           Rows Running total  Running total
-------------- -------- -------------- ------------- --------------
             1        5              5             5              5
             2       22             44            27             49
             3      113            339           140            388
             4      281          1,124           421          1,512
             5      680          3,400         1,101          4,912
             6    1,256          7,536         2,357         12,448
             7    1,856         12,992         4,213         25,440
             8    2,508         20,064         6,721         45,504
             9    2,875         25,875         9,596         71,379
            10    2,961         29,610        12,557        100,989
            11    2,621         28,831        15,178        129,820
            12    2,222         26,664        17,400        156,484
            13    1,812         23,556        19,212        180,040
            14    1,550         21,700        20,762        201,740
            15    1,543         23,145        22,305        224,885
            16    1,611         25,776        23,916        250,661
            17    1,976         33,592        25,892        284,253
            18    2,168         39,024        28,060        323,277
            19    2,416         45,904        30,476        369,181
            20    2,317         46,340        32,793        415,521
            21    2,310         48,510        35,103        464,031
            22    2,080         45,760        37,183        509,791
            23    1,833         42,159        39,016        551,950
            24    1,696         40,704        40,712        592,654
            25    1,769         44,225        42,481        636,879
            26    1,799         46,774        44,280        683,653
            27    2,138         57,726        46,418        741,379
            28    2,251         63,028        48,669        804,407
            29    2,448         70,992        51,117        875,399
            30    2,339         70,170        53,456        945,569
            31    2,286         70,866        55,742      1,016,435
            32    1,864         59,648        57,606      1,076,083
            33    1,704         56,232        59,310      1,132,315
            34    1,566         53,244        60,876      1,185,559
            35    1,556         54,460        62,432      1,240,019
            36    1,850         66,600        64,282      1,306,619
            37    2,131         78,847        66,413      1,385,466
            38    2,583         98,154        68,996      1,483,620
            39    2,966        115,674        71,962      1,599,294
            40    2,891        115,640        74,853      1,714,934
            41    2,441        100,081        77,294      1,815,015
            42    1,932         81,144        79,226      1,896,159
            43    1,300         55,900        80,526      1,952,059
            44      683         30,052        81,209      1,982,111
            45      291         13,095        81,500      1,995,206
            46      107          4,922        81,607      2,000,128
            47       32          1,504        81,639      2,001,632
            48        3            144        81,642      2,001,776
            49  122,412      5,998,188       204,054      7,999,964
               --------
sum             204,054
```

In this case roughly 60% of the blocks still hold their original 49 rows, but the rest of the blocks in the table have suffered anything from virtually no deletion to complete emptying (if you compare the total number of blocks in this report against the total in the first report you'll notice that 24 blocks must now be completely empty). How many of those blocks are now available for insert? Here's a quick calculation: most of our blocks held 49 rows and were filled to 90% (the default pctfree is 10), so a block will drop below the 75% mark (which is when ASSM will flag it as having free space) when it has fewer than 41 rows in it (49 * 75 / 90 = 40.8): of our 204,000 blocks, roughly 75,000 match that criterion (checking the "Blocks Running total" column at 40 rows per block).
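You don't have to eyeball the report to apply that arithmetic: the same block-summary query, with the survivor predicate in place, can count the blocks that would drop below the 41-row threshold directly (a sketch, using the same 41-row cut-off derived above):

```sql
select  count(*)        blocks_with_free_space
from    (
        select
                dbms_rowid.rowid_relative_fno(rowid)    file_no,
                dbms_rowid.rowid_block_number(rowid)    block_no,
                count(*)                                rows_in_block
        from
                t1
        where
                date_closed >= add_months(sysdate, -60)
        group by
                dbms_rowid.rowid_relative_fno(rowid),
                dbms_rowid.rowid_block_number(rowid)
        )
where
        rows_in_block < 41
;
```

Note that the handful of blocks the delete would empty completely can't appear in this count at all – they have no surviving rows to group by – so the result slightly understates the free space.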
In practical terms this means that if we shrink the table the **avg_data_blocks_per_key** will increase significantly; and if we leave the table as it is and allow new data to be inserted into the free space, the **avg_data_blocks_per_key** will also increase significantly.

### Index Space

The previous section showed some simple SQL that gives you an idea of how much space would appear (or how much data would remain) in the table – can we do something similar for indexes? Inevitably the answer is yes, but the code that answers the question *"what would this index look like after I deleted the data matching predicate X?"* is more expensive to run than the equivalent for tables. To start with, here's a simple piece of code to check just the current content of an index:

```sql
select
        rows_per_leaf, count(*) leaf_blocks
from    (
        select
                /*+ index_ffs(t1(client_ref)) */
                sys_op_lbid(94255, 'L', t1.rowid)       leaf_block,
                count(*)                                rows_per_leaf
        from
                t1
        where
                client_ref is not null
        group by
                sys_op_lbid(94255, 'L', t1.rowid)
        )
group by
        rows_per_leaf
order by
        rows_per_leaf
;
```

The call to **sys_op_lbid()** takes a table rowid as one of its inputs and returns something that looks like the rowid of the first row of a block – and that block's address is the address of the index leaf block holding the index entry for the supplied table rowid. The other two parameters are the **object_id** of the index (or of the partition/subpartition if the index is partitioned), and a flag identifying the specific use of the function – an 'L' in our case.
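The literal 94255 in the query above is simply the **object_id** of the t1_client index in my schema; to repeat the test you would substitute the value from your own data dictionary:

```sql
select  object_id, object_name
from    user_objects
where   object_name = 'T1_CLIENT'
and     object_type = 'INDEX'
;
```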
The hint to use an index fast full scan on the target index is necessary – any other path can return the wrong result – and the *"client_ref is not null"* predicate is necessary to ensure that the query can legally use the **index_ffs** path.

For my initial data set the index had 448 index entries in every leaf block except one (presumably the last, at 192 entries). As you can see, even this simple query has to be crafted to match each index – and because an index fast full scan is needed to get the correct result, we have to do something even more unusual to see how our massive delete would affect the index. Here's a sample showing how to find out what effect deleting the rows opened more than five years ago would have on the **client_ref** index:

```sql
select
        rows_per_leaf,
        count(*)                                        blocks,
        rows_per_leaf * count(*)                        row_count,
        sum(count(*)) over (order by rows_per_leaf)                 running_blocks,
        sum(rows_per_leaf * count(*)) over (order by rows_per_leaf) running_rows
from    (
        select
                /*+ leading(v1 t1) use_hash(t1) */
                leaf_block, count(*) rows_per_leaf
        from    (
                select
                        /*+ no_merge index_ffs(t1(client_ref)) */
                        sys_op_lbid(94255, 'L', t1.rowid)       leaf_block,
                        t1.rowid                                rid
                from
                        t1
                where
                        client_ref is not null
                )       v1,
                t1
        where
                t1.rowid = v1.rid
        and     date_open <  add_months(sysdate, -60)
        group by
                leaf_block
        )
group by
        rows_per_leaf
order by
        rows_per_leaf
;
```

As you can see, we start with an inline (hinted, non-mergeable) view that attaches the index leaf block id to every single table rowid, then join that set of rowids back to the table – joining by rowid and forcing a hash join. I've hinted the hash join because it's (probably) the most efficient strategy, but although I've put in a **leading()** hint I haven't included a hint about swapping (or not swapping) the join inputs – I'm letting the optimizer decide which of the two data sets is smaller and therefore more appropriate for building the hash table.

In this particular case the optimizer was able to use an index-only access path to find all the rowids for rows where **date_open** was earlier than 5 years ago; even so (partly because my **pga_aggregate_target** was relatively small and the hash join spilled to (solid-state) disc) the query took 3 minutes 15 seconds to complete, compared with 1.5 seconds for the previous query, which I happened to run while the entire index was cached.
Here's an extract of the output:

```
                                             Blocks           Rows
Rows_per_leaf   Blocks           Rows Running total  Running total
------------- -------- -------------- ------------- --------------
          181        2            362             3            458
          186        2            372             5            830
          187        2            374             7          1,204
          188        1            188             8          1,392
...
          210      346         72,660         2,312        474,882
          211      401         84,611         2,713        559,493
...
          221      808        178,568         8,989      1,921,410
          222      851        188,922         9,840      2,110,332
          223      832        185,536        10,672      2,295,868
...
          242      216         52,272        21,320      4,756,575
          243      173         42,039        21,493      4,798,614
          244      156         38,064        21,649      4,836,678
...
          265        1            265        22,321      5,003,718
          266        1            266        22,322      5,003,984
```

We're going to modify 22,322 leaf blocks – that's every single leaf block in the index – and the number of rows we delete from a leaf block varies from just one to 266. I've shown only a few rows at a time from the 83 lines of output, but you can probably still see that the pattern follows a normal distribution, centred on the 222 mark (50% of the 448 entries per leaf block).
If we do that delete it should be clear that we're going to expend a lot of effort updating this index; and even then the simple count of *"how many rows deleted per leaf block"* doesn't tell us the whole story about the work we'll do. We don't know whether we will (for example) delete all 266 index entries from the last block reported above at the same time, or whether we'll be jumping around the index extremely randomly and find ourselves continually revisiting that block to delete one index entry at a time. (We could write some SQL to get some idea of that aspect of the problem – it's not particularly difficult to do – but I'll leave that as an exercise for the reader.) The difference in workload could be dramatic, and may be enough to make us decide to drop (or mark unusable) and rebuild some indexes rather than maintaining them during the delete. So in the next installment we'll look at which aspects of the workload we need to consider, and how different deletion strategies can have a significant impact on that workload.