{"id":73160,"date":"2015-06-25T15:16:41","date_gmt":"2015-06-25T15:16:41","guid":{"rendered":"https:\/\/www.red-gate.com\/simple-talk\/uncategorized\/basics-of-the-cost-based-optimizer-part-3\/"},"modified":"2021-07-14T13:07:23","modified_gmt":"2021-07-14T13:07:23","slug":"basics-of-the-cost-based-optimizer-part-3","status":"publish","type":"post","link":"https:\/\/www.red-gate.com\/simple-talk\/databases\/oracle-databases\/basics-of-the-cost-based-optimizer-part-3\/","title":{"rendered":"Basics of the Cost Based Optimizer &#8211; Part 3"},"content":{"rendered":"<p>In the <a href=\"https:\/\/allthingsoracle.com\/basics-of-the-cost-based-optimizer-part-2\/\">second installment<\/a> of this series we looked at individual access paths for the tables in a simple join query to highlight an important flaw in the default model that the optimizer uses for indexes. Having taken advantage of a recent enhancement that addresses that flaw, we are now ready to move on to the problems that appear when the query is taken as a whole.<\/p>\n<h3>Reprise<\/h3>\n<p>Our query joins orders to order lines, and extracts data about orders placed in a given date range with order lines for a given product:<\/p>\n<pre>select\r\n        trunc(ord.date_ordered),\r\n        count(*),\r\n        sum(orl.quantity),\r\n        sum(orl.quantity * orl.unit_price) total_value\r\nfrom\r\n        orders          ord,\r\n        order_lines     orl\r\nwhere\r\n        ord.date_ordered &gt;= trunc(sysdate) - 7\r\nand     ord.date_ordered &lt;  trunc(sysdate)\r\nand     orl.order_id     =  ord.order_id\r\nand     orl.product_id   =  101234\r\ngroup by\r\n        trunc(ord.date_ordered)\r\norder by\r\n        total_value desc\r\n;\r\n\r\n<\/pre>\n<p>We have worked out that we have about 20,000 orders in the week (though the optimizer has estimated 11,500) that are very well packed near the end of the <em><strong>orders<\/strong><\/em> table, and about 1,000 order lines for the requested product 
(and the optimizer agrees with our estimate) scattered widely through the <em><strong>order_lines<\/strong><\/em> table.<\/p>\n<p>By setting the <em><strong>table_cached_blocks<\/strong><\/em> table preference to an appropriate value (on both tables though, for our purposes, the <em><strong>orders<\/strong><\/em> table was the critical one) we have allowed the optimizer to realize that the orders are well clustered so it is happy to use the (<em><strong>date_ordered<\/strong><\/em>) index to select the orders for the week; and even without adjustment the optimizer was happy to use the (<em><strong>product_id<\/strong><\/em>) index on the <em><strong>order_lines<\/strong><\/em> table to select all the order lines for the given product.<\/p>\n<h3>The best laid plans\u2026<\/h3>\n<p>So what happens when we try to join the tables and combine the two predicates? Here\u2019s the default plan on my system after (once again) setting the <em><strong>table_cached_blocks<\/strong><\/em> preference to a suitable value and collecting stats:<\/p>\n<pre>------------------------------------------------------------------------------------------------\r\n| Id  | Operation                       | Name         | Rows  | Bytes | Cost (%CPU)| Time     |\r\n------------------------------------------------------------------------------------------------\r\n|   0 | SELECT STATEMENT                |              |  1083 | 33573 |  1775   (1)| 00:00:09 |\r\n|   1 |  SORT ORDER BY                  |              |  1083 | 33573 |  1775   (1)| 00:00:09 |\r\n|   2 |   HASH GROUP BY                 |              |  1083 | 33573 |  1775   (1)| 00:00:09 |\r\n|*  3 |    FILTER                       |              |       |       |            |          |\r\n|*  4 |     HASH JOIN                   |              |  1083 | 33573 |  1773   (1)| 00:00:09 |\r\n|   5 |      TABLE ACCESS BY INDEX ROWID| ORDER_LINES  |  1083 | 18411 |  1064   (1)| 00:00:06 |\r\n|*  6 |       INDEX RANGE SCAN          | 
ORL_FK_PRD   |  1083 |       |     4   (0)| 00:00:01 |\r\n|   7 |      TABLE ACCESS BY INDEX ROWID| ORDERS       | 11525 |   157K|   709   (1)| 00:00:04 |\r\n|*  8 |       INDEX RANGE SCAN          | ORD_DATE_ORD | 11525 |       |    34   (3)| 00:00:01 |\r\n------------------------------------------------------------------------------------------------\r\n\r\nPredicate Information (identified by operation id):\r\n---------------------------------------------------\r\n   3 - filter(TRUNC(SYSDATE@!)&gt;TRUNC(SYSDATE@!)-7)\r\n   4 - access(\"ORL\".\"ORDER_ID\"=\"ORD\".\"ORDER_ID\")\r\n   6 - access(\"ORL\".\"PRODUCT_ID\"=101234)\r\n   8 - access(\"ORD\".\"DATE_ORDERED\"&gt;=TRUNC(SYSDATE@!)-7 AND\r\n              \"ORD\".\"DATE_ORDERED\"&lt;TRUNC(SYSDATE@!))\r\n\r\n<\/pre>\n<p>I\u2019m not going to try to explain every detail of the execution plans we\u2019re going to look at but, as you can see in operations 4 through 8, the optimizer has basically decided to run the two simpler queries I showed you in the previous installment and then join the two results using a hash join.<\/p>\n<p>To do a hash join, Oracle acquires the whole of the first data set (known as the \u201cbuild\u201d table) and builds an in-memory hash table for it based on generating hash values for the columns used in <em><strong>equality<\/strong><\/em> join predicates; then it accesses the second data set (known as the \u201cprobe\u201d table) one row at a time, applies the same hashing function to the equality join columns, and probes the in-memory hash table to see if there are any possible matches. 
If there is a matching hash value, Oracle then checks the actual values of all the join predicates to see if it is a proper match or simply a \u201chash collision\u201d.<\/p>\n<p style=\"padding-left: 60px\"><em>General Note: in an execution plan the first child (operation 5 in the above) of a hash join operation <strong>always<\/strong> identifies the build data set; the second child (operation 7 in the above) <strong>always<\/strong> identifies the probe data set.<\/em><\/p>\n<p>I\u2019ve said that the hash table will be in-memory, but if the hash table is too big to fit in the available memory Oracle has a mechanism for saving it to disc in batches (called partitions) and saving the probe table to disc in matching batches, then doing the join piecewise between matching batches. (See <a href=\"http:\/\/www.apress.com\/9781590596364\">Cost Based Oracle \u2013 Fundamentals<\/a>, Apress 2005, if you want a more detailed description.)<\/p>\n<p>In our case Oracle has decided to use the <em><strong>order_lines<\/strong><\/em> data as the build table because it thinks that the total volume of data it will fetch will be only 18KB compared to 157KB for the <em><strong>orders<\/strong><\/em> table. 
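<\/p>
<p>The build and probe phases described above can be sketched in a few lines. The following is a purely illustrative Python sketch of the algorithm (not Oracle internals); the sample rows simply echo the column names of the example schema:<\/p>
<pre>
```python
# Illustrative sketch of an in-memory hash join (not Oracle internals).
# Build phase: hash the smaller row source on the equality join column.
# Probe phase: scan the second row source one row at a time, look up the
# matching hash bucket, and re-check the real column values so that a
# mere hash collision is not reported as a join match.

def hash_join(build_rows, probe_rows, key):
    buckets = {}
    for row in build_rows:                        # build phase
        buckets.setdefault(hash(row[key]), []).append(row)
    matches = []
    for row in probe_rows:                        # probe phase
        for candidate in buckets.get(hash(row[key]), []):
            if candidate[key] == row[key]:        # reject hash collisions
                matches.append((candidate, row))
    return matches

# Tiny illustration: order_lines as the build input, orders as the probe input
order_lines = [{'order_id': 1, 'product_id': 101234},
               {'order_id': 2, 'product_id': 101234}]
orders      = [{'order_id': 1}, {'order_id': 3}]
pairs = hash_join(order_lines, orders, 'order_id')    # one matching pair
```
<\/pre>
<p>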
(Note: the build table is chosen based on the number of bytes acquired, not the number of rows.)<\/p>\n<p>From the arithmetic we can see that the optimizer assumes it will be able to keep that 18KB in memory: the cost of acquiring the data from <em><strong>order_lines<\/strong><\/em> is 1,064; the cost of acquiring the data from <em><strong>orders<\/strong><\/em> is 709; and the total cost of the hash join is 1,773 \u2013 which is exactly 1064 + 709. In other words the optimizer thinks that (to the nearest unit) the cost of doing the actual hash join is virtually free, i.e. it\u2019s going to take a little CPU and a little memory, but no extra disc I\/O.<\/p>\n<h3>Alternatives<\/h3>\n<p>In the first installment we considered two other possible plans \u2013 a nested loop join starting with <em><strong>orders<\/strong><\/em>, or a nested loop join starting with <em><strong>order_lines<\/strong><\/em>. We can get these plans through some fairly simple hinting:<\/p>\n<ul>\n<li>Nested loop starting with <em><strong>order_lines<\/strong><\/em>: \/*+ leading(orl ord) use_nl(ord) index(ord(order_id)) *\/<\/li>\n<li>Nested loop starting with <em><strong>orders<\/strong><\/em>: \/*+ leading(ord orl) use_nl(orl) index(orl(order_id)) *\/<\/li>\n<\/ul>\n<p>These hints tell the optimizer the single join order it is allowed to consider, tell it the join mechanism it must use when joining to the identified table, and tell it to use an index that starts with a specific column when it joins to that table. 
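<\/p>
<p>The nested loop mechanism itself can be sketched in the same purely illustrative Python style (again, not Oracle internals); the dictionary standing in for the <em><strong>ord_pk<\/strong><\/em> index is an invented stand-in, and the sample rows simply echo the example schema:<\/p>
<pre>
```python
# Illustrative sketch of a nested loop join (not Oracle internals).
# For each row from the outer (driving) row source, probe the inner
# table through an index-like lookup keyed on the join column.

def nested_loop_join(outer_rows, inner_index, key):
    matches = []
    for outer in outer_rows:                  # one probe per driving row
        for inner in inner_index.get(outer[key], []):
            matches.append((outer, inner))
    return matches

# Tiny illustration: drive from order_lines, probe orders by order_id
order_lines  = [{'order_id': 1, 'product_id': 101234},
                {'order_id': 2, 'product_id': 101234}]
orders_by_id = {1: [{'order_id': 1}]}         # stand-in for the ord_pk index
pairs = nested_loop_join(order_lines, orders_by_id, 'order_id')
```
<\/pre>
<p>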
Here\u2019s the resulting plan (minus predicate section) for the first of the two sets of hints:<\/p>\n<pre>------------------------------------------------------------------------------------------------\r\n| Id  | Operation                        | Name        | Rows  | Bytes | Cost (%CPU)| Time     |\r\n------------------------------------------------------------------------------------------------\r\n|   0 | SELECT STATEMENT                 |             |  1083 | 33573 |  3232   (1)| 00:00:17 |\r\n|   1 |  SORT ORDER BY                   |             |  1083 | 33573 |  3232   (1)| 00:00:17 |\r\n|   2 |   HASH GROUP BY                  |             |  1083 | 33573 |  3232   (1)| 00:00:17 |\r\n|*  3 |    FILTER                        |             |       |       |            |          |\r\n|   4 |     NESTED LOOPS                 |             |  1083 | 33573 |  3230   (1)| 00:00:17 |\r\n|   5 |      NESTED LOOPS                |             |  1083 | 33573 |  3230   (1)| 00:00:17 |\r\n|   6 |       TABLE ACCESS BY INDEX ROWID| ORDER_LINES |  1083 | 18411 |  1064   (1)| 00:00:06 |\r\n|*  7 |        INDEX RANGE SCAN          | ORL_FK_PRD  |  1083 |       |     4   (0)| 00:00:01 |\r\n|*  8 |       INDEX UNIQUE SCAN          | ORD_PK      |     1 |       |     1   (0)| 00:00:01 |\r\n|*  9 |      TABLE ACCESS BY INDEX ROWID | ORDERS      |     1 |    14 |     2   (0)| 00:00:01 |\r\n------------------------------------------------------------------------------------------------\r\n\r\n<\/pre>\n<p>An overriding feature here is that the cost of the plan is higher than the cost of the hash join; so it\u2019s not going to be taken by default.<\/p>\n<p>There is a little glitch in the figures produced by this execution plan, so bear with me while I explain the arithmetic. 
The join mechanism shown is one of the newer nested loop mechanisms (known as &#8220;NLJ Batching&#8221;) that Oracle can use, and it ends up being reported as two nested loop operations, the first to access the index, the second to access the table.<\/p>\n<p>Unfortunately, the Rows, Bytes, Cost and Time figures shown against the first nested loop (operation 5) report the figures that are due to the completion of the second nested loop (operation 4). Ideally the figures at operation 5 should read more like:<\/p>\n<pre>------------------------------------------------------------------------------------------------\r\n| Id  | Operation                        | Name        | Rows  | Bytes | Cost (%CPU)| Time     |\r\n------------------------------------------------------------------------------------------------\r\n|   5 |      NESTED LOOPS                |             |  1083 | 24909 |  2147   (1)| 00:00:11 |\r\n------------------------------------------------------------------------------------------------\r\n<\/pre>\n<p>So we\u2019re going to ignore that line and go straight to the figures at operation 4 (optional exercise: after reading the next couple of paragraphs, check that this theoretical line makes sense).<\/p>\n<p>Essentially the plan is telling us that for EACH ROW we receive at operation 6 from the table access to <em><strong>order_lines<\/strong><\/em> we\u2019re going to probe the <em><strong>ord_pk<\/strong><\/em> index, which the optimizer estimates will require one physical read \u2013 hence the 1 in the cost column of operation 8 \u2013 and then visit the <em><strong>orders<\/strong><\/em> table for a second physical read \u2013 hence the 2 in the cost column of operation 9. (Technically we could argue that, for complete consistency, that really should be a 1; the 2 comes from the same historic hang-over that has allowed operation 5 to display misleading figures.)<\/p>\n<p>With this description in hand we can see where the total cost of the nested 
loop comes from: it\u2019s the cost of getting the data from <em><strong>order_lines<\/strong><\/em> plus the cost of getting a row from the orders table 1,083 times, i.e.: 1064 + 1083 * 2 = 3,230. Let\u2019s flush the buffer cache and run the query to check how sensible this model is. Here\u2019s the execution plan pulled from memory after running with <em><strong>rowsource_execution_statistics<\/strong><\/em> enabled (though I&#8217;ve re-arranged the column order and deleted the memory-related information):<\/p>\n<pre>--------------------------------------------------------------------------------------------------------------------------------\r\n| Id  | Operation                        | Name        | Starts | E-Rows |  A-Rows |   A-Time   | Buffers | Reads  |Cost (%CPU)|\r\n--------------------------------------------------------------------------------------------------------------------------------\r\n|   0 | SELECT STATEMENT                 |             |      1 |        |       7 |00:00:00.76 |    4829 |   3189 | 3232 (100)|\r\n|   1 |  SORT ORDER BY                   |             |      1 |   1083 |       7 |00:00:00.76 |    4829 |   3189 | 3232   (1)|\r\n|   2 |   HASH GROUP BY                  |             |      1 |   1083 |       7 |00:00:00.76 |    4829 |   3189 | 3232   (1)|\r\n|*  3 |    FILTER                        |             |      1 |        |      17 |00:00:00.76 |    4829 |   3189 |           |\r\n|   4 |     NESTED LOOPS                 |             |      1 |   1083 |      17 |00:00:00.76 |    4829 |   3189 | 3230   (1)|\r\n|   5 |      NESTED LOOPS                |             |      1 |   1083 |    1215 |00:00:00.37 |    3623 |   2008 | 3230   (1)|\r\n|   6 |       TABLE ACCESS BY INDEX ROWID| ORDER_LINES |      1 |   1083 |    1215 |00:00:00.10 |    1191 |   1191 | 1064   (1)|\r\n|*  7 |        INDEX RANGE SCAN          | ORL_FK_PRD  |      1 |   1083 |    1215 |00:00:00.01 |       5 |      5 |    4   (0)|\r\n|*  8 |       INDEX 
UNIQUE SCAN          | ORD_PK      |   1215 |      1 |    1215 |00:00:00.26 |    2432 |    817 |    1   (0)|\r\n|*  9 |      TABLE ACCESS BY INDEX ROWID | ORDERS      |   1215 |      1 |      17 |00:00:00.38 |    1206 |   1181 |    2   (0)|\r\n--------------------------------------------------------------------------------------------------------------------------------\r\n\r\n<\/pre>\n<p>As we saw before, the optimizer\u2019s estimate of the number of <em><strong>order_lines<\/strong><\/em> rows was close to, but didn\u2019t match, our superior knowledge. We then see that for each (A-)row at operation 6 we have started operations 8 and 9, finding one row each time on the index probe, but only occasionally finding a suitable row (the few rows for the last week) on visiting the table. Note that \u2013 allowing for rounding errors \u2013 <em>starts * E-rows = A-rows<\/em>; the significant rounding error is that the optimizer\u2019s \u201creal (unrounded)\u201d estimate of E-rows for the table access was far less than one per visit.<\/p>\n<p>The most significant feature of the plan is the close correspondence between the <em><strong>Reads<\/strong><\/em> and the <em><strong>Cost<\/strong><\/em>. Everything we do is single-block reads, so we don\u2019t have to make any \u201cmulti-block\u201d adjustment to get from the read count to the optimizer\u2019s cost; there should be a one-to-one match. 
This is (reasonably accurately) what we see.<\/p>\n<p style=\"padding-left: 30px\">First we read 1,191 blocks to get the 1,215 <em><strong>order_lines<\/strong><\/em> A-rows; that\u2019s fairly close to Oracle\u2019s estimate of 1,064 reads to get a predicted 1,083 rows.<\/p>\n<p style=\"padding-left: 30px\">For each of those <em><strong>order_lines<\/strong><\/em> we access the <em><strong>ord_pk<\/strong><\/em> index \u2013 the optimizer predicted a single read each time, and we see a total of 817 reads, so some (though not most) of the index leaf blocks got into the cache and were re-used as the query ran.<\/p>\n<p style=\"padding-left: 30px\">For each index access we then visited the <em><strong>orders<\/strong><\/em> table; again we made 1,215 visits, and Oracle reports a total of 1,181 physical reads for this step \u2013 we got hardly any benefit from revisiting blocks in the cache here. These two steps demonstrate a fairly general principle, by the way: when doing lots of accesses to randomly scattered data the index probes tend to get a lot more caching benefit than the table visits \u2026 except in special cases.<\/p>\n<p>Notice how the Reads column corroborates my comments about the \u201ccorrect\u201d values that should appear in the cost, rows, and bytes columns of operation 5. The reads for operation 5 are 1,191 + 817 = 2,008 (compared to my hypothetical 2,147); then the reads in operation 4 come to 2,008 + 1,181 = 3,189, closely matching the predicted cost of 3,230. Although the nested loop is following a new algorithm, the code to supply the numbers to the execution plan is still following the old (8i and earlier) nested loop model.<\/p>\n<p>A key performance feature of this plan (which also makes clearer a flaw in the hash join plan) is that we have to do a lot of random physical reads to acquire data we don\u2019t need. 
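<\/p>
<p>The arithmetic above condenses into the optimizer\u2019s two join cost formulas. Below is a purely illustrative Python sketch (not Oracle code) that plugs in the estimated figures quoted in this article:<\/p>
<pre>
```python
# Illustrative sketch of the two join cost models discussed here
# (a simplified model, not Oracle code).

def nested_loop_cost(outer_cost, outer_rows, inner_cost_per_row):
    # Classic model: acquire the outer row source once, then pay the
    # inner access cost once for every outer row.
    return outer_cost + outer_rows * inner_cost_per_row

def hash_join_cost(build_cost, probe_cost):
    # When the build table fits in memory the join itself is nearly free:
    # little more than the cost of acquiring the two inputs.
    return build_cost + probe_cost

# Figures from the plans above: order_lines costs 1,064 for 1,083 rows,
# and each probe into orders is costed at 2 reads.
nl_cost = nested_loop_cost(1064, 1083, 2)   # 3230, matching operation 4
hj_cost = hash_join_cost(1064, 709)         # 1773, matching the hash join plan
```
<\/pre>
<p>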
After doing 1,064 physical reads to collect <em><strong>order_lines<\/strong><\/em> (a workload done by both plans so far), this plan did a further 1,998 physical reads to find 1,215 rows, then discarded all but 17 of them. That\u2019s a lot of wasted work.<\/p>\n<h3>Trade-offs<\/h3>\n<p>If the calculations that appear in the first two plans are wonderfully accurate, what about the third? Here\u2019s the plan (again with some cosmetic re-arrangement) after running the query with rowsource execution statistics enabled:<\/p>\n<pre>---------------------------------------------------------------------------------------------------------------------------------\r\n| Id  | Operation                        | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Cost (%CPU)|\r\n---------------------------------------------------------------------------------------------------------------------------------\r\n|   0 | SELECT STATEMENT                 |              |      1 |        |      7 |00:00:01.30 |   38533 |   1201 | 35339 (100)|\r\n|   1 |  SORT ORDER BY                   |              |      1 |   1083 |      7 |00:00:01.30 |   38533 |   1201 | 35339   (1)|\r\n|   2 |   HASH GROUP BY                  |              |      1 |   1083 |      7 |00:00:01.30 |   38533 |   1201 | 35339   (1)|\r\n|*  3 |    FILTER                        |              |      1 |        |     17 |00:00:01.30 |   38533 |   1201 |            |\r\n|   4 |     NESTED LOOPS                 |              |      1 |   1083 |     17 |00:00:01.30 |   38533 |   1201 | 35337   (1)|\r\n|   5 |      NESTED LOOPS                |              |      1 |  11525 |  23628 |00:00:00.83 |   19182 |    635 | 35337   (1)|\r\n|   6 |       TABLE ACCESS BY INDEX ROWID| ORDERS       |      1 |  11525 |  19667 |00:00:00.44 |   18425 |    570 |   709   (1)|\r\n|*  7 |        INDEX RANGE SCAN          | ORD_DATE_ORD |      1 |  11525 |  19667 |00:00:00.07 |      55 |     55 |    34   
(3)|\r\n|*  8 |       INDEX RANGE SCAN           | ORL_PK       |  19667 |      1 |  23628 |00:00:00.17 |     757 |     65 |     2   (0)|\r\n|*  9 |      TABLE ACCESS BY INDEX ROWID | ORDER_LINES  |  23628 |      1 |     17 |00:00:00.32 |   19351 |    566 |     3   (0)|\r\n---------------------------------------------------------------------------------------------------------------------------------\r\n\r\n<\/pre>\n<p>If we start by examining the cost, we can see the basic nested loop algorithm at work. Operations 6 and 7 are predicted to supply 11,525 rows at a cost of 709; in fact we got 19,667 rows by reading 570 blocks \u2013 so a reasonably good estimate from the optimizer. The table access with index range scan did require 18,425 buffer visits, though, which translates into CPU time and potential latch contention.<\/p>\n<p>For each row we do an index (range) probe into the primary key index on the order_lines table, then visit the order_lines table typically once but occasionally a few more times, for a total of 23,628 visits, discarding all but 17 rows from the table.<\/p>\n<p>Because the order_lines data is very well clustered by date (and order_id) we get a terrific caching benefit as this query is running, so we read just 65 index blocks and 566 table blocks to get the data \u2013 a total read requirement of 1,201 physical blocks.<\/p>\n<p>Looking at the optimizer\u2019s costing, though: it has allowed 2 physical reads for <em><strong>each<\/strong><\/em> probe of the order_lines index, and one more read for the table visit. 
This would be a reasonable estimate for a single order, of course, but the optimizer has predicted 11,525 orders and simply multiplied the estimated cost by the number of rows for a total incremental cost of 34,575 \u2013 giving a total cost of 34,575 + 709 = 35,284 (with a small rounding error introducing a reported cost estimate of 35,337).<\/p>\n<p>Critically, the optimizer has not allowed for the massive \u201cself-caching\u201d benefit that has to appear as the query runs. The worst-case scenario here is that we have to read all the blocks for orders placed in the last 7 days and all the blocks for order_lines created in the last 7 days \u2013 and both sets of data are well packed, for a total of 1,201 blocks. Add to that the fact that the orders placed in the last 7 days are likely to be fairly well cached before we run the query (since order processing, packing, delivery, invoicing, payment receipt etc. all tend to happen over the course of a few days following order placement) and we have a massive overestimate of the cost of this query. Oracle doesn\u2019t understand the order processing business, and its simplistic nested loop model takes a reasonable cost for \u2018one random event\u2019 and multiplies it up by \u2018number of driving rows\u2019 without realizing that the nature of the selection from the driving table can eliminate a huge degree of randomness.<\/p>\n<h3>Summary<\/h3>\n<p>For comparative purposes, here are the runtime activity summaries:<\/p>\n<ul>\n<li>Hash join (not shown): 1,191 + 570 = 1,761 physical reads; 19,600 buffer gets; cost = 1,775<\/li>\n<li>NLJ from order_lines: 1,191 + 1,998 = 3,189 physical reads; 4,829 buffer gets; cost = 3,232<\/li>\n<li>NLJ from orders: 570 + 631 = 1,201 physical reads; 38,533 buffer gets; cost = 35,339<\/li>\n<\/ul>\n<p>Ironically, the costing is a good model for two of the three plans, while the third plan, with a terrible cost estimate, is likely to be the best plan.<\/p>\n<p>The cost of a 
typical hash join (when it is expected to complete in memory) is:<\/p>\n<p style=\"padding-left: 30px\">Cost of acquiring first data set + cost of acquiring second data set + a little bit<\/p>\n<p>The cost of <em>a <span style=\"text-decoration: underline\">typical<\/span><\/em> nested loop join is:<\/p>\n<p style=\"padding-left: 60px\">Cost of acquiring first data set + (rows in first data set * cost of acquiring one related set of items from the second data set)<\/p>\n<p>Because Oracle cannot be fully informed about the prior caching of data and the self-caching that goes on as a query progresses, it is very easy for Oracle to overestimate the total amount of physical I\/O that will take place as a nested loop join executes. This is a defect that cannot easily be overcome, except through hinting (or, to do it in the approved manner, supplying an SQL Baseline or Outline).<\/p>\n<h3>Footnote<\/h3>\n<p>If anyone is thinking at this point of fiddling with the parameters <em><strong>optimizer_index_caching<\/strong><\/em> (what percentage of ALL indexes should I assume to be cached) and <em><strong>optimizer_index_cost_adj<\/strong><\/em> (effectively, for values between 0 and 100, what percentage of ALL tables will NOT be cached when accessed through an index), then think again. When I set optimizer_index_caching = 95 and optimizer_index_cost_adj = 5 the default path was still the hash join; even when I set the parameters to 99 and 1 respectively the hash join still had the lowest cost.<\/p>\n<p><a href=\"http:\/\/jonathanlewis.wordpress.com\/cbo-series\/\">&#8211;&gt; Catalogue of current articles in CBO series.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the second installment of this series we looked at individual access paths for the tables in a simple join query to highlight an important flaw in the default model that the optimizer uses for indexes. 
Having taken advantage of a recent enhancement that addresses that flaw we are now ready to move onto the problems that appear with t&hellip;<\/p>\n","protected":false},"author":101205,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[143533],"tags":[],"coauthors":[],"class_list":["post-73160","post","type-post","status-publish","format-standard","hentry","category-oracle-databases"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/73160","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/users\/101205"}],"replies":[{"embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/comments?post=73160"}],"version-history":[{"count":1,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/73160\/revisions"}],"predecessor-version":[{"id":91653,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/posts\/73160\/revisions\/91653"}],"wp:attachment":[{"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/media?parent=73160"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/categories?post=73160"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/tags?post=73160"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.red-gate.com\/simple-talk\/wp-json\/wp\/v2\/coauthors?post=73160"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}