Problems with Indexes

In my book “Cost Based Oracle – Fundamentals” (Apress 2005) I described a problem with the cost calculation for the index fast full scan:

When you generate statistics for a table, one of the results is the number of blocks below the high water mark. When you generate the statistics for an index, you get the number of leaf blocks and the index level; but you don’t get any information about the number of branch blocks, or the index segment’s high water mark (HWM).
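You can see the difference by comparing the statistics columns that the data dictionary exposes for tables and indexes. A quick sketch (the table and index names here are hypothetical):

```sql
rem  Table statistics include the block count below the high water mark:
select  blocks, num_rows
from    user_tables
where   table_name = 'T1'               -- hypothetical test table
;

rem  Index statistics give you the leaf block count and the height (blevel),
rem  but nothing about branch blocks or the segment's high water mark:
select  blevel, leaf_blocks, num_rows
from    user_indexes
where   index_name = 'T1_I1'            -- hypothetical index name
;
```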

So what number does the optimizer use as the basis for the cost of the index fast full scan? The answer seems to be the number of leaf blocks – which is fairly reasonable, because in a nice clean, randomly generated index, with no catastrophic updates and deletes, the number of leaf blocks is probably within one percent of the total number of blocks below the high water mark.

It is possible, though, that in some scenarios you could push an index into an unusual state where the number of populated leaf blocks was much smaller than the number of blocks below the high water mark, resulting in an inappropriately low index fast full scan cost when a range scan would be more efficient.

The obvious example where it might be better to use the high water mark rather than the leaf block count is the index that’s been used to emulate a FIFO (first in, first out) queue structure. When using an index as a FIFO queue it’s possible to end up with a very large index structure with virtually no data in it except for a few blocks at the high value end. In this state you probably want Oracle to do an index range scan of the full section of the index rather than an index fast full scan of the whole index – but if the cost of the index fast full scan is dictated by the tiny number of populated leaf blocks rather than the total number of index blocks you could get unlucky (and at this point users of Advanced Queueing, in particular, may start to remember odd cases where they’ve seen unexpected index fast full scans).

I don’t think there’s any generic claim one could make about whether counting (populated) leaf blocks or looking at the high water mark is the more appropriate thing to do in all circumstances – but it seems to me that if you’re going to do an index fast full scan you will actually read through all the blocks that have held data, whether or not they are currently populated, so the cost calculation probably ought to allow for reading all the empty blocks.

Surprisingly, since 10.2.0.5 (and probably a little earlier) it has been possible to dictate to Oracle whether the statistics should record the high water mark or the number of currently populated leaf blocks as the leaf_blocks value, but it’s a feature that has slipped into place with very little documentation. The feature is enabled by a fix control (parameter “_fix_control”) that gets mentioned in only a couple of places on MoS.

If you want to change the way Oracle behaves you need to set the parameter before gathering stats on the index. Of course you could set it before gathering stats on a table – in which case it applies to all indexes on the table for that gather – or you could set it permanently at the session, system, or parameter file level.
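As a minimal sketch of the mechanism – assuming (from my reading of MoS) that 5099019 is the relevant fix control number, and using a hypothetical index name – the session-level approach looks like this:

```sql
rem  Sketch only: 5099019 is the fix control number I believe applies --
rem  check MoS for confirmation on your version before relying on it.

alter session set "_fix_control" = '5099019:1';

begin
        dbms_stats.gather_index_stats(
                ownname => user,
                indname => 'T1_I1'      -- hypothetical index
        );
end;
/

rem  Revert to the default behaviour for subsequent gathers:
alter session set "_fix_control" = '5099019:0';
```

Setting the value to 1 before the gather makes leaf_blocks record the high water mark; setting it back to 0 restores the default of counting currently populated leaf blocks.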

Here’s a script demonstrating the mechanism.

There’s nothing particularly cunning or clever about this code, although I have used a large pctfree on the index so that we can see a large cost for the index fast full scan. The only special bit, though, is that I’ve included the cascade option when gathering stats on the table and I’ve explicitly used 100% instead of the auto_sample_size that I’d usually expect to see as the standard for 11g and beyond. And of course there is that one-liner where I set the “_fix_control” parameter to the suitable value.
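The original script isn’t reproduced here, but a hedged reconstruction of the kind of test described – hypothetical object names, row counts and pctfree chosen purely for illustration, and the fix control number assumed to be 5099019 – would look something like this:

```sql
rem  Hedged reconstruction -- not the author's original script.

create table t1
as
select  rownum          id,
        rpad('x',100)   padding
from    dual
connect by level <= 100000
;

create index t1_i1 on t1(id) pctfree 90;   -- large pctfree inflates the index

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                estimate_percent => 100,   -- explicit 100%, not auto_sample_size
                cascade          => true   -- gather index stats in the same call
        );
end;
/

delete from t1 where id <= 90000;          -- empty roughly 90% of the leaf blocks
commit;

rem  Second gather: leaf_blocks should drop to reflect populated blocks only

alter session set "_fix_control" = '5099019:1';   -- assumed fix control number

rem  Third gather: leaf_blocks should now record the index high water mark
```

Between gathers you would check leaf_blocks in user_indexes and the cost of a query that can only use an index fast full scan.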

Here, with a little editing to remove redundant data and a massive re-arrangement of the order of output, are the results from running the code on 11g (specifically 11.2.0.4). I got the same results from 12.1.0.2 and 10.2.0.5 but I don’t have any earlier versions of 10g to check. First the report of leaf_blocks from the three sections:

As you can see, after my delete of 90% of the rows the number recorded for leaf_blocks has dropped by nearly 90%; but when I gather stats again after setting the _fix_control the number recorded for leaf_blocks goes back up, ending higher than it was originally. The “excess” 33 blocks come from 5 branch blocks and, thanks to my use of ASSM with 1MB uniform extents, 28 space management blocks. Since I have three different values for leaf_blocks I get three different costs for the index fast full scan:

If you run the test code your costs will probably be different because of your settings for the system statistics, but you should still see that the first and last costs are similar while the cost in the second plan is around 10% of the cost of the other two.

You can appreciate that if the cost of an index fast full scan is much lower than it ought to be (relative to the number of blocks in the index, and in comparison to the relative cost of a tablescan) there will be cases where the optimizer does the wrong thing because the fast full scan looks like the lowest-cost option. With this fix available you now have an option to target the problem if you ever meet it.