Index design: discard and sort

Designing an index should not be done through “trial and error”: that kind of strategy dooms many engineered indexes to imprecision, excessive size, and locking issues. Designing a precise index is very often a “kill two birds with one stone” exercise. In this article I am going to show you how to design a single index with which I intend to cover a where clause predicate on column c1 and avoid the sorting cost of an order by operation on column c2.

The Quiz

What is the best index you would have come up with to cover the following query?

select * from t1 where c1 =:a order by c2;

A word of caution

An order by operation is problematic when we want to display a large volume of data in sorted order, or when a query generates a small number of ordered rows but is executed a huge number of times. Let’s suppose that we are in one of those kinds of situations and we are asked to optimize the above query with a single index.

The results presented in this article were obtained with a 32K block size. I have also conducted the same experiment with an 8K block size. Although the timings and the number of generated buffers are not identical, I came to approximately the same conclusion about the efficiency of the designed indexes.

The answer

To answer this question we need to look carefully at the predicate part which, in this case, is composed of a where clause on the c1 column coupled with an order by operation requested on a different column (c2).

Our main goal here is, obviously, to help the CBO achieve a sorted result set without the overhead of honouring the requested order by operation.

We all know that entries are kept in a precise order within an index. This is why the CBO is able to take advantage of these ordered index entries to satisfy the user-requested sort.

This is what I am going to illustrate with the following example.

I’ve created a simple table on which I have collected statistics without histograms:

  • (method_opt => 'FOR ALL COLUMNS SIZE 1')

And, since I have set the global preference approximate_ndv to true, I left the sampling decision in the hands of Oracle:

  • (estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE)

As such, collecting statistics is performed very quickly and with far better precision than any sampling percentage I would have entered manually.
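The setup script is not reproduced above. Here is a minimal sketch of what it might have looked like (the table name t1, the absence of histograms and the skew on id2 come from the article; the padding column, the row count and the exact data distribution are my assumptions):

create table t1
   (id1      number not null    -- primary key, the future order by column
   ,id2      number not null    -- skewed: only two distinct values
   ,padding  varchar2(100));

insert into t1
   select rownum
         ,case when mod(rownum, 1000) = 0 then 42 else 1 end  -- ~1,000 rows out of 1M get 42
         ,rpad('x', 100, 'x')
   from dual
   connect by level <= 1e6;

alter table t1 add constraint t1_pk primary key (id1);

begin
   dbms_stats.gather_table_stats
      (ownname          => user
      ,tabname          => 't1'
      ,method_opt       => 'FOR ALL COLUMNS SIZE 1'         -- no histograms
      ,estimate_percent => dbms_stats.auto_sample_size);
end;
/

With only two distinct id2 values and no histogram, the CBO estimates 1,000,000/2 = 500,000 rows for the id2 = 42 predicate, which matches the wrong estimations shown later in the article.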

You might have already noticed that I managed to create a skewed id2 column on which I haven’t collected histograms.

Put this on hold, we will come back to it later in this article.

Now, I am going to execute the following query and display its corresponding execution plan:
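The query itself is missing above. From the discussion that follows (a filter on id2 = 42, a descending full scan of the primary key index to avoid the sort on id1), it was presumably of this shape:

select *
from   t1
where  id2 = 42
order by id1 desc;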

Four immediate points can be derived from the above execution plan:

  • Oracle starts with an index full scan descending (ID = 2) on the primary key index and feeds its parent operation (ID = 1) with a bunch of 1000K rowids
  • Using these 1000K rowids, Oracle scans the t1 table (ID = 1) and throws away 99.9% of the generated rows when applying the filter (id2 = 42)
  • There is a massive amount of logical I/O (36,232 buffer gets) during this table access
  • Thanks to the full scan of the primary key index, the order by operation has been avoided

I have highlighted the E-Rows and A-Rows information of operation ID = 1 to show how the CBO has completely messed up its initial estimations. This is mainly due to the skewed column id2, on which I haven’t collected any histograms.

Back to our initial goal, which is to engineer an index covering the predicate part on column id2 and avoiding the order by operation on column id1. Here is the first index I came up with:

In passing, note how I’ve named my index (t1_ind_id1_fbi) so that it immediately indicates its type: a function-based index, because of the desc clause I have added to its definition.
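The index creation statement is not shown above. Judging by its name and by the (id2, id1 desc) definition quoted later in the article, it was presumably:

SQL> create index t1_ind_id1_fbi on t1(id2, id1 desc);

The desc clause is what silently turns this into a function-based index.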

Re-executing the original query gives us the following execution plan:

This execution plan highlights six points:

  • An index range scan on the newly created function-based index has been used
  • The absence of a sort operation, thanks to the new index range scan replacing the previous full scan of the primary key index
  • A noticeable reduction in execution time
  • Thanks to the new precise index, Oracle starts with a small number of rows (1,000) and keeps this small row count throughout the remaining operations
  • A massive reduction in consistent gets, from 36,232 to 1,005 buffers
  • The CBO is still wrongly estimating the operations that filter on the skewed id2 column (500K instead of 1,000 rows)

This was a typical example of how we can design an index to achieve two goals at once:

  • Cover the predicate on column id2 with a precise index (where id2 =42)
  • Avoid the cost of sorting id1 column by taking advantage of the ordered key in the index (order by id1 desc)

However, I was still wondering: what if I had created the following normal b-tree index instead of the T1_IND_ID1_FBI function-based one?
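The statement is not reproduced above; given the index name T1_IND_ID1_NI and the column list discussed later, it was presumably:

SQL> create index t1_ind_id1_ni on t1(id2, id1);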

Will this new index allow the CBO to avoid the order by operation?

Yes, it does. The b-tree index (T1_IND_ID1_NI) also allows the CBO to avoid the order by operation, by scanning the index in descending order as shown by operation 2 in the immediately preceding execution plan. This is made possible by the existence of two pointers in the index leaf blocks (the blocks where the real indexed values are located). Each leaf block (except the extreme ones) has a backward pointer and a forward pointer to its immediate neighbour, so that Oracle can traverse the index structure in both directions, ascending and descending.

These two engineered composite indexes paved the way to the following question:

Since both indexes, the function-based index (T1_IND_ID1_FBI) and the b-tree one (T1_IND_ID1_NI), achieved the initial goal of discarding and sorting with approximately the same amount of time and energy (in this case, of course), which type of index would you prefer?

I have two reasons to prefer the normal b-tree index in this particular situation. The first is that with the b-tree index I can potentially “kill three birds with one stone”. A very plausible situation is the presence of a foreign key (id2, id1) on the t1 table. In that case, with the b-tree index created above I would be able to:

  • filter on id2
  • avoid the sorting cost of the order by id1 operation
  • cover the foreign key lock and deadlock threat when deleting from the parent table that t1 references.

The function-based index (id2, id1 desc) is not able to cover the t1 table foreign key threat because it doesn’t start with the pair of foreign key columns (id2, id1).

The second reason to prefer the b-tree index to the function-based one (in this case) is the hidden column that goes hand-in-hand with the creation of a function-based index.

I wrote above that the function-based index (id2, id1 desc) doesn’t start with the plausible foreign key columns (id2, id1). Here is the proof:
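The proof itself is missing above; a data dictionary query of this shape would show it (index and column names as used in this article):

SQL> select column_position, column_name
     from   user_ind_columns
     where  index_name = 'T1_IND_ID1_FBI'
     order by column_position;

It lists ID2 in position 1 and the system-generated hidden column (SYS_NC00005$ in the article’s case) in position 2, instead of ID1.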

Because of the desc clause in the index definition, the internal index column list is actually (id2, SYS_NC00005$) instead of (id2, id1 desc). This is the main reason why this index doesn’t cover the foreign key (id2, id1) lock and deadlock threat.

To take advantage of the function-based index, we very often need an extra step to collect statistics on the hidden column by means of the following call to the dbms_stats package:
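The call itself is not shown; it would look like the following sketch (the hidden column name is system-generated, so it will differ from one database to another):

begin
   dbms_stats.gather_table_stats
      (ownname    => user
      ,tabname    => 't1'
      ,method_opt => 'for columns SYS_NC00005$ size 1');
end;
/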

Having exposed the motivations for my b-tree index preference (in this particular case), I still have a supplementary question:

What if, instead of a composite index on (id2, id1), I created a single-column index on (id2)? Would the CBO be able to use the primary key index on (id1) to avoid the order by operation and the new index on (id2) to cover the where clause on id2?
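The single-column index referred to as t1_ind_id2 in the next paragraph would have been created as follows:

SQL> create index t1_ind_id2 on t1(id2);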

Note how the CBO is unable to simultaneously combine the t1_ind_id2 (id2) index to filter the t1 table using the predicate id2 = 42 and the t1_pk primary key index to avoid the sort operation. With this pair of indexes, the initial high number of logical I/O (36,232) is back.

Even though the CBO visited the primary key index and walked its leaf blocks in key order (INDEX FULL SCAN) without any sorting, it has no way to merge the ordered rowids of operation ID = 2 with those filtered from the t1 table access by rowid at operation ID = 1.

There is, nevertheless, a way to avoid the order by operation with the pair of indexes we have at our disposal. We need to rewrite the query to give the CBO another possible access path, as described below:

I built two “with” subqueries: one to get the rowids ordered by id1 (got_my_id1_rowid) and the other to get the t1 table rowids satisfying the predicate where id2 = 42 (got_my_id2_rowid). I also added the no_merge hint to the got_my_id1_rowid factored subquery so that it will not be merged with the other one.
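The rewritten query is not reproduced above. Here is my reconstruction from that description (the exact select list, aliases and hints may have differed):

with got_my_id1_rowid as
   (select /*+ no_merge */
           rowid rid
    from   t1
    order by id1 desc)
   ,got_my_id2_rowid as
   (select rowid rid
    from   t1
    where  id2 = 42)
select t.*
from   got_my_id1_rowid o
      ,got_my_id2_rowid f
      ,t1 t
where  o.rid = f.rid
and    t.rowid = o.rid;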

And here is the execution plan I got:

The two “with” subqueries have been hash-joined together using their corresponding rowids as the join column. The probe table supplies the rowids satisfying the id2 = 42 where clause using the index on (id2), while the build table supplies the rowids ordered by id1 using the primary key index on (id1). The logical I/O with this refactored query dropped to 1,518 buffer gets from the original 36,232.

When observing operation 5 in the above plan, I asked myself: why, despite the CBO wrongly estimating that it would buffer 500,000 rows at this step, does it nevertheless decide to use an index range scan for this operation? In passing, with an 8K block size the same operation was honoured via a full table scan. I keep this question open for another investigation.

Finally, whether this query rewrite represents a good strategy or not depends on your specific case and on what you will be confronted with. I have to confess that there are cases (particularly under first_rows mode) where the CBO prefers an INDEX FULL SCAN to avoid an order by operation, whatever the cost of that INDEX FULL SCAN.

Tackling the CBO wrong estimation

Because of the data skew in the id2 column, the CBO has wrongly estimated, in almost all the above execution plans and whatever index it used, the cardinality of the operations that filter on id2. One solution would be to gather adequate histograms on this column. But since we are on an index-design path, I would prefer to tackle this issue using a virtual column which I will then index appropriately. Something like this:

SQL> alter table t1 add derived_id2 number generated always as (case when id2 = 42 then 42 else null end) virtual;

I added a virtual column to the t1 table which holds the value 42 when id2 equals 42, and null when id2 holds any other value.

The next step is to create a new composite index using the virtual column coupled with the id1 column on which we want a sort:

SQL> create index t1_derived_id2_ind on t1(derived_id2, id1) compress;

And to collect fresh statistics for the virtual column and its associated index:
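A sketch of that call (cascade refreshes the index statistics as well):

begin
   dbms_stats.gather_table_stats
      (ownname    => user
      ,tabname    => 't1'
      ,method_opt => 'for columns derived_id2 size 1'
      ,cascade    => true);
end;
/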

Now that I have put the scenario in place, I am going to run it.
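For the new index to be usable, the predicate has to reference the virtual column itself, so the test query presumably becomes:

select *
from   t1
where  derived_id2 = 42
order by id1 desc;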

Note how, with the help of the virtual column and its corresponding index, the CBO makes a perfect estimation, which almost always leads to an optimal execution plan.

The last observation

I managed to create the DERIVED_ID2 virtual column so that it holds only my critical ID2 value (42), the one I needed to cover. All other id2 values are seen by this new virtual column as null. However, when I created the composite T1_DERIVED_ID2_IND (DERIVED_ID2, ID1) index, these “null” derived_id2 values were nevertheless inserted into the index, because each entry is protected by the not-null id1 column. The immediate consequence is that I lost the advantage of not indexing the huge number of null DERIVED_ID2 values. This paved the way for my last question:

What if, in this case, I opted for a single-column index on derived_id2? Would I take advantage of the small size of that index?
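Such a test could be sketched as follows (the index name is mine; since all non-42 values of derived_id2 are null, and a single-column b-tree index does not store entirely null keys, this index would contain only about 1,000 entries):

SQL> drop index t1_derived_id2_ind;
SQL> create index t1_derived_id2_only_ind on t1(derived_id2);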

This new index brings a reduction in the number of consistent gets that is worth checking before taking the final index design decision.


Conclusion

Engineering an index should be dictated first by the query predicate part (where clause, group by and order by). Look carefully at the columns you will use as the leading edge of the index: they should be the ones to which an equality predicate is applied. You should also adopt a “kill two birds with one stone” design strategy, since with one index you can cover multiple queries, avoid redundant indexes, and cover the foreign key lock threat. Do not forget the benefit an indexed virtual column can bring in helping the CBO make good estimations and in producing attractively small indexes.

If a switch in the column order still guarantees the precision and the use of the index, then start your index with the column having the lowest number of distinct values. That way you can efficiently compress the index and give the CBO an extra possible access path: the index skip scan.