Execution Plans Part 10: Guesswork

In part 9 we saw an example of a simple nested loop join where the basic method for interpreting the numbers in an execution plan started to break down because the arithmetic used by the optimizer to calculate the resource costs for the plan wasn’t consistent with the visual representation of the mechanics of the run-time path. In this instalment we’re going to look at an example where some of the numbers are generated by guesswork, and some by estimates that aren’t visible in the plan.

Example

I’ve created a custom emp table holding 20,000 employees across 6 departments, and I’ve set myself the task of listing the employees who earn more than the average salary for their department. Here’s a possible query and the execution plan it might produce:
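If you want to picture the data and the SQL, something along the following lines will do; the column names (empno, dept_no, sal) and the salary values are purely illustrative rather than the exact definitions I used:

create table emp
as
select
        rownum                                  empno,
        mod(rownum, 6) + 1                      dept_no,        -- 6 departments
        trunc(dbms_random.value(1000, 5000))    sal
from    dual
connect by
        level <= 20000                                          -- 20,000 employees
;

begin
        dbms_stats.gather_table_stats(user, 'EMP');
end;
/

select
        e1.*
from    emp e1
where   e1.sal > (
                select  /*+ no_unnest */
                        avg(e2.sal)
                from    emp e2
                where   e2.dept_no = e1.dept_no
        )
;

Schematically the plan looks like this (the Rows and Cost figures I quote below came from the dbms_xplan output for my data set):

 Id  Operation
  0  SELECT STATEMENT
  1   FILTER
  2    TABLE ACCESS FULL (EMP)
  3    SORT AGGREGATE
  4     TABLE ACCESS FULL (EMP)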

You’ll notice that I’ve included a /*+ no_unnest */ hint in the code; this isn’t a good strategy for the query in question and I’ve only used it because it will produce a particular execution plan that demonstrates a couple of points I’d like to talk about.

The order of operation (rowsource generation) of this plan is 2, 4, 3, 1, 0: line 2 does a full tablescan (the first operation to generate a rowsource) to supply rows to line 1; in principle line 1 then calls line 3 for each row of that rowsource; but line 3 calls line 4 to do a tablescan (the second operation to generate a rowsource) and supply the salary of every employee in the target department; line 3 calculates the average salary for this rowsource and passes the result up to line 1 as a rowsource containing a single row; line 1 compares this average with the salary of the current row and, if the salary is greater than the average, passes the row up to be output to the client.

Filling the gaps

You’ll notice immediately that there are gaps in the numbers. The filter operation has no numeric information at all, but perhaps we can live with that (in this case) because it’s the only child of the select operation at line 0, so it’s reasonable to guess that the values it ought to show would match the values reported at line 0.

But then you notice that the sort aggregate operation at line 3 has no cost, and without a cost for line 3 we don’t know how the optimizer derived a cost for line 1. Since a sort aggregate is just keeping running totals we could probably assume that the incremental cost of the operation is very small. (In fact a 10053 trace file of this query seems to suggest that the optimizer doesn’t even bother to do any calculations relating to the sort, which may explain why the line is blank.) So we’ll assume that the cost of line 3 is simply the same as the cost of its only child, i.e. 48.
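If you want to check the trace file for yourself, a CBO (event 10053) trace can be produced along these lines; the tracefile_identifier is just a label to make the file easy to find, and v$diag_info will report its full name:

alter session set tracefile_identifier = 'avg_sal';
alter session set events '10053 trace name context forever, level 1';

explain plan for
select
        e1.*
from    emp e1
where   e1.sal > (
                select  /*+ no_unnest */
                        avg(e2.sal)
                from    emp e2
                where   e2.dept_no = e1.dept_no
        )
;

alter session set events '10053 trace name context off';

select  value
from    v$diag_info
where   name = 'Default Trace File'
;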

Guessing execution counts

Where does this leave us when we try to justify the (inferred) cost of 334 at line 1? The arithmetic is simple: 6 * 48 + 49 = 337, which is close; and if we allow for rounding errors by using the unrounded figures from the 10053 trace file (which reported 47.61 and 48.75 for the 48 and 49 respectively) we get 6 * 47.61 + 48.75 = 334.41, which matches the 334. We can infer that the optimizer has assumed the subquery will execute six times.

Note: if you’re wondering why the two operations that do a tablescan of emp (lines 2 and 4) have different costs, it’s because they have to process different columns; line 2 processes more columns than line 4, so it uses more CPU, which means a (slightly) higher cost.
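A quick way to convince yourself of this (using the illustrative column names again) is to explain two tablescans that reference different numbers of columns and compare the figures recorded in the plan table; after rounding, the difference may only show up in the cpu_cost column:

delete from plan_table;

explain plan set statement_id = 'one_col' for
select sal from emp;

explain plan set statement_id = 'all_cols' for
select * from emp;

select  statement_id, cost, cpu_cost
from    plan_table
where   id = 0
order by
        statement_id
;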

This assumption of six executions brings us to an important stage of interpreting execution plans: the optimizer doesn’t know how many times that subquery might run. Pick any whole number between 6 and 20,000 and I could construct a data set (on 6 departments) that made the subquery run that many times. In fact it’s quite likely (in the absence of deliberate engineering) that the subquery will indeed run just six times in this case; but with a few experiments generating department codes randomly you’d eventually create an example where the number of executions was in the order of a couple of thousand. The optimizer has chosen six as the multiplier because it knows from the object stats that there are six department ids in the table; the justification for the calculation is a mechanism known as scalar subquery caching.
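That figure of six comes straight from the column statistics; sticking with the illustrative column name dept_no, a query like this should report num_distinct = 6:

select  column_name, num_distinct
from    user_tab_col_statistics
where   table_name = 'EMP'
and     column_name = 'DEPT_NO'
;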

So here’s an important point when reviewing execution plans: they’re designed to tell you the nature of the steps that Oracle is going to take at run time, but they can’t tell you how many times Oracle is going to take those steps; and the number of executions of a step can have a very large impact on the total run time of the query.

Guessing volume

By introducing the /*+ push_subq */ hint into the subquery we can use the same example to demonstrate another important point about reviewing execution plans. Here’s the resulting execution plan (still running on 11.2.0.4):
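With the extra hint the (illustrative) query becomes the following, and the plan loses its explicit filter operation; the subquery now appears as the child of the driving tablescan. Schematically (Rows and Cost omitted):

select
        e1.*
from    emp e1
where   e1.sal > (
                select  /*+ no_unnest push_subq */
                        avg(e2.sal)
                from    emp e2
                where   e2.dept_no = e1.dept_no
        )
;

 Id  Operation
  0  SELECT STATEMENT
  1   TABLE ACCESS FULL (EMP)
  2    SORT AGGREGATE
  3     TABLE ACCESS FULL (EMP)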

As we saw in part 6, subquery pushing “breaks” the basic rule we use to read execution plans, so when we see a plan like this we have to check carefully whether the subquery is a driving subquery or a filter subquery (does the subquery text appear in the predicate section under the access() label or the filter() label?). In this case we know that it’s a filter subquery anyway, so we know that we should read the plan as “line 1 calls line 2 for each row of the tablescan”.
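The check itself is straightforward: run the statement through explain plan, report it with dbms_xplan.display, and look at the Predicate Information section to see whether the subquery text appears under the access() or filter() label. For the illustrative query that would be:

explain plan for
select  e1.*
from    emp e1
where   e1.sal > (
                select  /*+ no_unnest push_subq */ avg(e2.sal)
                from    emp e2
                where   e2.dept_no = e1.dept_no
        )
;

select * from table(dbms_xplan.display);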

The intent of pushing a subquery is to make it run at the earliest possible moment, which isn’t particularly relevant in our case since there is only one possible moment at which the subquery can run; even so, it has resulted in the optimizer taking a different route through its code. At run time the amount of work our query does is not going to change, nor (obviously, bugs excluded) is the volume of data returned by the query going to change; but look what has happened to the numbers:

The optimizer is now predicting we will get 1,000 rows instead of 167, at a total cost of 96 rather than 334. From the cost figures we can infer that the optimizer has estimated a single execution of the subquery at line 2 rather than 6 executions (one tablescan for the driving query plus one for the subquery: 49 + 48, allowing for the rounding errors noted earlier, gives us the 96). We might also notice that line 1 predicts 1,000 rows from the full tablescan rather than 20,000; but that’s the optimizer’s prediction of the row count after the execution of the subquery, while the previous plan was predicting what the row count would be before execution of the subquery.

I could go into some detail about how Oracle derived different sets of numbers, but then you’d only ask why the developers did it that way, and that’s a question I can’t answer. The specific details aren’t hugely important; the key point in this case is that we can see very clearly that different optimizer code paths produce different predictions of volume (cardinality) and cost along the way, and at least one of those predictions must be wrong. Changes in the actual volume of data result in changes in the actual amount of work needed to handle the data, and we can easily find cases where different execution plans for the same query predict different volumes; so how can we be confident that any prediction is going to be consistent with the work that eventually takes place?

So here’s the second important point when reviewing execution plans – there’s a lot of estimation (guesswork) going into the predictions of cardinality – and cardinality makes a big difference to the effectiveness of the execution plan. The optimizer tells you what it’s going to do – but it’s only guessing when it predicts the volume it’s going to get and (therefore) how much work it’s going to take to handle that volume.

Conclusion

Looking at just one (carefully engineered) example we can see how much scope there is in the optimizer for producing predictions that aren’t even self-consistent. Take this as a warning that the cardinality (Rows), cost and time you see in a predicted execution plan should not be treated as solid predictions.

What you get from the optimizer’s prediction is a statement of the mechanics that will be used to run the query, and some information about the volume of data that will be generated each time an operation is executed. If you know your data well, you will have an idea of how accurate the individual estimates of volume are and how many times each operation is likely to be executed; it is this comparison of the optimizer’s guesswork with your understanding of the data that lets you see where the variation between prediction and reality is going to appear.

In the next instalments we’ll look at ways of comparing predictions with the work done at run time, and show how this can help us find ways of improving query performance.

About the author

Jonathan Lewis

Jonathan Lewis is a well-known figure in the Oracle world with more than 30 years’ experience using the software. He has published three books about Oracle – the most recent being “Oracle Core” published by Apress Nov 2011 – and contributed to three others. He has posted more than 1,400 articles on his blog and has spent a lot of his time contributing to newsgroups, forums, and User Group magazines and events around the world. Jonathan has been self-employed for most of his time in the IT industry. For the last 25 years he has specialised in short-term assignments – typically of a design, review, or troubleshooting nature – often spending no more than two or three days on a client site to address problems. After visiting more than 50 countries (and more than a dozen US states) to talk about or troubleshoot Oracle systems, he has finally gone into near-retirement, though still writes articles and does a little consulting over the internet. Despite the differences in the software, he finds that the fundamental principles of solving performance issues don't really seem to change as you move from Oracle to SQL Server. His blog can be found at http://jonathanlewis.wordpress.com.