Investigating Indexes – 2

In an earlier article on investigating indexes I described some methods for looking at the internal structure of an index to get an idea of its current state of health. Although this type of knowledge helps you to make informed decisions about which indexes might need special treatment (like being permanently dropped, perhaps) it doesn’t immediately answer questions like: “what’s the (negative) impact on my batch job of the indexes on tableX?”

This is an important question, and one that’s often overlooked in tuning exercises. In fact the default response to poor performance is more likely to head in the opposite direction: trying to find indexes to add rather than considering the possibility that some of the existing indexes could be dropped.

So what kind of negative impact do you get from indexes – particularly if you have too many? There are several problems; for example:

  • Excessive numbers of indexes increase the chance of the optimizer producing a bad execution plan, leading to extremely poor performance.
  • It takes the optimizer longer to optimize a statement – especially in 12c where a lot of time could be spent on dynamic sampling to determine which of several similar indexes is the best (or least worst) one to use.
  • Real-time index updates use resources and take time. Not only does this mean increased execution time for your task, it also means resources denied to other tasks and, potentially, time spent by other processes in queues waiting for your task to complete and release locks.

The first two issues listed above are relatively easy to address – the first has been around for a long time and the second is just a new variation on the same old theme: DBAs often spend time trying to optimize the execution paths for specific queries, and once you’ve fixed a path for a statement the presence of redundant indexes doesn’t really matter any more. The third problem, though, is rather harder to address; in fact it was a recent question on the OTN Database Forum (and the first couple of responses) that prompted me to write this note. To quote the question:

I would like to be able to distinguish the time spent for inserting into an indexed table, and the time spent for updating the associated indexes.

Can anybody suggest a way to get this information?

The first answer carried the implication: “Why would anyone want to measure that?” and the second carried the implication “What difference would it make if you knew?” Both questions make sense if you always work in an environment where the database design is perfect, the hardware is perfectly configured, and application code is exquisite beyond belief. If your set of indexes is exactly the set you need and they’re cached just the way they need to be then Oracle has to update them in real-time and the cost of updates is a necessary part of what you do and can’t be improved.

Most of us, however, don’t sit in front of perfect systems so occasionally there will be times when it is worth thinking about how we could identify the work done and time spent by Oracle updating specific indexes so that we could justify spending some time in re-assessing the set of indexes that had been created on a particular table.

As with many aspects of addressing performance issues there may only be one or two scenarios where we would want to spend any time even thinking about HOW to identify the necessary metrics, let alone spending the time to set up a suitable mechanism and taking the measurements – nevertheless it’s relatively easy to spot the scenario if it appears in front of you (or to imagine it even if you’ve never seen it). After all, it’s only relatively recently that questions of the following type stopped appearing regularly on forums: “I have a batch process that inserts 1 million rows of data into a table every night. But it’s very slow and I see lots of waits for ‘db file sequential read’ while it’s going on. Why is this happening?” The answer is almost invariably: “Too many indexes. Real-time index maintenance in a batch environment often entails a large volume of random I/O”.

In fact, of course, excessive indexing often results in a large volume of random I/O during OLTP processing as well, but it’s less likely to have an immediately visible impact from the user perspective because any one user is only likely to see a few tenths of a second of wait time in any one transaction so the impact is too small to notice.

The Batch process

The big insert/update/delete in a batch is the obvious scenario where you may end up trying to find out how much time was index maintenance time and which indexes were the prime culprits. Of course there’s some scope for logical inference, as I’ve mentioned in earlier articles: an index on order(date_entered) is likely to have a small number of “recent” leaf blocks that stay well buffered as you insert a big batch of new orders, while an index on order_lines(product_id) is likely to see new entries scattered randomly across the entire width of the index as new order lines are inserted, and is therefore likely to be subject to far more “db file sequential read” waits as Oracle searches for the leaf block that needs to be updated.

Generally we don’t see user sessions waiting to write blocks to disc (with the exception of direct path writes, of course) but with a large number of randomly updated indexes the global impact of index maintenance could be so great that we get interference between the database writer and user sessions trying to do updates, with the latter seeing waits for event “write complete waits” (or even, in newer versions of Oracle, “write complete waits: flash cache”).

Using tkprof

Part of being good at troubleshooting comes down to identifying activities where it’s worth spending some of your time. If you think that there may be a big enough performance gain at some point in a batch process then it’s worth doing some work to investigate it – and a good starting point is simply to enable extended tracing for the duration of the important statement(s). This may not tell you all you need to know but in many cases it will be a big step in the right direction.
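If you can get at the session running the job, the old-fashioned 10046 event is still the simplest option. The following is just a sketch of the sort of wrapper I mean (level 8 gives you the wait events without the bind variables); dbms_monitor.session_trace_enable() would do the same job if you prefer a supported interface:

        alter session set events '10046 trace name context forever, level 8';

        -- run the important statement(s) of the batch step here

        alter session set events '10046 trace name context off';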

Running tkprof against the trace file will tell you how much time was spent on CPU and how much on wait time, and how the wait time was broken down by wait event. If you see a large fraction of the wait time spent on db file sequential reads (and write complete waits) then it’s worth looking for more detail about those events. You may find from the execution plan (“Row Source Operation” section) in the tkprof output that most of the reads could be attributed to the select part of the statement, but if they weren’t you could then examine the raw trace file to check which objects were being read while the data was being inserted.
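For reference, the tkprof call itself is trivial – the trace file name below is made up, and sys=no simply keeps the recursive SQL out of the way:

        tkprof or21_ora_12345.trc or21_ora_12345.prf sys=no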

Here, for example, is the tkprof output due to inserting 50,000 rows into a table of 6 million rows with three indexes. One of the indexes is on a sequence number, one is on a highly randomised numeric column and one is on a fairly randomised date column.

I’ve used the standard “connect by” on the dual table to generate 50,000 rows, so the execution plan has a few lines about the materialized factored subquery before the “LOAD TABLE CONVENTIONAL” operation, but when you get to that operation you can see Oracle reporting 21,899 blocks read (“pr” is short for “physical read”) even though the work leading up to that LOAD is only 16 physical reads.
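To make sense of the plan, the statement was of roughly the following shape – the sequence name and column names here are illustrative guesses rather than the ones I actually used, but the pattern (a materialized factored subquery feeding a conventional insert) is the one that shows up in the plan:

        insert into t1(id, n1, d1)
        with generator as (
                select  /*+ materialize */
                        rownum  id
                from    dual
                connect by
                        level <= 50000
        )
        select
                t1_seq.nextval,                                 -- sequence number       (maintains index t1_i1)
                trunc(dbms_random.value(0, 6000000)),           -- highly randomised     (maintains index t1_n1)
                trunc(sysdate) - dbms_random.value(0, 1800)     -- fairly randomised date (maintains index t1_d1)
        from
                generator
        ;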

It’s safe to assume that the difference is all about putting the data into the table and (mostly) maintaining the indexes. So at this point it looks as if it will be worth spending the time to look at the trace file to see if we can associate reads to the object being read.

The first thing is to find the cursor number for the insert statement – which requires a simple text search:
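Something along the following lines (the trace file name is made up, of course) will show the “PARSING IN CURSOR #nnn” line that appears immediately above the statement text:

        grep -n -B1 "insert into t1" or21_ora_12345.trc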

Having found cursor number “#140637786381144” I used grep to pick out all the WAIT lines for the cursor that were for a “db file sequential read” (I’d checked that the cursor wasn’t closed and the cursor number re-used before doing so) directing the output into another file:
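A single grep is all it takes – the cursor number comes from the previous step, and the output file name (temp.txt) is arbitrary:

        grep "WAIT #140637786381144.*db file sequential read" or21_ora_12345.trc > temp.txt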

At this point I had a text file with lots of lines of the pattern:
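The numbers in the line below are purely illustrative – what matters is the shape of the line, in particular the “obj#=” and “ela=” entries (the elapsed time is reported in microseconds):

        WAIT #140637786381144: nam='db file sequential read' ela= 634 file#=7 block#=139274 blocks=1 obj#=251318 tim=31375594691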

So I used vi with a couple of simple commands to delete everything up to and including the “obj#=”, and everything from the “ tim=” onwards, to end up with a list of nothing but object numbers, so I could finish off with the command “!sort -n | uniq -c”. This is the result:

There are a couple of reads of system objects and then, in numeric order of object id, the table, the index on the sequence number, the index on the highly randomised number, and the index on the fairly randomised date column. As I indicated earlier on, the nature of the thing being indexed (order_date compared to product_id) can make a significant difference to the volume of I/O required to maintain that index.
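If you’d rather avoid the interactive editing, a single sed pipeline over the temp.txt file from the previous step produces the same object-number summary:

        sed -e 's/^.*obj#=//' -e 's/ tim=.*$//' temp.txt | sort -n | uniq -c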

Technically, of course, the number of events doesn’t necessarily equate to the time spent – but if you take another look at the extract from the trace file you’ll see that each wait includes an “ela=” entry. I happen to have a little awk script that I can use to calculate a running total of a list of numbers so it’s a matter of moments to grep out the lines for each object in turn, strip them back to just the elapsed time and then sum.
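The script is hardly worth dignifying with the name – the fragment below (using one of the object ids that appears later in this note) is the sort of thing I mean, and the total comes out in microseconds:

        grep "obj#=251318" temp.txt                        |
                sed -e 's/^.*ela= *//' -e 's/ file#=.*$//' |
                awk '{total += $1} END {print total}'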

Segment Statistics

It’s a bit of a pain to have to mess around with trace files, so if you’re in a position to monitor parts of the batch job as it runs there are alternative strategies that will give you the same level of information as I’ve extracted by analysing the trace file.

I have a simple PL/SQL package that can take snapshots of the contents of v$segstat – the structure that stores statistics about segment activity – and I used this to take a snapshot that started just before the insert started and ended just after the insert completed. Removing all the extraneous bits, this is the output I got:

I don’t know why the t1 and t1_i1 (the index on the sequence number) figures are larger than the figures from the trace file – but they’re pretty insignificant anyway so I’m not going to worry about that right now. It’s easy to see the impact of the changes on the two key indexes, though.

If you’re wondering why the number of “db block changes” isn’t 50,000 across the board it’s because Oracle has an array-based optimisation – multiple consecutive changes to the same block can be collapsed into a single change – so we can see how totally random the data for the t1_n1 index is (no opportunities for optimisation) while the t1_d1 index isn’t quite so random and has managed to achieve a little optimisation.
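By the way, if you don’t have a snapshot package of your own, two runs of a simple query like the following – one just before and one just after the insert – and a little subtraction would give you the same raw figures (the object names are the ones from my test). Note that v$segment_statistics is just v$segstat with the object names translated, so it’s a little more expensive to query but easier to read:

        select
                object_name, statistic_name, value
        from
                v$segment_statistics
        where
                owner       = user
        and     object_name in ('T1', 'T1_I1', 'T1_N1', 'T1_D1')
        and     value      != 0
        order by
                object_name, statistic_name
        ;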

Active Session History

Troubleshooting is always easier if you’re in exactly the right place at exactly the right time to take exactly the perfect steps. But if you can’t be on the spot at the right moment you can still get fairly pointed information shortly after the event. The dynamic performance view v$active_session_history is like a sample of the 10046 trace across every active session taken once every second and, provided your instance isn’t short of memory, each sample is kept for at least an hour after it has been taken – and dba_hist_active_sess_history reduces this to one sample in 10 (i.e. one sample every 10 seconds) kept for several days by default. You do have to license the Diagnostic Pack to access these views, though.

Since I knew the session id of the session executing the insert, I ran the following query shortly after the insert had completed (and then I eliminated some of the irrelevant output):
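The query was along the following lines – &m_sid is the session id I already knew, and the 10 minutes is just an arbitrary window:

        select
                sample_time, sql_id, sql_plan_line_id,
                event, current_obj#
        from
                v$active_session_history
        where
                session_id   = &m_sid
        and     sample_time >= sysdate - 10/(24*60)
        order by
                sample_time
        ;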

The query is asking “what did you see me doing in the last 10 minutes?”, and this is what Oracle saw:

We have the SQL ID of the insert statement so we can pull the plan from memory and check what operations 6 and 8 are about. We also see, anyway, that operation 6 is the load table and that (statistically speaking) it must be doing quite a lot of single block reads – rather more against object 251318 than against 251319 – which, fortunately, agrees with our detailed observation from the trace file.
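Pulling the plan is a one-liner, of course – just substitute the sql_id reported by the ASH output:

        select * from table(dbms_xplan.display_cursor('&m_sql_id', null));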

Given that we have only 15 samples with 9 single block reads when the trace file shows tens of thousands of reads, we need to be aware that this ASH sample for a single short session isn’t really something we can place a lot of faith in – but for a longer session, or for a handful of connections, the sampling does give us a reasonable indication of activity. Even in this case there’s a fairly strong intimation of where most of the time is probably being spent in the data loading.

The OLTP system

There isn’t really a lot to add to this note if you want to switch your attention from particular batch tasks to the general overhead of the OLTP side of processing. You can use the same tools, slightly differently, with slightly less confidence in the results, to look for suspects.

Using SQL trace globally is probably not a viable strategy – but 11g allows you to enable tracing for a specified SQL ID whenever it runs, and that might be a useful approach.
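The mechanism is the 11g event infrastructure; the syntax is of the following general form – the sql_id is supplied through a substitution variable here, and it’s worth checking the exact syntax for your version before relying on it:

        alter system set events 'sql_trace[SQL: &m_sql_id]';

        -- and, to stop the tracing later:

        alter system set events 'sql_trace[SQL: &m_sql_id] off';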

Over a longer period of time, with many active users, the statistical sample that is the active session history can give you some reasonably sound indications of the workload for specific objects (summing by sample_id, current_obj# and event; or sql_id, current_obj# and event).
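So, for example, once you’ve identified a suspect sql_id, an aggregate query of roughly this shape (over whatever time window is appropriate) gives a first-pass breakdown of single block read samples by object:

        select
                current_obj#, event, count(*)
        from
                v$active_session_history
        where
                sql_id       = '&m_sql_id'
        and     event        = 'db file sequential read'
        and     sample_time >= sysdate - 1/24
        group by
                current_obj#, event
        order by
                count(*) desc
        ;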

Similarly, taking snapshots of v$segstat (or v$segment_statistics), perhaps restricted to a particular table and its indexes, over a period of time may give you some useful pointers about which indexes on a table are the most resource intensive. (These stats are available in the dba_hist views, of course, and reported in the “Top 5 segments by …” sections of the AWR report.)

The main difficulty in using this type of information for OLTP analysis is the lack of precision, so you should treat the results purely as a pointer towards the indexes that might be worth further investigation.

Footnote:

It would be nice if Oracle’s execution plans did something that SQL Server has been doing for a long time, which is to report (a) any indexes on other tables that are accessed to test for referential integrity and (b) any indexes on this table that are updated as the data changes. It would cut out all the fiddling I’ve had to describe above, and make it much more obvious to the “casual” observer that the statement they were executing had performance side effects that could easily be forgotten until you started to look closely at the various monitoring tools that Oracle makes available.