Window Functions in SQL

Windowing functions were added to the ANSI/ISO Standard SQL:2003 and then extended in ANSI/ISO Standard SQL:2008. Microsoft was late to this game. DB2, Oracle, Sybase, PostgreSQL and other products have had full implementations for years. SQL Server did not catch up until SQL Server 2012. Better late than never.

Here is a skeleton table we can use for the rest of this article. It has the personnel, their salary amounts and the department to which they are assigned.
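The original DDL does not appear in this copy of the article, so here is a minimal sketch of what the skeleton table might look like, based on the column names (emp_name, dept_name, salary_amt) used in the rest of the article:

CREATE TABLE Personnel
(emp_name VARCHAR(25) NOT NULL PRIMARY KEY,
 dept_name VARCHAR(10) NOT NULL,
 salary_amt DECIMAL(8,2) NOT NULL
  CHECK (salary_amt > 0.00));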

Load it with dummy data.
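Again, the original INSERT statement is not shown here; a load that is consistent with the result sets shown later in the article would be something like:

INSERT INTO Personnel (emp_name, dept_name, salary_amt)
VALUES ('Absalom', 'acct', 5500), ('Abel', 'acct', 3500), ('Aaron', 'acct', 3000),
       ('Abaddon', 'acct', 3000), ('Abbott', 'acct', 3000),
       ('Hamilton', 'HR', 2300), ('Hamlet', 'HR', 1200), ('Hamish', 'HR', 1000),
       ('Hanan', 'HR', 1000), ('Hammond', 'HR', 800), ('Hamuel', 'HR', 700),
       ('Hanael', 'HR', 600),
       ('Sheldon', 'ship', 5500), ('Shelby', 'ship', 4500), ('Sheamus', 'ship', 4000),
       ('Shelah', 'ship', 3000), ('Shaquille', 'ship', 3000), ('Shannon', 'ship', 2000),
       ('Shannen', 'ship', 1000);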

Just as people annoyingly say “Sequel” instead of “S-Q-L”, the window functions usually get called “over functions” because the set of rows to which these functions apply is defined using the syntax:
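The syntax skeleton itself is missing from this copy; in outline it looks something like this:

<window function> OVER ([PARTITION BY <column list>]
                        [ORDER BY <sort column list>]
                        [[ROWS | RANGE] <window frame extent>])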

Partition is a set theory concept which we have seen in the GROUP BY clause of the SELECT.. FROM.. statement. Think of it as a local version of grouping.

But we start losing naive set theory after that. The ORDER BY is the ordering concept we have seen in cursors, done within a partition. I like to think it came from von Neumann’s definition of ordinal numbers using nested sets. I was a math major and it shows. Some functions are easier to define with an ordering than with von Neumann’s ordinals.

The [ROWS|RANGE] subclause is trickier. It is a local cursor. Yes, Celko said cursor. We start with the current row where the window function is invoked. This is where the imaginary read head is resting.

This subclause defines a frame extent of ordered rows that precede or follow the current row. Let’s do this step by step:
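The syntax sketch is not reproduced in this copy, but based on the discussion that follows, the general shape of the frame extent is:

[ROWS | RANGE] BETWEEN <window frame preceding> AND <window frame following>

<window frame preceding> ::= UNBOUNDED PRECEDING | <unsigned integer> PRECEDING | CURRENT ROW
<window frame following> ::= UNBOUNDED FOLLOWING | <unsigned integer> FOLLOWING | CURRENT ROW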

There is a subtle difference between ROWS and RANGE. ROWS uses a count of the rows, and it is the most common usage. RANGE uses a count of distinct values. Think of a SELECT DISTINCT operation. None of this makes sense without an ORDER BY subclause.
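Here is a small illustration of the difference, written against the assumed Personnel table (it is not from the original article). With ROWS, each of the tied salaries gets its own running total; with RANGE, all rows that tie on the ORDER BY value are peers and get the same total:

SELECT emp_name, salary_amt,
       SUM(salary_amt)
         OVER (ORDER BY salary_amt
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rows_running_total,
       SUM(salary_amt)
         OVER (ORDER BY salary_amt
               RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS range_running_total
  FROM Personnel;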

The next parts of the subclause specify the rows preceding and the rows following the current row. If no following value is given, the default is the current row.

The keyword BETWEEN is used here with the expected meaning; the <window frame preceding> rows precede the <window frame following> rows.

The keyword UNBOUNDED PRECEDING is a shorthand for the start of the partition. CURRENT ROW explains itself. You can also provide a constant count of rows with the PRECEDING option.

The keyword UNBOUNDED FOLLOWING is a shorthand for the end of the partition. CURRENT ROW explains itself. You can also provide a constant count of rows with the FOLLOWING option.

The most common usages are for running totals:
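The original snippet is not shown in this copy; a typical running-total frame over the assumed Personnel columns would be:

SUM(salary_amt)
  OVER (PARTITION BY dept_name
        ORDER BY salary_amt DESC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)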

and for the entire partition as a unit:
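Again as a sketch, since the original snippet is missing:

SUM(salary_amt)
  OVER (PARTITION BY dept_name
        ORDER BY salary_amt DESC
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)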

Aggregate Functions

The <window function> can be one of the common aggregate functions: SUM(), AVG(), MIN(), MAX() and COUNT(). The PARTITION BY subclause is always implied; when it is not explicitly given, the whole result set in the FROM clause of the query is used as the partition. For those common aggregate functions, the ORDER BY subclause makes no sense when you’re dealing with the whole window. The computations are commutative; remember that word from high school algebra? The results do not depend on ordering. However, if I’m looking at a window defined by a ROWS or RANGE clause, then the ORDER BY clause can be very important.
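As a quick illustration (not from the original article), here is the implied whole-result-set partition next to an explicit one, using the assumed Personnel table:

SELECT emp_name, dept_name, salary_amt,
       SUM(salary_amt) OVER () AS total_payroll,
       SUM(salary_amt) OVER (PARTITION BY dept_name) AS dept_payroll
  FROM Personnel;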

For example, let’s take a log of bids in an auction. Each bidder gets to make one bid daily. His bid has to be more than zero dollars.
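The DDL is not shown in this copy; a minimal sketch that captures the two business rules (one bid per bidder per day, bids greater than zero) might be:

CREATE TABLE Bids
(bidder_name VARCHAR(25) NOT NULL,
 bid_date DATE NOT NULL,
 bid_amt DECIMAL(10,2) NOT NULL
  CHECK (bid_amt > 0.00),
 PRIMARY KEY (bidder_name, bid_date));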

I want a query that shows me what the highest bid amount was as of a particular day. The window is created with the ORDER BY clause, from the start of the log up to whatever the current row is.
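The query itself is missing from this copy; a sketch of a running maximum over the assumed Bids table, where RANGE makes all bids on the same date peers, is:

SELECT bid_date, bidder_name, bid_amt,
       MAX(bid_amt)
         OVER (ORDER BY bid_date
               RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS highest_bid_so_far
  FROM Bids;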

You have to “nest” basic aggregate functions with CTEs or derived tables to preserve information. For example, the largest average department salary is done with this query:
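The original query is not reproduced here; a sketch of the GROUP BY version would be:

SELECT MAX(avg_salary) AS biggest_dept_average
  FROM (SELECT AVG(salary_amt) AS avg_salary
          FROM Personnel
         GROUP BY dept_name) AS Dept_Averages;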

Did you notice I lost the dept_name with the basic GROUP BY? But I can write:
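Again as a sketch, nesting the aggregate in a CTE so the dept_name survives:

WITH Dept_Averages (dept_name, avg_salary)
AS (SELECT dept_name, AVG(salary_amt)
      FROM Personnel
     GROUP BY dept_name)
SELECT dept_name, avg_salary
  FROM Dept_Averages
 WHERE avg_salary = (SELECT MAX(avg_salary) FROM Dept_Averages);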

Ranking Functions

The next <window function>s are the ranking functions: ROW_NUMBER(), RANK(), DENSE_RANK() and NTILE(n). By now, you have probably used ROW_NUMBER() or DENSE_RANK() to get a column to use for picking subsets from partitions of a result set. The classic example is to find the top (n) salaries by department.

You cannot nest aggregate functions inside each other. A function returns a scalar value; the innermost function would pass a scalar to the containing aggregate, not a set.

Here is an example of a DENSE_RANK() that is passed to a containing query that lets you pick the top (n) salaries in each department.
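The query is not shown in this copy; a sketch of the pattern (the value 3 for the top (n) is an arbitrary choice) is:

WITH Ranked_Salaries
AS (SELECT emp_name, dept_name, salary_amt,
           DENSE_RANK()
             OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS salary_rank
      FROM Personnel)
SELECT emp_name, dept_name, salary_amt
  FROM Ranked_Salaries
 WHERE salary_rank <= 3;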

Analytic Functions

The third group of <window function>s is the Analytic Functions. These are somewhat like the aggregate functions, but they do not make sense without the ORDER BY. Rather than doing a computation like the aggregate functions, or returning an ordinal integer like the ranking functions, these return a value from a row in the partition.

The two simplest such functions are the FIRST_VALUE and LAST_VALUE analytic functions. The FIRST_VALUE function returns the first value from a sorted partition, and the LAST_VALUE function returns the last value. In the following example, the SELECT statement uses both of these functions to return the top and bottom salary_amt values in each department:
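The SELECT statement is not reproduced in this copy; a sketch that should produce output like the table below, using the assumed Personnel table and the default window frame, is:

SELECT emp_name, dept_name, salary_amt,
       FIRST_VALUE(salary_amt)
         OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS top_salary,
       LAST_VALUE(salary_amt)
         OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS bottom_salary
  FROM Personnel;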

The results are:

emp_name    dept_name  salary_amt  top_salary  bottom_salary
Absalom     acct       5500        5500        5500
Abel        acct       3500        5500        3500
Aaron       acct       3000        5500        3000
Abaddon     acct       3000        5500        3000
Abbott      acct       3000        5500        3000
Hamilton    HR         2300        2300        2300
Hamlet      HR         1200        2300        1200
Hamish      HR         1000        2300        1000
Hanan       HR         1000        2300        1000
Hammond     HR          800        2300         800
Hamuel      HR          700        2300         700
Hanael      HR          600        2300         600
Sheldon     ship       5500        5500        5500
Shelby      ship       4500        5500        4500
Sheamus     ship       4000        5500        4000
Shelah      ship       3000        5500        3000
Shaquille   ship       3000        5500        3000
Shannon     ship       2000        5500        2000
Shannen     ship       1000        5500        1000

As expected, the PARTITION BY subclause gives us departmental sets, and the ORDER BY subclause puts the salary_amt values in descending order. But look at the output from the sample data; there are problems. The result set is grouped by the dept_name column, with the salary_amt values in each group sorted. The top_salary in each department is a constant, and it is the highest salary for that department. But the bottom_salary is a mess; just seeing multiple values tells you something is wrong.

The FIRST_VALUE and LAST_VALUE functions support the [ROWS|RANGE] subclause, and the default frame makes no sense here. Look at the accounting department, where the top salary should be $5500.00 and the bottom salary should be $3000.00. The department partition is being scanned in the order given by the ORDER BY clause. This is fine for the FIRST_VALUE function with a DESC order, but it makes LAST_VALUE() behave as a “last value so far” function. This can be handled with the [ROWS|RANGE] subclause applied to the entire partition.
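The corrected query is missing from this copy; a sketch with explicit frames on both functions would be:

SELECT emp_name, dept_name, salary_amt,
       FIRST_VALUE(salary_amt)
         OVER (PARTITION BY dept_name ORDER BY salary_amt DESC
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS top_salary,
       LAST_VALUE(salary_amt)
         OVER (PARTITION BY dept_name ORDER BY salary_amt DESC
               ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS bottom_salary
  FROM Personnel;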

The first ROWS subclause is redundant, but it documents the query.

The remaining two analytic functions are LAG() and LEAD(). These functions extend the [ROWS|RANGE] concept of a current row: the rows are assumed to be in a sorted order, and we can move the imaginary cursor from the current row to a preceding row with LAG(), or from the current row to a following row with LEAD().

The LEAD(<column>, [<step size>], [<default>]) function retrieves a value from a row that is <step size> rows following the current one. The LAG(<column>, [<step size>], [<default>]) function retrieves a value from a row that is <step size> rows preceding the current one.

Obviously, the first row in a partition has no preceding row and the last row in a partition has no following row. As expected in SQL, these missing values are returned as NULLs. Well, not quite. If you specify a <default>, then it is used.
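The query does not appear in this copy; a sketch consistent with the results below (note the default of zero in place of NULL) is:

SELECT dept_name, salary_amt, emp_name,
       LAG(salary_amt, 1, 0)
         OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS preceding_salary,
       LEAD(salary_amt, 1, 0)
         OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS following_salary
  FROM Personnel;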

dept_name  salary_amt  emp_name   preceding_salary  following_salary
acct       5500        Absalom       0              3500
acct       3500        Abel       5500              3000
acct       3000        Aaron      3500              3000
acct       3000        Abaddon    3000              3000
acct       3000        Abbott     3000                 0
HR         2300        Hamilton      0              1200
HR         1200        Hamlet     2300              1000
HR         1000        Hamish     1200              1000
HR         1000        Hanan      1000               800
HR          800        Hammond    1000               700
HR          700        Hamuel      800               600
HR          600        Hanael      700                 0
ship       5500        Sheldon       0              4500
ship       4500        Shelby     5500              4000
ship       4000        Sheamus    4500              3000
ship       3000        Shelah     4000              3000
ship       3000        Shaquille  3000              2000
ship       2000        Shannon    3000              1000
ship       1000        Shannen    2000                 0

While the parameters can be expressions, the usual values are a step size of one and perhaps zero for the default, if a NULL would cause computation or display problems.

As the results show, each row displays salary_amt values from the previous and following rows within each partition, unless it is the first or last row of its partition, in which case the default of zero is returned.

Charles Babbage and the Difference Engine

If you do not know who Charles Babbage is, it is time to Google! What you probably do not know is the underlying math of his “difference engine”, which is based on difference equations, not differential equations. If you would like the math in painful detail, look at:

  1. “Schaum’s Outline of Calculus of Finite Differences and Difference Equations” by Murray Spiegel; ISBN: 978-0070602182.
  2. “Introduction to Difference Equations” by Samuel Goldberg; ISBN: 978-0486650845.
  3. “Finite Difference Equations” by H. Levy; ISBN: 978-0486672601.

This is an older, pre-calculus method to interpolate or tabulate functions by using a small set of polynomial coefficients. Both logarithmic and trigonometric functions can be approximated by polynomials, so this was a handy way to generate mathematical tables with teams of people doing simple operations. It took a long time, and those people were not error-free. Before you laugh at this technology, it was used for scientific computation well into the 1950s. We did not have cheap digital computers, and analog computers were rare and expensive to configure because you had to hardwire them (but they were very fast once they were “programmed”). The improvements were mimeographed worksheets and the use of Smith-Corona Marchant mechanical calculators (they had multiplication and a government contract).

Babbage’s goal was to automate the process with a programmable digital computer in the 1870s! Talk about being ahead of your time.

The math is simple. Given a series of values of a polynomial, {x₁, x₂, x₃, x₄, ...}, we can subtract consecutive values and get what is called the first difference or first delta. This is written as Δxₙ = (xₙ₊₁ – xₙ). As expected, the second delta is Δ²xₙ = Δ(Δxₙ) and so forth.

This is easier to see with an example of a simple polynomial, p(x) = 2x² – 3x + 2.

x   p(x)   Δp(x)   Δ²p(x)
0    2
1    1      -1
2    4       3       4
3   11       7       4
4   22      11       4

To calculate p(5), use the pattern from the bottom cells to get (4 + 11 + 22) = 37. You can verify this with algebra: (2*5*5) – (3*5) + 2 = 37. The advantage is that we did not use anything but addition and subtraction. Those operations are a lot easier to implement in mechanical devices than multiplication and division.

Those of you who still remember freshman calculus will see that this is the basis for derivatives of polynomials. We just need to take limits instead of deltas and we have calculus! Obviously, I have used an extremely small data set.

We can do deltas in SQL now. Consider this generic table for trend data. Since we already worked out a textbook example, let’s use it.
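The DDL and sample data are not shown in this copy; a sketch with assumed names (Trend_Data, time_point, sample_value), loaded with the values of p(x) from the table above, would be:

CREATE TABLE Trend_Data
(time_point INTEGER NOT NULL PRIMARY KEY,
 sample_value DECIMAL(10,2) NOT NULL);

INSERT INTO Trend_Data (time_point, sample_value)
VALUES (0, 2), (1, 1), (2, 4), (3, 11), (4, 22);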

This could be readings from a meter, stock prices over time, or whatever. Translating the mathematics to SQL is pretty straightforward.
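The query is missing from this copy; a sketch using LAG() in stacked CTEs, one per delta, is:

WITH Delta_1
AS (SELECT time_point, sample_value,
           sample_value - LAG(sample_value) OVER (ORDER BY time_point) AS delta_1
      FROM Trend_Data),
Delta_2
AS (SELECT time_point, delta_1,
           delta_1 - LAG(delta_1) OVER (ORDER BY time_point) AS delta_2
      FROM Delta_1)
SELECT time_point, delta_1, delta_2
  FROM Delta_2;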

As expected, we get the same results. I can get delta_n by repeating the second CTE pattern.

time_point  delta_1  delta_2
0           NULL     NULL
1             -1     NULL
2              3        4
3              7        4
4             11        4

In the real world, the data is not this clean. But the principle still holds. Look at what happens when we “dirty up” the data.

The results are in this table:

time_point  delta_1  delta_2
0             2      NULL
1             1.1    NULL
2             3.75   3.55
3            11.2    4.8
4            22.9    4.25

If you have a graphing calculator, you can show the original polynomial and overlay these slightly-off data points.

The third delta gives you -1.80 instead of zero, but that is a measure of the goodness of fit of this data to a second-degree polynomial. If I had rounded the values, this would be smaller. In fact, I can find the average of the second deltas and subtract it from each delta; ((3.55 – 4.20) + (4.80 – 4.20) + (4.25 – 4.20))/3.0 is virtually zero. This is a better measurement of the fit.

There has been no advanced math used. This is also fast enough to use in real time to monitor a process. Bet you did not think SQL could be used for time series this way.

Filling in Gaps

This is not the only use for LAG() and LEAD(). Imagine you have a test that is automatically conducted every x minutes, but sometimes we do not get a reading, so it shows up as a NULL.

Our business requirement is to calculate the delta on a newer row (based on the test_timestamp) from the value in the preceding row, except when the preceding row’s value is NULL; then I need to look back until I can find a value.
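The author’s query is not included in this copy. A plain LAG() will not do, because the immediately preceding row may itself be NULL; one sketch of the requirement, assuming the table is called Test_Readings, uses a correlated subquery to look back for the most recent non-NULL score:

WITH Prior_Scores
AS (SELECT test_id, test_timestamp, test_score,
           (SELECT TOP (1) T2.test_score
              FROM Test_Readings AS T2
             WHERE T2.test_id = T1.test_id
               AND T2.test_timestamp < T1.test_timestamp
               AND T2.test_score IS NOT NULL
             ORDER BY T2.test_timestamp DESC) AS prior_score
      FROM Test_Readings AS T1)
SELECT test_id, test_timestamp, test_score,
       COALESCE(test_score, prior_score) - prior_score AS test_delta
  FROM Prior_Scores;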

test_id  test_timestamp         test_score  test_delta
2        '2012-01-01 15:15:00'  15          NULL
2        '2012-01-01 16:10:00'  12            -3
2        '2012-01-01 16:30:00'  12             0
2        '2012-01-01 16:45:00'   9            -3
2        '2012-01-01 17:12:00'  10             1
2        '2012-01-01 15:25:00'  NULL           0
2        '2012-01-01 15:55:00'  NULL           0
2        '2012-01-01 16:55:00'  NULL           0

Looking at Old Code

We have to maintain old code, so it is worth looking at how we did analytic functions without the current syntax. The short answer is self-joins from hell. When you find this pattern, it is worth the effort to replace it.

The trick was to build a CTE that adds a row number to the original data. This number is then used in outer self-joins with a step size, usually one. If you want a default, you can add a COALESCE() to the appropriate columns. Here is a skeleton that will need a lot of work.
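The skeleton itself is missing from this copy; a sketch of the pattern against the Personnel example, with a step size of one and a default of zero, might be:

WITH Numbered
AS (SELECT emp_name, dept_name, salary_amt,
           ROW_NUMBER()
             OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS rn
      FROM Personnel)
SELECT N1.dept_name, N1.salary_amt, N1.emp_name,
       COALESCE(N0.salary_amt, 0) AS preceding_salary,
       COALESCE(N2.salary_amt, 0) AS following_salary
  FROM Numbered AS N1
  LEFT OUTER JOIN Numbered AS N0
    ON N0.dept_name = N1.dept_name
   AND N0.rn = N1.rn - 1
  LEFT OUTER JOIN Numbered AS N2
    ON N2.dept_name = N1.dept_name
   AND N2.rn = N1.rn + 1;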

The FIRST_VALUE() and LAST_VALUE() are done in a similar fashion. This skeleton uses the ASC/DESC option in the ROW_NUMBER() to get a value of one in the first and last positions of the original data.
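Again, the skeleton is not in this copy; a sketch that numbers each partition in both directions and picks row number one from each is:

WITH Numbered
AS (SELECT emp_name, dept_name, salary_amt,
           ROW_NUMBER()
             OVER (PARTITION BY dept_name ORDER BY salary_amt DESC) AS rn_desc,
           ROW_NUMBER()
             OVER (PARTITION BY dept_name ORDER BY salary_amt ASC) AS rn_asc
      FROM Personnel)
SELECT N1.emp_name, N1.dept_name, N1.salary_amt,
       (SELECT salary_amt FROM Numbered AS N2
         WHERE N2.dept_name = N1.dept_name AND N2.rn_desc = 1) AS top_salary,
       (SELECT salary_amt FROM Numbered AS N3
         WHERE N3.dept_name = N1.dept_name AND N3.rn_asc = 1) AS bottom_salary
  FROM Numbered AS N1;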

About the author

Joe Celko is one of the most widely read of all writers about SQL, and was the winner of the DBMS Magazine Reader's Choice Award four consecutive years. He is an independent consultant living in Austin, TX. He has taught SQL in the US, UK, the Nordic countries, South America and Africa.
He served 10 years on ANSI/ISO SQL Standards Committee and contributed to the SQL-89 and SQL-92 Standards.
He has written over 800 columns in the computer trade and academic press, mostly dealing with data and databases. He is the author of eight books on SQL for Morgan-Kaufmann, including the best selling SQL FOR SMARTIES.
Joe is a well-known figure on Newsgroups and Forums, and he is famous for his dry wit. He is also interested in Science Fiction.