Where in the Application Should Data Validation be Done?

Where in the application should the business logic of data-validation checks be implemented? The advantages of a layered approach to validation that includes database constraints would seem to outweigh the disadvantages. William Sisson explains some interesting issues.


As a database designer, I see the database as being the ultimate arbiter of the validity of data. You may well disagree with this position, but I hope I can persuade you of its virtues – or at least that you might find the discussion interesting.

This is not to say that all validation and error handling should be done in the database. Database-level validation is necessary, but not sufficient, for building effective and usable systems.

Of course, when you are updating data external to your organization (say, placing an order over a web service), then clearly the errors raised by the web service must be handled in the application.

More significantly, if we want to build usable systems then we have to put validation in the user interface. We tend to think of validation as something that constrains or inhibits the user; but used effectively it can help users to avoid making mistakes in the first place, and provide useful advice for correcting errors that occur.

If we rely solely on the database (or the middleware, for that matter) to apply validation, then the user won’t know that a problem has occurred until they send the data over the network. This is not just user-unfriendly, but may have performance implications too. The network is, after all, one of the slowest components of the system, and it is shared by everyone; the less traffic, the better the response times for all.

Duplicate Code

One implication of having the validation in both the database and the user interface (and possibly the middleware too) is that the same logic is expressed in more than one place.

In the database world, the logical duplication of the same data is generally considered to be undesirable. Many of us have also been taught that one should avoid duplicating logic and instead re-use existing routines. Not everyone agrees with this position, however. Some safety-critical systems, such as aircraft control systems, are based on N-Version Programming. In N-Version Programming, several teams of programmers write code independently from the same specification; all versions are deployed, and a decision algorithm checks in real time whether the different versions produce the same result, signaling an error if they disagree.

The method is similar to writing automated tests, where you write a second program that checks the result of the first and raises an error if there is a discrepancy. Of course, neither this method nor N-Version Programming can detect when all the versions are in error. The decision algorithm itself could also have bugs in it; so although in some ways I find this approach to safety-critical systems reassuring, it is far from foolproof (of course, I am not going to think about this the next time I step onto an airliner).

That aside, if we are going to have duplicate code then we do have to be sure that everyone is working from the same specification and that this specification is based on the real business requirements (which is actually, I think, the very hardest task in software development).

With multiple layers of validation we also face the challenge that the code will be written in multiple programming languages using different programming methods: declarative in the database, object-oriented in the middleware, and either procedural or functional in the user interface (depending on the style in which you choose to write your JavaScript).

We can infer from the methods underlying N-Version programming that code written to the same specification can and does lead to discrepancies. After all, if all versions of the code produced the same results then there would be no need for the decision algorithm. We therefore need to look at what the implications of these discrepancies in the logic might be.

For the sake of illustration, we will assume a system with validation repeated in the user interface, the middleware and the database.

There are four possible conditions:

  1. The input data is either valid or invalid.
  2. The data is accepted or rejected in the user interface.
  3. The data is accepted or rejected in the middleware.
  4. The data is accepted or rejected in the database.

The possible outcomes are as follows:

  1. The result is correct.
  2. The result is in error under all circumstances.
  3. The result is in error for a middleware program.
  4. The result of direct input into the database (for example by a bulk insert) will be in error.
  5. The result is in error, but a workaround is possible.
  6. The user receives a different error message from the one they would usually expect.
  7. The programmer receives a different error message from the one they would usually expect.

The following table shows the possible values for the conditions and the corresponding outcomes:

| Data valid | Accepted by UI | Accepted by middleware | Accepted by database | Outcome |
|---|---|---|---|---|
| Yes | Yes | Yes | Yes | Correct |
| Yes | Yes | Yes | No  | Error under all circumstances |
| Yes | Yes | No  | Yes | Error; workaround possible |
| Yes | Yes | No  | No  | Error under all circumstances |
| Yes | No  | Yes | Yes | Error; workaround possible |
| Yes | No  | Yes | No  | Error under all circumstances |
| Yes | No  | No  | Yes | Error; workaround possible |
| Yes | No  | No  | No  | Error under all circumstances |
| No  | Yes | Yes | Yes | Error under all circumstances |
| No  | Yes | Yes | No  | User and programmer surprised by database error |
| No  | Yes | No  | Yes | Bulk insert error; user surprised by middleware error |
| No  | Yes | No  | No  | User surprised by middleware error |
| No  | No  | Yes | Yes | Middleware error; bulk insert error |
| No  | No  | Yes | No  | Programmer surprised by database error |
| No  | No  | No  | Yes | Bulk insert error only |
| No  | No  | No  | No  | Correct |
Clearly, any case of ‘false positives’ where valid data is rejected by the database is an error. If invalid data is accepted by all layers of validation then this is also clearly an error.

In cases where valid data is mistakenly rejected by the user interface or the middleware, then there is always the workaround of changing the data directly in the database. Usually a business manager will chase around until they find a helpful but somewhat unwilling DBA who can be persuaded to load the data from an Excel spreadsheet.

If invalid data is accepted by the user interface or the middleware, then the user or programmer will receive a database error message, which might be different from the error message that they would usually expect.

Database error messages are generally considered to be unfriendly by users and programmers, but this need not be the case. The unfriendliness is partly a consequence of a lack of sufficient care and attention by the database designer, but also the result of certain inadequacies in the error handling of current SQL-DBMS products. If the designer fails to name constraints, then SQL Server builds the message around an auto-generated name, producing something like “Violation of UNIQUE KEY constraint 'UQ__SalesLea__3214EC07…'. Cannot insert duplicate key in object 'dbo.SalesLead'.”
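To make this concrete, here is a small sketch using SQLite via Python’s sqlite3 module (chosen only because it runs anywhere; a full SQL-DBMS behaves analogously). The sales_lead table is a hypothetical example. With no constraint name supplied, the error exposes physical schema details rather than the business rule that was broken:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The UNIQUE constraint is left unnamed, as in the scenario described above.
conn.execute("CREATE TABLE sales_lead (lead_id INTEGER UNIQUE)")
conn.execute("INSERT INTO sales_lead VALUES (42)")

err = None
try:
    conn.execute("INSERT INTO sales_lead VALUES (42)")
except sqlite3.IntegrityError as e:
    err = e

# The raw message talks about tables and columns, not the business rule,
# e.g. "UNIQUE constraint failed: sales_lead.lead_id"
print(err)
```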

Ideally, the name of the constraint should be usable as an error message; in this case we might call the constraint “Sales Lead ID must be unique”. Of course, we still face the challenge of parsing the error message to extract the constraint name (a task which is made more difficult in a multilingual environment). Furthermore, if you are working with Oracle then you have only 30 characters to play with in order to create a meaningful constraint name, which probably isn’t enough in most circumstances.
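One workable pattern is to treat the constraint name as a key into a lookup of user-friendly (and translatable) messages. A sketch, again using SQLite for convenience: the constraint name end_after_start and the FRIENDLY lookup are hypothetical, and recent SQLite versions include a named CHECK constraint’s name in the error text, which is what makes the parsing possible.

```python
import re
import sqlite3

# Hypothetical lookup from constraint names to user-facing messages;
# in a multilingual system this would be a translation table.
FRIENDLY = {
    "end_after_start": "The end date of a contract must be after the start date.",
}

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contract (
        start_date TEXT,
        end_date   TEXT,
        CONSTRAINT end_after_start CHECK (end_date > start_date)
    )
""")

err, msg = None, ""
try:
    conn.execute("INSERT INTO contract VALUES ('2024-05-01', '2024-04-01')")
except sqlite3.IntegrityError as e:
    err = e
    # Recent SQLite reports "CHECK constraint failed: end_after_start";
    # parse out the name, falling back to the raw message if it is unknown.
    m = re.search(r"CHECK constraint failed: (\w+)", str(e))
    msg = FRIENDLY.get(m.group(1), str(e)) if m else str(e)

print(msg)
```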

There are a number of measures that SQL-DBMS vendors could undertake to improve constraint handling and error handling, but there isn’t space to go into that here.

Some people might argue that it doesn’t really matter if invalid data is accepted by the database, provided the validation works correctly in the user interface and middleware. You just need a rule that the database may only be updated through particular routines in the middleware. The problem with this argument is that you are now dependent on a manual rule to enforce the validation. Every developer must remember to update the database only through the designated routines. A developer might break the rule through ignorance, negligence or rebelliousness. It is also worth remembering that the database may underlie many separate systems and development groups, not all of which may have the same standards.

In any case, situations always arise where data has to be loaded directly into the database. Usually this needs to be done in a hurry, as there is no time to enter the data manually through the user interface or to write a program to load it through the standard middleware routines. There will probably also be insufficient time to check the consistency of the data by writing reports.

So I would argue that validation at the database level is absolutely indispensable.

The Specification

When writing the specification, we need to bridge the gap between those business requirements that are written in English (or any other natural language) and the validation rules expressed in a strictly formal language that the computer can “understand”.

As we have seen, N-Version Programming makes the assumption that different programs written by different teams to the same specification will necessarily produce different results. Is this a reasonable assumption to make? Could one not sensibly argue that two programs written to the same specification and tested against that specification should always produce the same result?

Of course, a lot depends upon the accuracy and precision of the specification, and therefore how open it is to different interpretations. In situations where extreme accuracy is required (such as in the design of firmware in chips) the specification needs to be defined very formally, leaving little room for ambiguity; [1] gives an example of such a method. Even if most of us were to take the trouble to understand the mathematics (which is challenging for a non-mathematician like myself, even though the mathematics involved is not really very difficult), it is unlikely that we would garner management support for adopting such methods in most commercial environments.

I think a middle way may be possible, and I want to propose such an approach here. I make no claims that this is a definitive solution, and I have deliberately concentrated on the database and user interface aspects, as these are the components that I understand best. Perhaps someone who is more expert in the middleware area could advise as to whether this method might be feasible there too.

A Validation Framework

This isn’t a software framework (you might be relieved to hear) but an intellectual framework for examining how and where validation should be applied. It is based on relational principles, but it is also useful for analysing how we can apply validation in the user interface to make the user’s experience more comfortable.

Validation Levels

  • Type – The validation restricts what value a column may take by type (date, integer, character etc.)
  • Column – The validation restricts what value a particular column may take. For example, the agreed price for a contract must be between €5,000 and €10,000.
  • Row – The validation defines a dependency between two columns in the same row. For example, the end date of a contract must be after the start date.
  • Table – The validation defines a dependency between different rows in the table. The most obvious example is a primary key. The primary key columns must always be unique within the table.
  • Database – A validation that involves more than one table. A foreign key constraint is an example of such a validation.
  • Transition constraints – All the validations mentioned previously operate on a static state of the database. A transition constraint defines how a value is allowed to change. For example, a status column may be changed from “in progress” to “finished” but not directly from “to do” to “finished”.
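These levels can be sketched with concrete constraints. The following minimal illustration uses SQLite (via Python’s sqlite3 module) purely because it is easy to run; the customer and contract tables, the price range and the status trigger are hypothetical examples, and a production system on SQL Server or Oracle would express the same rules in its own dialect.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite does not enforce FKs by default

conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY                           -- table level
    );
    CREATE TABLE contract (
        contract_id INTEGER PRIMARY KEY,                          -- table level
        customer_id INTEGER REFERENCES customer,                  -- database level
        price       NUMERIC CHECK (price BETWEEN 5000 AND 10000), -- column level
        start_date  TEXT,
        end_date    TEXT,
        status      TEXT DEFAULT 'to do',
        CHECK (end_date > start_date)                             -- row level
    );
    -- Transition constraint: a status may not jump straight from
    -- 'to do' to 'finished'.
    CREATE TRIGGER contract_status_transition
    BEFORE UPDATE OF status ON contract
    WHEN OLD.status = 'to do' AND NEW.status = 'finished'
    BEGIN
        SELECT RAISE(ABORT, 'status may not go directly from to do to finished');
    END;
""")
conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO contract VALUES (10, 1, 7500, '2024-01-01', '2024-06-30', 'to do')")

def violation(sql):
    """Run a statement expected to break a constraint; return the error text."""
    try:
        conn.execute(sql)
        return None
    except sqlite3.Error as e:
        return str(e)

col_err   = violation("INSERT INTO contract VALUES (11, 1, 20000, '2024-01-01', '2024-06-30', 'to do')")
row_err   = violation("INSERT INTO contract VALUES (12, 1, 7500, '2024-06-30', '2024-01-01', 'to do')")
pk_err    = violation("INSERT INTO contract VALUES (10, 1, 7500, '2024-01-01', '2024-06-30', 'to do')")
fk_err    = violation("INSERT INTO contract VALUES (13, 99, 7500, '2024-01-01', '2024-06-30', 'to do')")
trans_err = violation("UPDATE contract SET status = 'finished' WHERE contract_id = 10")
```

Each of the five attempts violates exactly one of the levels above (column, row, table, database and transition respectively), and each is rejected by the database regardless of what the upper layers did.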

Mapping Database Validation to the User Interface

| Validation Level | Database Implementation | User Interface Implementation |
|---|---|---|
| Type | The type of a column (integer, varchar etc.). | An edit mask that will only allow the user to enter valid values. For example, the edit mask for a numeric(5, 2) column will only allow numbers with three digits to the left of the decimal point and two to the right. |
| Column | Check constraints defined on the column. | This validation can be handled either by an edit mask or by validation that is called when the user moves away from the field, depending on the type of constraint. |
| Row | Check constraints, triggers. | The user interface can suggest suitable values to the user. If the end date of a contract must be after the start date then the interface could suggest the date following the entered start date for the end date. |
| Table | Primary key constraints, triggers. | Generally this type of constraint requires that the data be sent to the database for validation, so the possibilities for client-side validation are limited or non-existent. |
| Database | Foreign key constraints, triggers. | Drop-down lists can restrict the possible values that can be entered in a field, based on foreign key restrictions. In addition, as the user enters input they can be prompted with matching primary key values from the referenced table. |
| Transition constraints | Triggers. | The user interface can restrict the possible values the user can enter. If a task in a workflow system is currently in status “to do” then only the “in progress” or “canceled” statuses will be available for user selection. |
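The numeric(5, 2) edit mask mentioned above can be expressed as a simple pattern. Here is a sketch in Python, for consistency with the other examples, although in a browser-based interface the same expression would normally live in JavaScript:

```python
import re

# numeric(5, 2): at most three digits before the decimal point, two after,
# with an optional leading sign.
NUMERIC_5_2 = re.compile(r"-?\d{1,3}(\.\d{1,2})?")

def accepts(text: str) -> bool:
    """Return True if the text could be typed into a numeric(5, 2) field."""
    return NUMERIC_5_2.fullmatch(text) is not None
```

A mask like this lets the interface reject "1234.5" or "12.345" as the user types, long before the value reaches the database.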

It is also worth noting that the database server can assist the user in other ways. For example, foreign key constraints can be defined with cascade options, so that instead of a delete or update violating the constraint, the database will automatically execute compensating actions to preserve integrity. Likewise, inserts that violate primary key constraints could, under some circumstances, automatically be treated as updates.
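Both behaviours can be sketched briefly, again using SQLite for convenience (the customer and contract tables are hypothetical, and the upsert syntax shown requires SQLite 3.24 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY);
    CREATE TABLE contract (
        contract_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer ON DELETE CASCADE,
        price       NUMERIC
    );
""")
conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO contract VALUES (10, 1, 7500)")

# Cascade: deleting the customer removes dependent contracts instead of
# raising a foreign key violation.
conn.execute("DELETE FROM customer WHERE customer_id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM contract").fetchone()[0]

# Upsert: a primary key collision becomes an update rather than an error.
conn.execute("INSERT INTO customer VALUES (2)")
conn.execute("INSERT INTO contract VALUES (20, 2, 5000)")
conn.execute("""
    INSERT INTO contract VALUES (20, 2, 9000)
    ON CONFLICT (contract_id) DO UPDATE SET price = excluded.price
""")
price = conn.execute("SELECT price FROM contract WHERE contract_id = 20").fetchone()[0]

print(remaining, price)
```

After the cascade no contracts remain, and after the upsert the existing contract’s price has been updated in place.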


We have seen how the need to write the same logic in at least two different places leads us to some challenges for the specification and testing of the software. I have briefly presented a proposed systematic (though not strictly formal) approach to the problem. You can read more about the layered approach to database validation in references [2] and [3].

Is there any way that we could remove the duplication of the logic completely? I have some thoughts about this, but you may want to think about it yourselves first.


  1. Leslie Lamport, “Specifying Systems”: http://research.microsoft.com/en-us/um/people/lamport/tla/book-02-08-08.pdf
  2. Fabian Pascal, “Practical Issues in Database Management”.
  3. Toon Koppelaars and Lex de Haan, “Applied Mathematics for Database Professionals”.