Importing Text-based data: Workbench

Robyn and Phil return with some fresh ideas about how to import text files into SQL Server without resorting to DTS or SSIS scripting. They go on to show how much can be done in TSQL.



It is hard to estimate the number of unnecessary and unmaintainable SSIS and DTS packages that are written merely to import data from text files into SQL Server. For performance, and for the sanity of the DBA, it is usually better to let SQL Server import the text and pummel it into normalised relational tables, rather than rely on procedural techniques.

There are many ways to read text into SQL Server including, amongst others, BCP, BULK INSERT, OPENROWSET, OPENDATASOURCE, OPENQUERY, or by setting up a linked server.

Normally, for reading in a table from an external source such as a text file, one would use an OpenRowSet, which can be referenced in the FROM clause of a query as though it were a table name. This is a topic that would take too long to tackle in this workbench, though we’ll show you an example of its use for reading in a CSV file. Perhaps one day we’ll do an OpenRowSet Workbench!…

Fast import with the Quirky Update technique

So, you think you’re good at importing text-based data into SQL Server? A friend of ours made that mistake too, recently, when he tried to get a highly paid consultancy job in London. The interviewer guided him to an installation of SQL Server and asked him to import a text file. It had a million rows in it which were rather poorly formatted. As our friend stared at the data, his confident laugh turned to a gurgle of panic, as he suddenly realised that he wasn’t looking at simple columnar data, or delimited stuff, but something else, and something that looked tricky. Our friend realised too late that it was a ‘curve ball’ and floundered embarrassingly. Let’s simulate a few of the million rows just so you can see the problem.
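Here is a simulation of the sort of thing he faced. The data itself is invented for the illustration, as are the table and column names:

```sql
-- Simulate a handful of the million rows. Each line has an item name,
-- a number of sales, a price, and some pesky trailing character, with
-- no fixed column positions and no reliable delimiter.
CREATE TABLE #ImportLine
  (
  LineNumber INT IDENTITY(1,1) PRIMARY KEY,
  Line VARCHAR(255) NOT NULL
  )

INSERT INTO #ImportLine (Line)
          SELECT 'Old Spinning Wheel 23 45.50$'
UNION ALL SELECT 'Flying Scotsman (boxed) 5 120.00*'
UNION ALL SELECT 'Decline and Fall 112 7.99#'
```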

There are, of course, several different approaches to turning this sort of mess into a table. We can BCP or BULK INSERT it into an import table, in order to pummel it into shape. Actually, where record-lengths are short, one can do it even more simply.
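The BULK INSERT route might look like this sketch (the file path is invented for the example):

```sql
-- Pull every line of the text file into a one-column holding table,
-- ready to be pummelled into shape in TSQL.
CREATE TABLE #RawLines (Line VARCHAR(8000) NOT NULL)

BULK INSERT #RawLines
FROM 'C:\data\sales.txt'      -- hypothetical path to the text file
WITH (ROWTERMINATOR = '\n')   -- the default field terminator (tab) never
                              -- occurs, so each whole line lands in one column
```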

And so the answer to the interview question was perfectly simple. With a million rows, one daren’t hang about, so here is a solution that does the trick quickly without a cursor in sight. Can you spot a neater method? Neither Phil nor I can.
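A sketch of that solution, reconstructed to match the explanation that follows. We assume the raw lines sit in an import table #ImportLine with a VARCHAR column called Line (names of our own invention), and that the item name itself contains no digits:

```sql
-- Add working columns to hold, for each row, the position of the
-- first number, the second number, and the trailing character.
ALTER TABLE #ImportLine ADD
  FirstNumber INT, SecondNumber INT, Trailing INT
GO
DECLARE @FirstNumber INT, @SecondNumber INT

-- The 'Quirky Update': assignments happen left to right, row by row,
-- so each variable can be used in the expressions that follow it.
UPDATE #ImportLine
SET @FirstNumber = FirstNumber
      = PATINDEX('% [0-9]%', Line) + 1,
    @SecondNumber = SecondNumber
      = @FirstNumber
        + PATINDEX('% [0-9]%', SUBSTRING(Line, @FirstNumber, 255)),
    Trailing = LEN(Line)
```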

Which gives us, for every row, the position of the first number, the second number, and the trailing character.

Of course, this needs a bit of explanation. We are using the ‘Quirky Update’ syntax, which SQL Server inherited from Sybase, to update some special columns in the import table that record the column positions of the various pieces of data in each row, since they will be different in every row.

The first column is terminated by the first number (the number of sales), so we use PATINDEX to tell us where that is. Then we have to look for the next number. The trouble with PATINDEX is that one cannot specify the start (or end) position of the search, so you have to use SUBSTRING for that. Finally, we need to find that pesky character at the end.

Now that we have the column positions, we can parse it all neatly with a SELECT statement.
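Assuming the position columns FirstNumber, SecondNumber and Trailing described above have been filled in (the output column names here are ours), the parse is just SUBSTRING arithmetic:

```sql
-- Slice each line up using the positions found by the update.
SELECT ItemName       = RTRIM(LEFT(Line, FirstNumber - 1)),
       NumberOfSales  = CAST(SUBSTRING(Line, FirstNumber,
                               SecondNumber - FirstNumber) AS INT),
       Price          = CAST(SUBSTRING(Line, SecondNumber,
                               Trailing - SecondNumber) AS MONEY),
       PeskyCharacter = SUBSTRING(Line, Trailing, 1)
FROM #ImportLine
```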

You’ll see that it would work even with spurious characters in the way such as [ ], and so on.

Sometimes, one gets strange delimiters in data. Here is an example of how one might import a file from a monitoring system.
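As a sketch (the delimiters, path and column names are all invented for the example), BULK INSERT’s FIELDTERMINATOR and ROWTERMINATOR options will cope with most oddities:

```sql
-- A monitoring system that separates fields with '|~|' and ends
-- each record with '##' followed by a newline.
CREATE TABLE #MonitorLog
  (
  EventTime VARCHAR(30),
  Sensor    VARCHAR(30),
  Reading   VARCHAR(30)
  )

BULK INSERT #MonitorLog
FROM 'C:\data\monitor.log'    -- hypothetical path
WITH (FIELDTERMINATOR = '|~|', ROWTERMINATOR = '##\n')
```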

CSV Importing: Comma-delimited and Comedy-Limited

CSV, if done properly, is actually a very good way of representing a table as an ASCII file, even though its use has now been overtaken by XML. CSV is different from a simple comma-delimited format. The simple use of commas as field separators is often called ‘Comedy Limited’, because it is so incredibly useless and limiting.

Real CSV allows commas or linebreaks in fields: well, anything actually. It is described in ‘The Comma Separated Value (CSV) File Format’ and ‘CSV Files’.

BCP is not a good way of reading CSV files; unless you use a format file, it will only do ‘comedy-limited’ files. A much better method is to use the MSDASQL provider with the ODBC text driver, which does it properly.
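Something like this sketch (the file name and directory are invented, and Ad Hoc Distributed Queries must be enabled on the server):

```sql
-- Read prices.csv as a proper CSV file, quoted fields, embedded commas
-- and all, via the MSDASQL provider and the ODBC text driver.
SELECT *
FROM OPENROWSET('MSDASQL',
       'Driver={Microsoft Text Driver (*.txt; *.csv)};
        DEFAULTDIR=C:\data\;Extensions=CSV;',
       'SELECT * FROM prices.csv')
```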

This assumes that the first row is the header, so you may need to add a first row.

The ODBC TEXT driver will not output a table as a CSV file, unfortunately. The reason for this is mysterious. It would have been very useful.

Sometimes, for a special purpose where a simple method like this won’t do, you have to develop a TSQL way. Sometimes, for example, you will find that records are separated by ‘[]’ markers, or that comment or header lines are inserted with a prepended ‘#’. Sometimes quotes are ‘escaped’ by a ‘\’ character.

The first stage is to read the entire file into a SQL Server variable. Reading text into a VARCHAR(MAX) is very easy in SQL Server 2005. (For other ways, in SQL Server 7 and 2000, see ‘Reading and Writing Files in SQL Server using T-SQL’.)

For this test, we’ll put the CSV file in a VARCHAR(MAX) variable.
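In SQL Server 2005, a single OPENROWSET in BULK mode does it in one gulp (the path, again, is just for illustration):

```sql
DECLARE @String VARCHAR(MAX)

-- Read the whole file into the variable in one go.
SELECT @String = BulkColumn
FROM OPENROWSET(BULK 'C:\data\prices.csv', SINGLE_CLOB) AS TheFile

SELECT LEN(@String)   -- just to show that something arrived
```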


Unrotating a CSV Pivot-table on import

We’ll end with one of Phil’s real-life routines, used to get daily exchange-rate information for a multi-currency ecommerce site. It fetches a text file in Comedy-limited format (comma-separated) from the Bank of Canada’s web site. There are several comment lines starting with a ‘#’ character, and the first non-comment line contains the headings.

And we want to ‘unpivot’ it back into a conventional table, with one row per date and currency.

You’ll see that it is simple to start an archive of daily currency fluctuations with something like this:

To start with, we will need to install CURL on the server. CURL is extraordinarily useful as a way of getting text into SQL Server from awkward places such as secure FTP sites, or simply from internet sites. Then we will need a couple of utility functions. You’ll see how easy it is to ‘unpivot’ a pivot table back into a data table!
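A sketch of the two ends of the process. The URL is illustrative, xp_cmdshell must be enabled, and the currency columns and rates are invented for the demonstration:

```sql
-- Fetch the rates file from the web into a holding table, one row per line.
CREATE TABLE #Raw (Line VARCHAR(8000))
INSERT INTO #Raw
  EXEC master..xp_cmdshell
    'curl -s http://www.bankofcanada.ca/stat/fx-csv/fx-seven-day.csv'

-- Once a data row has been split into one column per currency,
-- UNPIVOT turns it back into one row per (date, currency) pair.
SELECT RateDate, Currency, Rate
FROM (SELECT RateDate = '2007-09-03',
             USD = 1.0562, EUR = 1.4382, GBP = 2.1267) AS Pivoted
UNPIVOT (Rate FOR Currency IN (USD, EUR, GBP)) AS Unpivoted
```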

(This was originally in one of Phil’s blogs.)

So, we hope we’ve given you a few ideas on how to deal with importing text into a database without resorting to a whole lot of scripting. We’ve only tackled a few examples and steered clear of thorny topics such as BCP, DTS and SSIS. We’d be interested to hear of any sort of text-based format that you feel would be too hard for TSQL to deal with.

Further Reading