Consuming JSON Strings in SQL Server

It has always seemed strange to Phil that SQL Server has such complete support for XML, yet is completely devoid of any support for JSON. In the end, he was forced, by a website project, into doing something about it. The result is this article, an iconoclastic romp around the representation of hierarchical structures, and some code to get you started.

Last updated 1st July 2019

Articles by Phil Factor about JSON and SQL Server:

  1. Consuming JSON Strings in SQL Server (Nov 2012)
  2. SQL Server JSON to Table and Table to JSON (March 2013)
  3. Producing JSON Documents From SQL Server Queries via TSQL (May 2014)
  4. Consuming hierarchical JSON documents in SQL Server using OpenJSON (Sept 2017)
  5. Importing JSON data from Web Services and Applications into SQL Server (October 2017)

“The best thing about XML is what it shares with JSON, being human readable. That turns out to be important, not because people should be reading it, because we shouldn’t, but because it avoids interoperability problems caused by fussy binary encoding issues.

Beyond that, there is not much to like. It is not very good as a data format. And it is not very good as a document format. If it were a good document format, then wikis would use it.”

Doug Crockford March 2010

This article describes a TSQL JSON parser and its evil twin, a JSON outputter, and provides the source. It is also designed to illustrate a number of string manipulation techniques in TSQL. With it you can do things like this to extract the data from a JSON document:
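A minimal sketch of the sort of call involved, assuming the parser is exposed as a table-valued function named dbo.parseJSON (an assumed name; only ToJSON and ToXML are named explicitly later on):

-- A minimal sketch: shred a JSON document into rows.
-- dbo.parseJSON is an assumed name for the parser described in this article.
DECLARE @JSON NVARCHAR(MAX) =
  N'{"name":"Phil Factor","skills":["TSQL","JSON"],"yearsActive":20}'
SELECT * FROM dbo.parseJSON(@JSON)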

And get:

[Figure: the parser’s output, an adjacency-list table of the JSON elements]

…or you can do the round trip:
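A hedged sketch of that round trip, assuming the dbo.parseJSON function above together with the JSONHierarchy table type and the ToJSON function described later:

-- Round trip: JSON string -> adjacency-list hierarchy -> JSON string.
-- dbo.parseJSON and the dbo.JSONHierarchy type are assumed names.
DECLARE @JSON NVARCHAR(MAX) = N'{"name":"Phil","skills":["TSQL","JSON"]}'
DECLARE @hierarchy dbo.JSONHierarchy
INSERT INTO @hierarchy
  SELECT * FROM dbo.parseJSON(@JSON)   -- column order must match the type
SELECT dbo.ToJSON(@hierarchy)          -- regenerates the JSON document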

To get the original JSON document back, reformatted.

Background

TSQL isn’t really designed for doing complex string parsing, particularly where strings represent nested data structures such as XML, JSON, YAML, or XHTML. 

You can do it, but it is not a pretty sight; and why would you ever want to do it anyway? Surely, if anything was meant for the ‘application layer’ in C# or VB.net, then this is it. ‘Oh yes’, the application thought police will chime in, ‘this is far better done in the application or with a CLR.’ Not necessarily.

Sometimes, you just need to do something inappropriate in TSQL. (Note: you can now do this rather more easily using SQL Server 2016’s built-in JSON support; see ‘Consuming hierarchical JSON documents in SQL Server using OpenJSON’, which I wrote more recently.)

There are a whole lot of reasons why this might happen to you. It could be that your DBA doesn’t allow a CLR, for example, or you lack the necessary skills with procedural code. Sometimes, there isn’t any application, or you want to run code unobtrusively across databases or servers.

I needed to interpret or ‘shred’ JSON data. JSON is one of the most popular lightweight data-interchange formats, and is probably the best choice for transferring object data from a web page. It is, in fact, executable JavaScript: it is very quick to code in the browser in order to dump the contents of a JavaScript object, and lightning-fast to populate the browser object from the database, since you are passing it executable code (which you need to parse first for security reasons, because passing executable code around is potentially very risky). AJAX can use JSON rather than XML, so you have the opportunity of a much simpler route for data between database and browser, with less opportunity for error.

The conventional way of dealing with data like this is to let a separate business layer parse a JSON ‘document’ into some tree structure and then update the database by making a series of calls to it. This is fine, but it gets more complicated if you need to ensure that the updates to the database are wrapped in a single transaction, so that if anything goes wrong the whole operation can be rolled back. This is why a CLR or TSQL approach has advantages.

“Sometimes, you just
need to do something
inappropriate in TSQL…”

I wrote the parser as a prototype because it was the quickest way to determine what was involved in the process, so that I could then re-write it as a CLR in a .NET language. It takes a JSON string and produces a result in the form of an adjacency-list representation of that hierarchy. In the end, the code did what I wanted with adequate performance (it reads a JSON file of 540 name/value pairs and creates the SQL hierarchy table in four seconds), so I didn’t bother with the added complexity of maintaining a CLR routine. In order to test more thoroughly what I’d done, I wrote a JSON generator that uses the same adjacency list, so you can now import and export data via JSON!

Data-interchange formats such as JSON and XML all represent object data as hierarchies. Although this looks very different to the entity-relational model, it isn’t; it is rather a different perspective on the same model. The first trick is to represent it as an adjacency-list hierarchy in a table, and then use the contents of this table to update the database. This adjacency list is really the database equivalent of any of the nested data structures that are used for the interchange of serialized information with the application, and can be used to create XML, OS X property lists, Python nested structures or YAML as easily as JSON.

Adjacency-list tables have the same structure whatever the data in them. This means that you can define a single table-valued type and pass data structures around between stored procedures. However, they are best held at arm’s length from the data, since they are not relational tables, but something more like the dreaded EAV (Entity-Attribute-Value) tables. Converting the data from its hierarchical table form will be different for each application, but is easy with a CTE, as sketched below. You can, alternatively, convert the hierarchical table into XML and interrogate that with XQuery.
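As an illustration of that conversion step, a CTE over the hierarchy table can pivot the name/value pairs of each object into a relational row. This is only a sketch: the column names (parent_ID, NAME, StringValue, ValueType) anticipate the table type shown later and are assumptions.

-- A hedged sketch: flatten the adjacency list into one row per object.
-- @hierarchy is assumed to be a table (variable) of the hierarchy type.
WITH LeafValues AS
  (SELECT parent_ID AS containing_object, NAME, StringValue
     FROM @hierarchy
     WHERE ValueType NOT IN ('object', 'array'))   -- leaf name/value pairs only
SELECT containing_object,
       MAX(CASE WHEN NAME = 'name'  THEN StringValue END) AS [name],
       MAX(CASE WHEN NAME = 'email' THEN StringValue END) AS [email]
  FROM LeafValues
 GROUP BY containing_object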

JSON format

JSON is designed to be as lightweight as possible and so it has only two structures. The first, delimited by curly brackets, is a collection of name/value pairs, separated by commas, with the name followed by a colon. This structure is generally implemented at the application level as an object, record, struct, dictionary, hash table, keyed list, or associative array. The other structure is an ordered list of values, separated by commas. This is usually manifested as an array, vector, list, or sequence.
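For example, this small, made-up document uses both structures: an object whose name/value pairs include a value that is an array.

-- Both JSON structures in one (hypothetical) document, held in a TSQL variable:
-- an object of name/value pairs, one of whose values is an ordered list (array).
DECLARE @example NVARCHAR(MAX) = N'{
   "title"   : "Consuming JSON Strings in SQL Server",
   "year"    : 2012,
   "authors" : ["Phil Factor"]
   }'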

“Using recursion in TSQL is
like Sumo Wrestlers doing Ballet.
It is possible but not pretty.”

The first snag for TSQL is that curly or square brackets are not ‘escaped’ within a string, so there is no simple way of shredding a JSON ‘document’: it is difficult to differentiate a bracket used as the delimiter of an array or structure from one that is merely part of a string. Also, interpreting a JSON string as a SQL string isn’t entirely straightforward, since hex codes can be embedded anywhere to represent complex Unicode characters, and all the old C-style escaped characters are used. The second complication is that, unlike YAML, the datatypes of values can’t be explicitly declared: you have to sniff them out by applying the rules from the JSON specification.

Obviously, structures can be embedded in structures, so recursion is a natural way of making life easy. Using recursion in TSQL is like Sumo Wrestlers doing Ballet. It is possible but not pretty.

The implementation

Although the code for the JSON Parser/Shredder will run in SQL Server 2005, and even in SQL Server 2000 (with some modifications required), I couldn’t resist using a TVP (Table Valued Parameter) to pass a hierarchical table to the function, ToJSON, that produces a JSON ‘document’. Writing a SQL Server 2005 version should not be too hard.

First the function replaces all strings with tokens of the form @Stringxx, where xx is the foreign key of the table variable where the strings are held. This takes them, and their potentially difficult embedded brackets, out of the way. In JSON, names are always strings, just as string values are.
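The essence of that first pass, sketched here rather than lifted from the real parser, is to find each quoted string, stash it in a table variable, and stuff a token back into its place:

-- A simplified sketch of the tokenising pass (not the production code):
-- each quoted string is cut out and replaced by a token such as @String3.
DECLARE @Strings TABLE (String_ID INT IDENTITY(1, 1), StringValue NVARCHAR(MAX))
DECLARE @JSON NVARCHAR(MAX) = N'{"name":"Phil","job":"author"}'
DECLARE @start INT, @end INT
SELECT @start = CHARINDEX(N'"', @JSON)
WHILE @start > 0
  BEGIN
    SELECT @end = CHARINDEX(N'"', @JSON, @start + 1)  -- naive: ignores \" escapes
    INSERT INTO @Strings (StringValue)
      SELECT SUBSTRING(@JSON, @start + 1, @end - @start - 1)
    SELECT @JSON = STUFF(@JSON, @start, @end - @start + 1,
                         N'@String' + CAST(SCOPE_IDENTITY() AS NVARCHAR(10)))
    SELECT @start = CHARINDEX(N'"', @JSON)
  END
SELECT @JSON   -- gives {@String1:@String2,@String3:@String4}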

Then, the routine iteratively finds the next structure that has no structure contained within it (and is, by definition, a leaf structure), and parses it, replacing it with an object token of the form ‘@Objectxxx’ or ‘@arrayxxx’, where xxx is the object id assigned to it. The values, or name/value pairs, are retrieved from the string table and stored in the hierarchy table. Gradually, the JSON document is eaten until there is just a single root object left.

The JSON outputter is a great deal simpler, since one can be surer of the input, but essentially it does the reverse process, working from the root to the leaves. The only complication is working out the indent of the formatted output string.

In the implementation, you’ll see a fairly heavy use of PATINDEX. This uses a poor man’s RegEx, a starving man’s RegEx. However, it is all we have, and can be pressed into service by chopping the string it is searching (if only it had an optional third parameter like CHARINDEX that specified the index of the start position of the search!). The STUFF function is also a godsend for this sort of string-manipulation work.
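To illustrate the chopping trick (a sketch, not code from the parser): to start a PATINDEX search part-way through a string, search the remainder of the string and add the offset back on afterwards.

-- Emulating a 'start position' argument for PATINDEX by searching only
-- the tail of the string and adding the offset back on afterwards.
DECLARE @source NVARCHAR(MAX) = N'{"a":1,"b":22,"c":333}'
DECLARE @startAt INT = 8   -- begin the search after the first value
DECLARE @hit INT
SELECT @hit = PATINDEX(N'%[0-9]%', SUBSTRING(@source, @startAt, LEN(@source)))
SELECT CASE WHEN @hit = 0 THEN 0
            ELSE @hit + @startAt - 1 END AS PositionInWholeString
-- returns 12, the position of the first digit of 22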

So once we have a hierarchy, we can pass it to a stored procedure. As the output is an adjacency list, it should be easy to access the data. You might find it handy to create a table type if you are using SQL Server 2008. Here is what I use. (Note that if you drop a Table Valued Parameter type, you will have to drop any dependent functions or procedures first, and re-create them afterwards).
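The column names below are assumptions based on the description of the parser’s output: each element has a surrogate key, a parent, the object or array it belongs to, a name, a value held as a string, and the datatype that was sniffed out for that value. Something along these lines:

-- A hedged sketch of the table type; the column names are assumptions.
CREATE TYPE dbo.JSONHierarchy AS TABLE
  (
  element_id  INT PRIMARY KEY,        -- surrogate key of the element
  parent_ID   INT,                    -- the element that contains it
  Object_ID   INT,                    -- the object or array it belongs to
  NAME        NVARCHAR(2000),         -- the name half of the name/value pair
  StringValue NVARCHAR(MAX) NOT NULL, -- the value, always held as a string
  ValueType   VARCHAR(10) NOT NULL    -- e.g. 'string', 'int', 'boolean', 'object', 'array', 'null'
  )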

ToJSON. A function that creates JSON Documents

Firstly, we need a simple utility function:
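A plausible stand-in for that utility (the name dbo.JSONEscaped and its behaviour are assumptions, not necessarily the original routine) is a scalar function that escapes a string value so that it can be embedded safely in a JSON document:

-- A hedged stand-in for the utility function: escape the characters that
-- JSON requires to be escaped inside a quoted string.
CREATE FUNCTION dbo.JSONEscaped (@Unescaped NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
AS
BEGIN
  RETURN REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
           @Unescaped,
           N'\', N'\\'),        -- backslash first, so later escapes aren't doubled
           N'"', N'\"'),
           NCHAR(13), N'\r'),
           NCHAR(10), N'\n'),
           NCHAR(9),  N'\t'),
           NCHAR(8),  N'\b')
END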

And now, the function that takes a JSON Hierarchy table and converts it to a JSON string.
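The body of the function is too long to sketch faithfully here, but its header, given the table type above, would be along these lines:

-- The outline of the outputter: a scalar function taking the hierarchy
-- as a read-only table-valued parameter. The body is deliberately elided.
CREATE FUNCTION dbo.ToJSON (@Hierarchy dbo.JSONHierarchy READONLY)
RETURNS NVARCHAR(MAX)
AS
BEGIN
  DECLARE @JSON NVARCHAR(MAX)
  /* ...work from the root object towards the leaves, assembling each
     object or array from its already-rendered children... */
  RETURN @JSON
END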

ToXML. A function that creates XML

The function that converts a hierarchy table to XML gives us a JSON-to-XML converter. It is surprisingly similar to the previous function.

This provides you with the means of converting a JSON string into XML.
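For example, chaining the (assumed) dbo.parseJSON function into ToXML via the table type:

-- JSON in, XML out: shred the JSON into the hierarchy table and then
-- hand that hierarchy to the ToXML function.
DECLARE @hierarchy dbo.JSONHierarchy
INSERT INTO @hierarchy
  SELECT * FROM dbo.parseJSON(N'{"name":"Phil","skills":["TSQL","JSON"]}')
SELECT dbo.ToXML(@hierarchy)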

This gives the result…

Wrap-up

The so-called ‘impedance mismatch’ between applications and databases is, I reckon, an illusion. The object-oriented nested data-structures that we receive from applications are, if the developer has understood the data correctly, merely a perspective, from a particular entity, on the relationships it is involved with. Whereas it is easy to shred XML documents to get the data from them to update the database, it has been trickier with other formats such as JSON. By using techniques like this, it should be possible to liberate the application, or website, programmer from having to do the mapping from the object model to the relational one, and from spraying the database with ad-hoc TSQL that uses the base tables or updateable views. If the database can be provided with the JSON, or the table-valued parameter, then there is a better chance of maintaining full transactional integrity for the more complex updates.

The database developer already has the tools to do the work with XML, but why not with the simpler, and more practical, JSON? I hope these two routines get you started with experimenting with this.

Interesting JSON-related articles and sites

Since writing this article, Phil has also developed a CSV parser and outputter, and an XML parser (Producing JSON Documents from SQL Server queries via TSQL)