An interview with Kent Tegels
by Douglas Reilly
Kent Tegels is the man behind the Enjoy Every Sandwich blog on the SqlJunkies.com web site. He is also an instructor for DevelopMentor, teaching courses on .NET and SQL Server, with emphasis on SQL Server 2005. He is one of the co-authors of Beginning ASP.NET Databases Using VB.NET, in which he uses his background as an instructor to provide explanations for programming data-driven web sites.
The following questions were answered by Kent via email.
Doug: One of the biggest issues among SQL Server developers is whether or not to use stored procedures. Where do you come down on this issue?
Kent: Absolutes are hard to come by in this business, so what works well in a given scenario varies. Conceptually I’m pro-stored procedures for the same reasons I’m pro-modular programming: Compiled code typically outperforms interpreted; there’s inherent value to layering security; and there is the general goodness of abstraction and encapsulation.
There are times when you need to get down to the bare metal and write the best-performing code possible, and calling parameterized procedures carries comparative overhead. But human cycles are more expensive and scarcer than machine cycles. Spending hundreds or thousands of dollars to write “better” code that saves tens or hundreds of dollars in application utility seems economically insane.
Doug: Do you think the addition of LINQ (Language Integrated Query) will change the way developers create database-oriented applications?
Kent: Yes and no. One of the more pragmatic bosses I’ve had was fond of pointing out that things are “the same but different.” Insofar as applications are instances of design patterns, LINQ doesn’t change much. But at the code level, you could write really different code to do the same work. Whereas today we have to work with classes that are first-order mappings of database concepts, LINQ gives us an implicit way of doing the same thing.
In ADO.NET, for example, we’re presented with the results of work as result sets mapped to DataTables. DLINQ doesn’t do that. It gives us results as an enumerable sequence of typed objects. This second-order mapping is where LINQ’s real power lies. It enables programmers to leverage a small amount of knowledge and skill across a much greater datascape. Once one learns the LINQ pattern, DLINQ and XLINQ can be understood and applied.
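To make the difference concrete, here is a hedged sketch of the query shape LINQ enables. In DLINQ the source would be a mapped database table on a data context; here a plain in-memory array stands in so the example is self-contained, and the Customer class and its fields are illustrative, not from the interview.

```csharp
using System;
using System.Linq;

class Customer { public string Region; public string CompanyName; }

class Demo
{
    static void Main()
    {
        // In DLINQ this would be a table mapped to the database; an array
        // stands in here so the query shape compiles on its own.
        Customer[] customers =
        {
            new Customer { Region = "WA", CompanyName = "Acme" },
            new Customer { Region = "OR", CompanyName = "Zenith" }
        };

        // The result is not a DataTable but an enumerable sequence of
        // typed Customer objects -- the second-order mapping.
        var waCustomers =
            from c in customers
            where c.Region == "WA"
            orderby c.CompanyName
            select c;

        foreach (Customer c in waCustomers)
            Console.WriteLine(c.CompanyName);  // compile-time typed access
    }
}
```

The same query expression pattern applies whether the source is a database table (DLINQ), an XML tree (XLINQ), or an in-memory collection, which is the leverage Kent describes.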
Doug: Why is VB.NET your favored .NET language?
Kent: Actually, the first .NET book I worked on was Data-Centric .NET Programming with C# for the original Wrox press, so my roots are in C#. What I like most about .NET is that it really means that language can be a lifestyle choice. While my first programming languages started with the letter A, it was C that had the greatest influence on me.
I wrote a lot of C/C++ for a number of years. Then I spent time with Java and Perl. Same but different again. Even with Classic VB, my code was more in the K&R style. So when I’m writing formal code for the Microsoft platform, I prefer C#. But when I’m tinkering or learning, I like VB.NET, mostly because I’m a sloppy typist with little patience. VB.NET enables those behaviors nicely <grin>.
Doug: What do you think of the interest in bringing back Microsoft support for VB6?
Kent: I suppose the politically correct answer is, I feel your pain, change is hard, blahibity, blahibity, blah. Straight up, though, I’d rather remove my own spleen with a hand grenade than go back. Get over it already, OK?
Doug: Why do you blog, and do you think blogging has changed the face of software development?
Kent: Another boss I had now signs his email with a quote from Aaron Copland: “…the composer who is frightened of losing his artistic integrity through contact with a mass audience is no longer aware of the meaning of the word art.” It is no different for developers. The public performance of our art is our programs, but the back channel of critics and growth is the community.
Blogging is a powerful channel that has a deep influence on the art, but it is not the only one. There are also user groups, mailing lists, newsgroups, forums, instant messaging, code camps, instructor-led training, and so on. So you can’t trivialize it as “surface effect.” All channels have an influence at many levels.
A more recent mentor of mine nailed it when he said there are two types of bloggers: reflectors and generators. Reflectors refer to the works of others and maybe add some value by commenting on the referred-to posts. Reflectors influence by amplifying the work of generators, who publish their works and ideas as exemplars for others. Nobody who blogs is exclusively one or the other.
I blog to generate new discussions that are propagated through the reflections of my readers. I still read more than 1,000 blogs a day and frequently point out things that should be reflected to the community. That’s not why I started, though. My original reason for blogging was to practice writing when I wasn’t writing books or articles. It takes daily practice to grow as a writer, just like you have to write code every day to grow as a developer.
My former boss, Ted Kooser, and my mentor, Dan Sullivan, have proven themselves right. I hope to achieve a similar degree of success by implementing the same patterns.
Doug: How did you start working on databases? What was your first database-related job?
Kent: They were one and the same. I was working my way through college at an insurance wholesaler. To achieve a greater degree of scale, the company needed systems for tracking recruiting, generating material shipment orders, and building market intelligence. At the time, our tool of choice was a program called Q&A. Considering how primitive the technology was, it was an amazing work: powerful query, excellent UX and agile reporting. And it didn’t crash every day.
Doug: Where in the development world do you think XML best fits? What have you used it for, in addition to web services?
Kent: I had to laugh when I first heard Don “DonXML” Demsak tell the story of how his nickname evolved, as I’m pretty sure folks were calling me “KentXML” behind my back for the same reason. XML is like a nuclear force in that it binds data particles together. I use it for almost everything in some way or another.
Databases have the same behavior. Same but different again. The big difference in XML is that it has lower infrastructure requirements in exchange for less scale under load. Databases are the exact opposite of that. The real touchstone is how much more you need to do and how quickly, and where you can do it. I think both are useful things with which developers should equip themselves.
If the question is where XML belongs in the database, that’s harder. The same principle applies: The more performance you need at greater scale, the better it is to normalize the XML into relational tables. The less of that you need to do, the more you should leave data sources such as XML alone and use them as is in an XML data type instance. That’s unless you wind up doing nothing with the content in processing, in which case it is best stored as a compressed BLOB or even a network-addressable file reference.
Doug: Have you read any good database-related or general software development books lately?
Kent: I’ve read a couple of amazing books recently. The first is Roger Wolter’s Rational Guide to SQL Server 2005 Service Broker from Rational Press. When I met Roger, he earned my respect by being calm about, yet deeply experienced in, the topic. When I considered all of the other responsibilities he had guiding several teams for SQL Server 2005 while he was working on that book, respect quickly turned to awe.
The other book is Donald Farmer’s SQL Server 2005 Integration Services, also from Rational Press. Donald’s work is an exemplar of having a good story to tell and telling it well.
I am also finding Thomas Passin’s Explorer’s Guide to the Semantic Web hard to put down.
Another book has nothing to do with software development per se, except that it is written by a developer. I’m talking about Jeff Hawkins’s On Intelligence. Somewhere down the road that book is going to influence the next generation’s “Gang of Four” the way Alexander’s book on building architecture did. If it hasn’t already, that is.
Doug: What do you think about using VB.NET or C# for stored procedures, user-defined functions and triggers? What guidance can you give people looking to leverage the ability to use procedural code in SQL Server 2005?
Kent: Nothing makes my blood boil more than that topic! Anybody who works with the technology understands how wrong the “marketing spin” is that says we can, and, more egregiously, how wrong the predictions of T-SQL’s demise proved to be. You can’t begin to use SQLCLR objects without T-SQL, and that’s a good thing. The value in SQLCLR is in extending T-SQL, not in data access.
The best use of SQLCLR is to create assemblies leveraged as user-defined functions (UDFs). Doing some types of complex, procedural calculations is an obvious fit. The neatest use, though, is the ability to easily leverage parts of the .NET framework base class library and user libraries as alternatives to extended stored procedures.
A couple of examples come to mind. I once needed to do schema validation of stored XML over and above what could be accomplished using XML schema collections. Being able to call System.Xml.Schema quickly and easily was a simple thing when using SQLCLR. I was able to “nail jelly to a tree” using the technology.
In another case, I wrote a C# implementation of Huffman’s adaptive compression algorithm when I needed to store long text and XML instances in a database, but the data was opaque in terms of query. I could have written that code in T-SQL, but it would have been a lot harder for me. SQLCLR enabled me to be appropriately lazy.
Triggers have similar value but are more rarely needed since a trigger can efficiently use a function instead. Why tie up logic into a specialized use like a trigger instead of making it available for general use as a function?
CLR stored procedures are also less useful than first thought, for the same reason. From a performance point of view, it is probably better to write the required logic so that it calls UDFs from T-SQL. The difference between a stored procedure and a function is that the procedure persistently stores a pre-built execution tree and can perform operations (select, insert, update, delete) that change database state, whereas functions aren’t allowed to do that.
Stored procedures are acceptable for a few cases in which you have procedural generation or modification of row sets, or when you need to leverage a .NET framework function that alters the database state. Calling web services would be an example of that. Otherwise T-SQL is the better choice most of the time.
User-defined types give me the most concern. They are best when used to extend the existing T-SQL type system, like when you need to work with data containing complex numbers. There’s no other way to create rich data types with their own methods for operations. I become concerned when people insist that they should use this feature to represent business objects. The Java and Oracle folks went down that path before with dismal results, so I’m in no hurry to repeat those “learning opportunities.”
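The complex-number case above can be sketched as a SQLCLR user-defined type. This is a hedged, minimal sketch: the INullable contract, the Null property, Parse, and the SqlUserDefinedType attribute are the standard SQLCLR UDT surface, while the “real:imaginary” string format and the Modulus method are assumptions for illustration.

```csharp
using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

// Format.Native works here because the state is fixed-size value fields.
[Serializable]
[SqlUserDefinedType(Format.Native)]
public struct Complex : INullable
{
    public double Real;
    public double Imag;
    private bool isNull;

    public bool IsNull { get { return isNull; } }

    public static Complex Null
    {
        get { Complex c = new Complex(); c.isNull = true; return c; }
    }

    // T-SQL calls Parse for conversions such as CAST('3:4' AS Complex);
    // the colon-delimited format is an assumption for this sketch.
    public static Complex Parse(SqlString s)
    {
        if (s.IsNull) return Null;
        string[] parts = s.Value.Split(':');
        Complex c = new Complex();
        c.Real = double.Parse(parts[0]);
        c.Imag = double.Parse(parts[1]);
        return c;
    }

    public override string ToString()
    {
        return isNull ? "NULL" : Real + ":" + Imag;
    }

    // A method exposed to T-SQL, e.g. SELECT @c.Modulus()
    public double Modulus()
    {
        return Math.Sqrt(Real * Real + Imag * Imag);
    }
}
```

This is exactly the “extend the type system” use Kent endorses: a small value type with its own operations, not a business object.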
The most curious creatures in the SQLCLR menagerie are user-defined aggregates. They have obvious uses over the data types you define as UDTs. There are probably use cases for them over normal data types as well, like taking a limited set of strings and concatenating them into a single, comma-delimited instance.
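The string-concatenation aggregate mentioned above might be sketched like this. It follows the standard Init/Accumulate/Merge/Terminate contract; because the accumulated state is a string (variable-size), it must use Format.UserDefined and implement IBinarySerialize rather than Format.Native. The class name and the 8000-byte cap are assumptions for the sketch.

```csharp
using System;
using System.Data.SqlTypes;
using System.IO;
using System.Text;
using Microsoft.SqlServer.Server;

[Serializable]
[SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)]
public class Concat : IBinarySerialize
{
    private StringBuilder result;

    // Called once per group before accumulation starts.
    public void Init() { result = new StringBuilder(); }

    // Called once per row in the group.
    public void Accumulate(SqlString value)
    {
        if (value.IsNull) return;
        if (result.Length > 0) result.Append(",");
        result.Append(value.Value);
    }

    // Called when SQL Server combines partial aggregates (e.g. parallelism).
    public void Merge(Concat other)
    {
        if (other.result.Length == 0) return;
        if (result.Length > 0) result.Append(",");
        result.Append(other.result.ToString());
    }

    public SqlString Terminate() { return new SqlString(result.ToString()); }

    // IBinarySerialize lets the engine spill the intermediate state.
    public void Read(BinaryReader r)  { result = new StringBuilder(r.ReadString()); }
    public void Write(BinaryWriter w) { w.Write(result.ToString()); }
}
```

Once the assembly is cataloged, usage would look something like SELECT dbo.Concat(ProductName) FROM Products (object names hypothetical).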
If there’s a critical piece of advice I’d offer about all of this, it’s to make sure you fully understand what you can do with T-SQL and how to do that before getting “stupid drunk” on SQLCLR Kool-Aid. There’s a lot you can learn from the work of folks like Ken Henderson, Joe Celko and Kevin Kline.
Doug: As a developer teaches and writes more, it’s difficult to keep a hand in development. What percentage of your time is spent developing code for clients and what percentage is teaching and writing?
Kent: The only real-world code I get to enjoy lately comes from helping people through mailing lists, newsgroups, conferences and talks. The month before last, I worked more than 80 hours a week for many weeks writing production code. I suppose it is a one-third to two-thirds split of real work to teaching and writing.
When I’m teaching and writing, I write code too. I write code every day, usually for two or three hours. Like any art, you have to practice to get better. It’s just that there are no clients to bitch out for lousy specs when you’re writing code for yourself <grin>.
Doug: Long after HTML was created by Tim Berners-Lee, I independently “discovered” markup languages. Have you ever “invented” anything, only to discover much better prior art?
Kent: I’m sure we all have. Remember I mentioned that I created a UDF that implemented Huffman adaptive compression? I presented that at a local conference in June. The talk was well received, but somebody came up and asked why I went to all of the effort of using that code rather than using System.IO.Compression. Of course, I hadn’t a clue that this namespace existed at the time. Ouch.
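For readers who haven’t met it, the prior art in question is the System.IO.Compression namespace added in .NET 2.0. A minimal sketch of a compress/decompress round trip over a byte array, the kind of helper a SQLCLR UDF could wrap instead of hand-rolled Huffman coding:

```csharp
using System.IO;
using System.IO.Compression;

static class Packer
{
    public static byte[] Compress(byte[] data)
    {
        using (MemoryStream ms = new MemoryStream())
        {
            // GZipStream writes the compressed form into the MemoryStream.
            using (GZipStream gz = new GZipStream(ms, CompressionMode.Compress))
                gz.Write(data, 0, data.Length);
            return ms.ToArray();
        }
    }

    public static byte[] Decompress(byte[] packed)
    {
        using (MemoryStream src = new MemoryStream(packed))
        using (GZipStream gz = new GZipStream(src, CompressionMode.Decompress))
        using (MemoryStream dst = new MemoryStream())
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = gz.Read(buffer, 0, buffer.Length)) > 0)
                dst.Write(buffer, 0, read);
            return dst.ToArray();
        }
    }
}
```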
Doug: Can you think of a cool tip or trick, especially in SQL Server 2005, that many database developers do not know about?
Kent: A couple. First, fire up SQL Server 2005’s Books Online and have a look at forced parameterization. While it has a number of constraints about when it gets used and how, it can be a good way to simplify messy performance problems caused by unfixable code. But for exactly the same reasons we talked about in your first question, it could introduce performance issues that are buggers to track down.
Second, there are three dynamic management views that developers should know about and use to tune their server-side code. The first is sys.dm_exec_query_stats, which can quickly help you find the queries with the longest execution times. From there you can examine a query’s plan with sys.dm_exec_query_plan, and its T-SQL text with sys.dm_exec_sql_text. This can be useful for regression testing of code changes as well.
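A minimal sketch of that workflow from client code, assuming SQL Server 2005 and a placeholder connection string. The T-SQL uses CROSS APPLY to tie each cached-stats row to its statement text via sys.dm_exec_sql_text; the TOP 10 cutoff and column choices are illustrative.

```csharp
using System;
using System.Data.SqlClient;

class SlowQueryReport
{
    const string DmvQuery = @"
SELECT TOP 10
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_microsec DESC;";

    static void Main()
    {
        // Placeholder connection string; adjust for your server.
        using (SqlConnection cn =
            new SqlConnection("Data Source=.;Integrated Security=SSPI"))
        {
            cn.Open();
            using (SqlCommand cmd = new SqlCommand(DmvQuery, cn))
            using (SqlDataReader rdr = cmd.ExecuteReader())
                while (rdr.Read())
                    Console.WriteLine("{0,12}  {1}", rdr[0], rdr[2]);
        }
    }
}
```

The same query run interactively in Management Studio is usually how you would use it for the regression testing Kent mentions: capture the list before and after a code change and compare.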
Doug: I have to ask, is the title of your Blog, Enjoy Every Sandwich, a reference to the Warren Zevon CD?
Kent: Not literally, but yes. I’ve been a fan of Zevon’s work for at least 25 years. The title of that CD is a reference to a quote from Zevon’s final appearance on the Late Show with David Letterman, a few weeks before he passed away.
To me, it has many meanings. We should enjoy every sandwich. Eating should be enjoyed. In fact, any meal shared with friends and loved ones, even sandwiches, should be enjoyed. It is also a statement about being present in each moment and aware of the world around you. A very Zen thought.
Do you know someone who deserves to be a Database Geek of the Week? Or perhaps that someone is you? Send me an email at email@example.com and include “Database Geek of the Week suggestion” in the subject line.