How In-Memory Database Objects Affect Database Design: Or does it?



This post is part of an ongoing series of blogs I am writing while preparing to give a presentation based on the prefix of this blog’s title. It is a work in progress as I explore the effects of the new in-memory model. I would love comments that tell me that I am wrong (especially if it turns out that I am!)

Before I start describing what I have discovered so far along the way, let’s make a few educated guesses. I have my data model from the previous blog entry (here), and I have a plan of attack for entering data. Basically, I plan to take the “normal” way that people deal with data like this and enter data into each table separately, outside of an overarching transaction.

So I will create or retrieve the Customer first, then the CustomerAddress, then I will create the SalesOrder and however many SalesOrderItems the customer (or my data generation using RedGate’s Data Generator!) wants. I may dress it up over time, but for my initial experiments, this is the plan.
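To make the plan concrete, here is a minimal T-SQL sketch of that sequence. The column names and values are my placeholders (the actual model is in the previous post); the point is simply that each statement runs on its own, in its own autocommit transaction, with no overarching BEGIN TRANSACTION around the set:

--each INSERT stands alone; nothing ties the four statements together
INSERT INTO dbo.Customer (CustomerId, CustomerName)
VALUES (1, 'Sample Customer');

INSERT INTO dbo.CustomerAddress (CustomerAddressId, CustomerId, AddressLine1)
VALUES (1, 1, '100 Main Street');

INSERT INTO dbo.SalesOrder (SalesOrderId, CustomerId, OrderDate)
VALUES (1, 1, SYSDATETIME());

INSERT INTO dbo.SalesOrderItem (SalesOrderItemId, SalesOrderId, ProductId, Quantity)
VALUES (1, 1, 1, 2);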

For a database with UNIQUE constraints on alternate keys, and FOREIGN KEY constraints on relationships, this is all pretty easy and safe. I know I can’t duplicate a customer or violate foreign key constraints. How this will play out in the code is still just a bunch of guesses; my only real goal is to not have the ETL architect trying to decide whether running me over with a car is punishment enough for me introducing more bad data to deal with.
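For reference, on the disk-based version of the tables those guarantees come straight from declared constraints, roughly like the following (the constraint and column names are mine for illustration, not taken from the actual model):

--the engine enforces the alternate key and the relationship for me
ALTER TABLE dbo.Customer
    ADD CONSTRAINT AKCustomer_CustomerNumber UNIQUE (CustomerNumber);

ALTER TABLE dbo.SalesOrder
    ADD CONSTRAINT FKSalesOrder$Customer
        FOREIGN KEY (CustomerId) REFERENCES dbo.Customer (CustomerId);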

But the title of the session ends in “…Database Design”. The code isn’t database design. Rather, as an architect, it is important to realize that the database design affects the code. So will we need different data structures, or will the normal normalized model suffice? My initial feeling is that the logical model doesn’t change. The actual implementation details (such as the lack of foreign key, unique, and check constraints) will differ, and the hardware/implementation layer changes immensely, and this affects your code in ways you had best understand.

I “think” that what I will determine is that the basic data architect’s output remains rather constant. The rules of normalization (briefly described here) don’t change at all. You still design the tables and columns based on the same factors of cardinality you would have before. The physical model will be the same, and if your data modeling tool supports the in-memory structures, it is just a matter of choosing which tables should be in-mem and which don’t really merit it, followed by a lot of testing (a lot).
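If a table does get marked as in-memory, the DDL change in SQL Server 2014 is mostly confined to the index requirements and the WITH clause. Here is a sketch of what one of the hot tables might look like, with made-up columns (the database also needs a MEMORY_OPTIMIZED_DATA filegroup before any of this works):

--memory-optimized tables require at least one index; a hash index on the
--primary key is a typical choice, sized with an estimated BUCKET_COUNT.
--Note there is no FOREIGN KEY to Customer: SQL Server 2014 does not allow
--foreign key, check, or additional unique constraints on these tables.
CREATE TABLE dbo.SalesOrder
(
    SalesOrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    OrderDate DATETIME2(0) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);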

For my first tests, I only made the four “hot” tables in-mem:

[Figure: the data model, with the four in-memory (“hot”) tables highlighted]

This design will work fine when dealing with interop code, even if I want to implement data integrity checks against the domain tables. If I want to use native code, then all of the tables used will need to be in-memory. Are data integrity checks that important? Well, they are to me, but not so much to a lot of folks who trust their other layers to get it right. My scenario, where all of the domain tables are “static”, enables this to work just fine. As long as ProductType never changes, and the other code layers have only the right values, you can easily say “this works” (as long as it does… and your code has been tested for what happens if anything crashes on any given statement to the database… which is not an easy task).
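To give a flavor of why that matters for native code, here is a minimal sketch of a natively compiled procedure (the procedure name, parameters, and columns are mine for illustration). Every table it references must be memory-optimized, which is why wanting to check against a domain table drags that table into memory too, and why any data integrity checking has to be written by hand:

CREATE PROCEDURE dbo.SalesOrder$Insert
(
    @SalesOrderId INT,
    @CustomerId INT,
    @OrderDate DATETIME2(0)
)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    --only memory-optimized tables can be referenced in here, and there is
    --no foreign key constraint to catch a bad CustomerId for me
    INSERT INTO dbo.SalesOrder (SalesOrderId, CustomerId, OrderDate)
    VALUES (@SalesOrderId, @CustomerId, @OrderDate);
END;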

In my next post, I will share at least one of the effects the change to in-memory code has had on the code I am using, and what I am considering doing about it.


About the author

Louis Davidson


Louis is the former editor of Simple-Talk. Prior to that, he was a corporate database developer and data architect for a non-profit organization for 25 years! Louis has been a Microsoft MVP since 2004, and is the author of a series of SQL Server database design books, most recently Pro SQL Server Relational Database Design and Implementation.