One of the primary reasons to implement RLS is to facilitate reporting and ease the administrative burden. This section covers some considerations for using RLS with the primary Microsoft reporting engines and gives you an idea of what to look for in your reporting engine. Some anti-patterns and alternatives to RLS are also examined.
Power BI integrates directly with database-level RLS when using DirectQuery connections. Most department-level implementations of Power BI will likely use the built-in caching (Import mode), which makes performance tuning much easier. In larger, enterprise-level implementations of Power BI, RLS may fit the requirements. Nothing special is needed to integrate it with Power BI other than DirectQuery connections.
RLS is also available as a separate mechanism in Power BI itself. This allows data cached by Power BI to be controlled by RLS as well. If RLS is used in the database tier, it must also be implemented in the Power BI tier when caching is used. This results in a redundant implementation, but data is protected at all levels, and the caching benefits of Power BI are retained while still using RLS.
It is recommended to use the same RLS rules in both the database and Power BI. This reduces the amount of troubleshooting and ensures the business requirements are met consistently. Keep this in mind when designing your RLS implementation. Business users won’t understand the architectural implications of a design, so it is your job to bridge that gap as much as possible. Even less technical members of the team, or those not as familiar with SQL and RLS, will need guidance. A simple design that meets the requirements is usually the best design choice.
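As a hedged sketch of what matching rules might look like (all object and column names here are hypothetical), a database-side filter predicate keyed on the user’s login can be mirrored by a Power BI role that applies the equivalent DAX filter:

```sql
-- Hypothetical schema and names; the equivalent Power BI role would use
-- a DAX table filter such as: [SalesRepEmail] = USERPRINCIPALNAME()
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_SalesPredicate (@SalesRepEmail nvarchar(256))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    WHERE @SalesRepEmail = SUSER_SNAME();  -- row visible only to its rep
GO
CREATE SECURITY POLICY Security.SalesFilter
    ADD FILTER PREDICATE Security.fn_SalesPredicate(SalesRepEmail)
    ON dbo.Sales
    WITH (STATE = ON);
```

Keeping the DAX filter and the T-SQL predicate logically identical is what makes troubleshooting tractable when both tiers are active.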
SQL Server Reporting Services (SSRS)
SSRS connections using pass-through authentication to SQL Server use RLS seamlessly. SSRS has internal caching mechanisms in its temp database, but they don’t impact the data returned to users, unlike the Power BI caching mechanism. As with all uses of RLS, any type of access to the data is protected when it is hit via SSRS. If the connection string for a report uses a service account, however, individual users are not differentiated by RLS. To be more precise, the connection is still subject to RLS, but all users of the report will have the same access, corresponding to the authorization of the service account.
It is generally a bad idea to have users directly access transactional systems, but applications are able to use RLS via a few different methods. The SESSION_CONTEXT can be set, or a specific user can be designated using EXECUTE AS USER syntax. This is one of the methods used in the WideWorldImporters sample. Using RLS for application security can simplify some development tasks, but may not be worth the extra effort. If your application requires additional layers of security, such as screen-level access, or if on-screen functionality differs depending on the current user, adding RLS may be redundant. If the application is simple and requires immediate access to data for reports, RLS may fit the need. If RLS is used in a transactional system, be sure to test scenarios for administrators and the service accounts of your API layer. Performance tuning is important for any RLS application, but it becomes critical for user-facing applications.
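A minimal sketch of the SESSION_CONTEXT approach, with hypothetical object names (the WideWorldImporters sample implements a more complete version of this pattern):

```sql
-- The API layer opens a connection under its service account, then
-- stamps the end user onto the session. Marking the value read-only
-- prevents later code on the same session from overwriting it.
EXEC sys.sp_set_session_context @key = N'UserId', @value = 42, @read_only = 1;
GO
-- An inline TVF predicate reads the value back; a security policy bound
-- to this function returns only rows owned by the stamped user.
-- (Assumes a Security schema already exists.)
CREATE FUNCTION Security.fn_OrderPredicate (@OwnerUserId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS fn_result
    WHERE @OwnerUserId = CAST(SESSION_CONTEXT(N'UserId') AS int);
```

Because the value is set per session, connection pooling must be handled carefully; the read-only flag and the pool’s connection reset keep one user’s context from leaking to another.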
Service accounts with elevated access bypass RLS for report users. In this scenario, users have the same access as the service account. This can be a feature or a bug depending on your implementation. This is related to the caution about cached data. Cached data used by some report engines will effectively bypass RLS if a service account is used. There are generally additional security mechanisms in these report engines that allow user access to be controlled, but it must be considered. Testing should include report access for different users and database level access if appropriate.
There are instances when RLS isn’t a good choice for a project. This will vary, and there are always exceptions, but if you find yourself using RLS with one of the following patterns, you should stop and consider whether it is the correct choice. When working outside of normal patterns, it is always a good idea to be able to defend your decisions.
Direct user access to transactional systems is a poor choice in most scenarios, and using RLS doesn’t change this. Blocking, untested queries, excessive I/O and CPU usage, security concerns, and extra exposure of the database are all potential issues.
RLS can be used in transactional systems without granting direct access to the system. This can be a way to implement restricted security without much of the custom coding typically associated with restricted access. It depends on the specific business needs, but an API layer can set the executing user via SESSION_CONTEXT() and, as a consequence, different rows are returned. This wouldn’t work for all scenarios and starts to get more complicated as actual pages or functionality need to be restricted, but it’s an interesting pattern. An example is given in the WideWorldImporters sample database and associated code.
Databases without direct user access
Databases that don’t allow direct user access aren’t likely candidates for RLS, since the whole point of row-level security is controlling direct access. There are patterns for RLS that would work in these scenarios, but they are generally managed at an API level.
Less experienced teams
Setup, long-term maintenance, and troubleshooting can be challenging with RLS. RLS may not be a good choice for teams that are inexperienced with SQL Server. All teams can be trained, but some teams focus primarily on development, and some business groups manage a database as a side responsibility. There isn’t a checklist to determine whether a team can manage RLS, but the experience of the team and the support available after implementation are key factors in the decision process.
I would recommend against using RLS as the security mechanism for multi-tenant transactional client systems. Access to these systems will generally not be direct, data-level authorization. If users are allowed to directly query the data, they can impact the performance of other clients. If implemented properly (no direct access and no ad-hoc queries), data won’t be exposed, but RLS does make maintenance and administration more difficult.
Data warehouses used by multiple affiliated departments or groups work well with RLS and are generally more tolerant of potential issues. Typically, reporting and ad-hoc queries are also less sensitive to performance variations. Properly modeled data also helps with performance. The table design can take RLS into account and include the RLS columns in the base design, greatly decreasing performance impacts. Take special care with security and consider additional audits if the data is especially sensitive.
Considering the potential for side-channel attacks, I wouldn’t recommend RLS for direct access by multiple external clients, even in a data warehouse scenario. If you do go down this path, be sure your contracts support it and clients are aware; some clients require separate physical servers and will not tolerate any chance of data exposure.
Staging / load databases
Staging or load databases are a common ETL pattern. A staging database houses data as it is prepared for the final warehouse; it is a landing zone where data can be manipulated and analyzed before it hits the production system. The same functionality can be achieved with schemas and locked-down security in the production database, but a staging database offers some advantages. The entire database can be locked down to just developers and the ETL service account, backups in the primary system will be smaller, performance can be tuned separately for each database (i.e., separate tiers in Azure or separate drives on a physical server), and no RLS is needed at this layer if users are properly restricted. This can ease the performance load involved with ETL, help testing, make reloads easier, and improve data analysis and profiling without impacting production data.

There are almost no scenarios where end users would be allowed access to the staging database. The data in staging databases, or even just staging tables, tends to be more chaotic and may not be curated to the degree necessary to implement RLS. RLS is not a good fit for staging databases or even staging tables; these database objects should simply be excluded from direct user access. Only ETL service accounts, developers, and testers need access to them. RLS requires extra work, configuration, and testing, and removing that requirement from a load process can reduce the number of errors during development and during loads. When service accounts are changed, it is easy to end up with ETL scenarios resulting in duplicates, excess deletes, or general errors. Having a locked-down zone without this requirement makes each part of the process easier.
Other databases or systems that don’t need RLS
SQL Server has robust security structures and mechanisms with great flexibility, and standard security structures and design should be considered first. In fact, security requirements are a foundational part of any SQL database design and should be considered at each step of the process and with each business requirement. The standard security structures, such as server and database roles, application roles, schema-level security, EXECUTE AS, etc., should all be evaluated and applied to meet requirements first. They require virtually no coding and minimal testing after initial setup and validation. Programming objects such as stored procedures, views, and functions can also be used as a security layer, but they do require more testing and maintenance.
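For example, schema-level security with a database role covers many requirements with no custom code at all (the role, schema, and user names below are hypothetical):

```sql
-- Grant read access to everything in the Finance schema through a role,
-- then manage access by role membership rather than per-object grants.
CREATE ROLE FinanceReaders;
GRANT SELECT ON SCHEMA::Finance TO FinanceReaders;
ALTER ROLE FinanceReaders ADD MEMBER [CONTOSO\JaneDoe];
```

New objects added to the schema inherit the grant automatically, which is part of why these standard structures need so little ongoing maintenance.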
Small user base
Systems with a small user base likely don’t require RLS to keep them secure. There are some exceptions, but small systems should be kept uncomplicated when possible. A simple view layer or stored procedures will usually meet the needs of these systems. Smaller systems generally mean smaller support teams, and that should be accounted for in the design.
Alternatives to RLS
There are multiple ways to bypass the need for RLS. Some are easier than others, but most are more administrative- or developer-intensive than using RLS. Whenever choosing a solution, it’s always a good idea to at least consider other options. I include this advice for all database design decisions, down to the level of a single column definition. You should be able to defend each choice in a database design, and RLS is no different. Considering the security and performance implications and the development overhead, this is especially true for RLS. This isn’t an exhaustive list of alternatives, but it covers some common solutions.
For multiple clients or user groups, creating a separate database for each client can be a viable option. Separate databases can be especially tempting when there are regulatory issues involved or when data is stored for competing companies. Care must still be taken with this method. In a server environment, it is easy for a single login to access multiple databases. Administrators of multi-tenant systems should be familiar with the security considerations for their particular system. Additional, regular audits are recommended to ensure everything is configured correctly.
Creating a separate database for each group or set of users requiring access can be a tedious process and difficult to maintain. Objects need to be synchronized, bugs found in one database likely impact the other databases, performance fixes need to be applied everywhere, users need to be maintained, and all administrative tasks need to be configured. Patterns and templates should be established for creating new databases, objects, and maintenance tasks. DevOps deployments with a common code base also make this easier.
A database project deployed via standard DevOps methods is recommended to keep multiple databases with the same objects synchronized, down to details such as index definitions and constraint names. It is common in projects with duplicated databases for different clients to have custom objects in each database. These should also be managed in a database project, allowing for automated deployments; separate branches or even separate projects can be used for these customizations. Whatever method is used to maintain each database, consistency is critical. Methods need to be created to synchronize not only objects, but also to maintain users, configure administrative tasks, and possibly implement maintenance tasks.
Another method to provide different data to different users is to create distinct queries and reports for each set of users. This offers some simplicity in the initial design, but long-term maintenance can be more complicated, or at least tedious. It is a very straightforward solution that is easily understood by less technical members of the team. Once a base report is designed, it is very simple to copy that report and change the items specific to a group of users.
If you use this design pattern for your project, there are a few things that need to be addressed. The primary concern is how to replicate or copy reports and keep them synchronized. The initial creation of reports is relatively easy; future changes are the challenge. It is easy to put a rule in place that all reports in a group must be updated at the same time, but harder to ensure that it happens. I don’t have a great solution for this problem. Development can be chaotic, especially when there are team changes or when a high-priority bug is being fixed. Anyone with data experience knows that some issues only happen with particular sets of data. They might be outliers or they might be errors. In a critical situation, the report for the impacted group of users will be fixed first. Ideally, the other reports based on the same template should then be updated, but that may not always happen. Code reviews, automated tools, and testing are the common ways to manage this.
The next item for copied reports is determining the testing strategy. A testing strategy is needed for all development tasks, but this one can be deceptively difficult. The assumption might be that all reports based on the same core report are identical other than some WHERE clauses, but there is no guarantee that all reports are synchronized, so a thorough testing pattern must be established. There are some programmatic methods to decrease this burden, but those methods must be created and followed for every report. This can include creating a template for each group of reports or doing a diff on each group of reports, but extreme rigor must be employed in development to allow such testing shortcuts. It can be useful to bring the test team into the conversation early in the development phase to ensure their requirements are considered.
Another consideration for creating multiple reports is the method for exposing those reports to end users and organizing them. This depends on the report engine or the custom method used to create reports. It can also be the primary method for securing the data, so be sure the method you choose is correct for your architecture.
Custom code in SQL
SQL Server objects are a very common method for restricting access to data and are often considered a security mechanism when used this way. Almost any SQL object can serve this purpose. Common methods are listed below.
Stored procedure logic
Only allowing access to data through stored procedures is a common security mechanism for direct SQL access. Any logic check or lookup can be performed in the stored procedure, and it is also possible to create different stored procedures for different groups of users. If groups of stored procedures are used to limit access, you will want to organize those procedures in a way that makes them easier to manage. Creating a schema for each group and placing that group’s procedures in it greatly simplifies management. This approach can also be layered with the reporting strategy so that each report for a particular group hits the stored procedures in that group’s designated schema.
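A sketch of the schema-per-group arrangement, with hypothetical names and a region filter invented for illustration:

```sql
-- One schema per user group; the group's filter is baked into its
-- procedures, and access is granted once at the schema level.
CREATE SCHEMA SalesEast;
GO
CREATE PROCEDURE SalesEast.GetOrders
AS
    SELECT OrderId, CustomerId, OrderTotal
    FROM   dbo.Orders
    WHERE  Region = N'East';
GO
CREATE ROLE SalesEastUsers;
GRANT EXECUTE ON SCHEMA::SalesEast TO SalesEastUsers;
```

Adding a procedure to the schema automatically covers it under the existing EXECUTE grant, which is what keeps this pattern manageable as it grows.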
Creating different views for groups of users is another effective way to control access to data. As with stored procedures, it is useful to group views in some way; naming conventions and separate schemas are both useful. I like to create a different schema for specific users or groups. This simplifies administration greatly, since I only need to apply security at the schema level, and it makes it easy to tell what is happening in the database just by looking at the schemas. Views can be combined with stored procedures and used by the reporting layer.
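The same schema-per-group idea applies to views (again, the names here are hypothetical):

```sql
-- A per-group schema containing filtered views; granting SELECT on the
-- schema is the only security step needed when a new view is added.
CREATE SCHEMA SalesWest;
GO
CREATE VIEW SalesWest.Orders
AS
    SELECT OrderId, CustomerId, OrderTotal
    FROM   dbo.Orders
    WHERE  Region = N'West';
GO
CREATE ROLE SalesWestUsers;
GRANT SELECT ON SCHEMA::SalesWest TO SalesWestUsers;
```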
Triggers can’t be used with SELECT statements, but they can be used to control INSERT, UPDATE, and DELETE statements. They can also be applied to views via the INSTEAD OF option. The use case for this is likely small, but it’s worth considering if DML (INSERT, UPDATE, DELETE) is needed with the solution. An INSTEAD OF trigger can be combined with stored procedures or views to manage the SELECT portion of the requirements.
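As a sketch, assuming a filtered view like SalesWest.Orders over a dbo.Orders base table (both hypothetical), an INSTEAD OF trigger can route DML from the view back to the base table:

```sql
-- Redirect inserts against the filtered view to the base table,
-- stamping the group's region so the new row stays visible to the group.
CREATE TRIGGER SalesWest.Orders_Insert
ON SalesWest.Orders
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.Orders (OrderId, CustomerId, OrderTotal, Region)
    SELECT OrderId, CustomerId, OrderTotal, N'West'
    FROM   inserted;
END;
```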
This isn’t a solution I would focus on developing, and it is an anti-pattern. A more robust solution is to automate the ETL processes and eliminate the need for direct user updates.
Scalar Functions / Table Valued Functions
Functions, both scalar and table-valued (TVFs), can be used to limit the data returned to users. On the surface, this can be a tempting pattern. You could create a function that limits data in a way similar to RLS and use it in all of your stored procedures. If you only allow access via those stored procedures, data is locked down, and it is a simpler solution than RLS.
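A sketch of the tempting version of this pattern, with hypothetical table and function names; writing it as a scalar UDF, as shown, is exactly what causes the performance problems discussed next:

```sql
-- A scalar UDF that checks whether the current login may see a region.
-- Tempting to reuse in every procedure's WHERE clause, but it tends to
-- run once per row (hidden RBAR); an inline TVF is the safer shape.
CREATE FUNCTION dbo.fn_UserCanSeeRegion (@Region nvarchar(20))
RETURNS bit
AS
BEGIN
    RETURN CASE WHEN EXISTS (SELECT 1
                             FROM dbo.UserRegions
                             WHERE UserName = SUSER_SNAME()
                               AND Region   = @Region)
                THEN 1 ELSE 0 END;
END;
GO
-- Typical usage inside each stored procedure:
-- SELECT ... FROM dbo.Orders WHERE dbo.fn_UserCanSeeRegion(Region) = 1;
```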
The main issue with this design pattern is that functions have a tendency to be performance killers. You might encounter hidden RBAR (row-by-agonizing-row) processing, which is similar to a hidden cursor. A scalar function used this way generally ends up in the WHERE clause or in a JOIN, which also causes performance issues. This alone is a good enough reason to avoid functions. Security misconfigurations are also a possibility and are discussed in the next section.
Disadvantages of Custom Code
Even though it must be tested and validated, an advantage of RLS is that it operates within a standard framework, and portions of it can be validated with dynamic management views (DMVs). RLS is non-prescriptive, but there are requirements that must be met to implement it, and you will generally follow a standard pattern for RLS within a database project. This allows automated validation and makes it easier to perform code reviews and object validation.
This is in contrast to custom code. As with RLS, it must be tested thoroughly, but there aren’t generic patterns that can be relied on; everything depends on the individual implementation. There will be standards that are followed, but each user group will have a different set of criteria, which makes it challenging to automate validation.
As mentioned, custom code must be tested extensively. Since new objects are created for each user group, the coding and testing effort can be extensive. Not only does each new object or report need to be tested, each group added also creates a set of database objects or reports that need validation. This adds a maintenance burden to the developer team and to the test team.
Performance must be carefully managed when using custom code. This is no different from any solution, including RLS, but there are some methods that might appear tempting to use that can cause issues. If generic objects are created to manage access and ease management, especially functions, they can cause hidden RBAR in the query plan.
Another potential issue is misconfiguration. This is subjective, but I think there is a greater chance of security gaps when using custom objects than with RLS. If security administrators aren’t aware of the custom code strategy being used, they could grant direct access to tables via db_datareader, which would allow users to bypass the SQL objects. I have seen administrators try to troubleshoot RLS issues by granting db_datareader and even db_owner role access. With RLS, no extra data is returned; with SQL objects, this type of misconfiguration exposes all of the data.
The other thing to keep in mind is that users often find a way around limitations. If data is so restricted that users don’t use the system, they will probably find the data in another system or use an extract, which can actually make the overall system less secure. Discuss needs with users and make sure the solution works for them while still keeping the data secure.
Letting an application control access to the data is the standard pattern for modern architectures, so this isn’t much of an alternative. It is probably more realistic to call RLS the alternative to this pattern, but it is included here for the sake of completeness. There are many ways that applications control access to data. For larger enterprise applications, this is usually table-based controls, security groups, or token-based access of some type. If users don’t need direct access to the database, this is usually a safe choice.
RLS is a great solution for some projects, but the methods described previously might be a better fit for others. Any method selected, including RLS, needs to be thoroughly tested, and the code needs to be managed. Each method has challenges, and the specific needs of the project will dictate what is required. Usually there are multiple viable options, and the general technical stack and team skills will be a big driver of what is implemented.
RLS integrates seamlessly with most report engines, with some caveats. It is still the responsibility of the development team to ensure that data is protected when external teams and report engines access it. The primary concern with external access is caching and sharing of data, which can effectively bypass RLS if the same standards aren’t enforced. Replicating RLS in the destination system, or locking the data down to authorized users, are common ways to enforce security in any destination system.
RLS doesn’t fit all scenarios, and there is a fair amount of administrative overhead in the design, implementation, and maintenance of RLS. It may not fit transactional systems, staging databases, small systems, or systems without administrative support. Security and proper database design should take these factors into consideration.
RLS series links:
Part 4 – Integration, Antipatterns, and Alternatives to RLS
Part 6 – Mitigations to RLS Attacks