Layers of Trust: How to Protect Financial Data from the Inside Out
This article was featured in Financial IT Winter Edition 2025.
Prior to working for a software company, I spent most of my career working for financial organizations. I have lots of friends who still do. Talking with one the other day, the question came up: what keeps you up at night? Her one-word response was a little surprising: fraud. Understand, she’s in charge of managing data at a bank. You’d expect maybe uptime, performance, high availability, any of the standard data management worries. Instead, it’s fraud. Who has access to what kind of data within her databases is one of her biggest concerns. Let’s talk about data security for just a minute.
It Goes Way Past Logins
You’re worried about who has access to your database? Not a problem. Make sure there’s a strong password on the application login, and another on the administrators’ login, and you’re good to go, right? Not even a little bit. We’re not going to stop at logins, but let’s start there. Are you following the principle of least privilege? Meaning, you grant only the bare minimum of permissions needed. People who connect to your systems, regardless of where they’re coming from (remember, fraud and theft are internal threats as well as external ones), should only be able to do the single thing they connected to do, nothing more. You’ll need a way to understand who can connect to your database, for certain, but you’ll also need a way to understand what they can do once connected. Further, you’re going to want to know when permissions on your databases change. If access is suddenly changed at 3AM on a Saturday, was that intentional?
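That 3AM-on-a-Saturday scenario can be sketched in a few lines. This is a minimal example, assuming a hypothetical audit-record format of (timestamp, principal, change); in practice the records would come from your database’s audit log or a monitoring tool:

```python
from datetime import datetime

# Hypothetical audit records: (timestamp, principal, permission change).
# The format here is an assumption for illustration only.
AUDIT_LOG = [
    ("2025-01-11 03:02", "app_user", "GRANT ALTER ON Accounts"),     # Saturday, 3AM
    ("2025-01-13 10:15", "report_user", "GRANT SELECT ON Balances"), # Monday, mid-morning
]

def off_hours_changes(log, start_hour=8, end_hour=18):
    """Return permission changes made outside business hours or on weekends."""
    flagged = []
    for stamp, principal, change in log:
        when = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        weekend = when.weekday() >= 5          # Saturday=5, Sunday=6
        after_hours = not (start_hour <= when.hour < end_hour)
        if weekend or after_hours:
            flagged.append((stamp, principal, change))
    return flagged

for entry in off_hours_changes(AUDIT_LOG):
    print("Review:", entry)
```

The point isn’t the code, it’s the question it answers: of all the permission changes last week, which ones happened when nobody should have been making them?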
However, as I said, it goes beyond simple logins and permissions. It’s also about database and server configurations. Take, for example, Microsoft SQL Server. A mechanism introduced, but never widely adopted, was the ability to run external code through the Common Language Runtime (CLR). Some organizations found uses for this (I know of a bank that ran some pretty serious math as part of a data validation routine, math that could only be done efficiently through the CLR). However, most places have disabled the CLR for two reasons. First, they’re simply not using it. Second, it’s a possible attack vector for infiltrating a system. You should have a mechanism in place that monitors your server configurations and reports when they change, especially when the change isn’t something relatively benign like a memory allocation or some other performance-related setting.
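Monitoring for that kind of configuration drift can be as simple as comparing a snapshot against a known-good baseline. A minimal sketch, where the setting names echo SQL Server’s sp_configure options but the baseline values and the “sensitive” list are assumptions for illustration:

```python
# Known-good baseline: the values here are assumptions, not recommendations.
BASELINE = {"clr enabled": 0, "xp_cmdshell": 0, "max server memory (MB)": 8192}
# Settings whose change should page someone, not just land in a report.
SENSITIVE = {"clr enabled", "xp_cmdshell"}

def config_drift(current, baseline=BASELINE):
    """Return (setting, expected, actual, is_sensitive) for every changed setting."""
    drift = []
    for setting, expected in baseline.items():
        actual = current.get(setting)
        if actual != expected:
            drift.append((setting, expected, actual, setting in SENSITIVE))
    return drift

# A hypothetical snapshot: CLR was switched on, memory was bumped.
snapshot = {"clr enabled": 1, "xp_cmdshell": 0, "max server memory (MB)": 12288}
for setting, old, new, sensitive in config_drift(snapshot):
    level = "ALERT" if sensitive else "info"
    print(f"{level}: {setting} changed from {old} to {new}")
```

The two changes above get very different treatment: the memory bump is routine, the CLR flip is exactly the kind of change someone needs to explain.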
Further still, what kind of data are you using to build and test your systems? The very best data for development and testing is, of course, production data. However, it should be obvious that moving production data outside the secure fortress that is your production system opens you up to another attack vector. If anyone in IT can access production data outside of production, then your systems are certainly open to fraud of all sorts. But hey, let’s say we have implicit trust in our people, so it’s OK if they have access, right? Well, are your non-production systems as locked down as production? Do they have as many layers of protection? Are people working offsite, maybe leaving their laptop open at home? Has anyone ever left one in a coffee shop or on a train? It’s absolutely happened, and with it went your data, maybe your access, and certainly the ability for bad actors to defraud the system. So, generated or sanitized data is a must for your non-production systems.
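What does “sanitized” look like in practice? A minimal sketch, assuming hypothetical field names (map them to your real schema): keep the shape of the data, mask the sensitive parts, and replace identities with deterministic stand-ins so test runs stay repeatable:

```python
import hashlib
import re

def mask_account(number: str) -> str:
    """Keep the last four digits, mask the rest."""
    digits = re.sub(r"\D", "", number)
    return "*" * (len(digits) - 4) + digits[-4:]

def sanitize_row(row: dict) -> dict:
    """Return a copy of a record that is safe for non-production use.
    The field names are assumptions for illustration."""
    clean = dict(row)
    clean["account_number"] = mask_account(row["account_number"])
    # Deterministic stand-in: the same customer always maps to the same label.
    tag = hashlib.sha256(row["customer_name"].encode()).hexdigest()[:6]
    clean["customer_name"] = f"Customer {tag}"
    return clean

row = {"account_number": "1234-5678-9012", "customer_name": "Jane Doe", "balance": 10.0}
print(sanitize_row(row))
```

A developer still gets realistic-looking rows with valid relationships between them, but a lost laptop no longer means lost customer data.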
We can easily keep going. I think it’s legally required that I mention AI: Large Language Models (LLMs) enable people with no real skills to attempt attacks on your systems for the purpose of stealing. Further, the LLM tools you’re no doubt creating for internal and external use are subject to prompt injection (similar to SQL injection and other code-based attack vectors), model poisoning, and a simple lack of output filtering. Outdated and unpatched software exposes your systems to attack. There’s still more that could be listed here, but I think this hammers home the point.
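Output filtering, at its simplest, means checking a model’s response before it leaves your system. A minimal sketch, assuming account numbers are the sensitive pattern you care about (the regex is an illustration; tune it to your own data):

```python
import re

# Matches account-number-like strings such as 1234-5678-9012.
# This pattern is an assumption for illustration, not a complete detector.
ACCOUNT_PATTERN = re.compile(r"\b\d{4}(?:[- ]?\d{4}){2,3}\b")

def filter_output(response: str) -> str:
    """Redact account-number-like strings from a model response
    before it is shown to a user or logged."""
    return ACCOUNT_PATTERN.sub("[REDACTED]", response)

print(filter_output("Sure! The balance for account 1234-5678-9012 is $10."))
```

Even if a prompt injection tricks the model into echoing data it shouldn’t, a filter like this is one more layer between the model and the user.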
What’s Needed?
The simple question is, what do we do? The answer is not as simple as the question. What is required is defense in depth. Defense in depth is a strategy that employs multiple, layered security measures so that, should one defense be breached, others are in place to mitigate threats and reduce the overall risk. This means that, yes, secure passwords are a great place to start. Moving on to least privilege principles adds to it. Having monitoring in place to tell you what version your software is on, who has access, when that access changes, what the server configuration is, and whether or not it is being changed is a vital aspect of defense in depth. So is following coding best practices to sanitize both input and output. Ensuring that you’re not moving sensitive data out into the wider world will also help.
Feel free to check out Redgate Monitor and let’s connect on how we help the biggest banks and financial institutions in the world protect their data, reduce risk, and sleep a little easier. You may still lose sleep over these problems, but you can ensure that you don’t lose your data.
Tools in this post
Redgate Monitor
Real-time multi-platform performance monitoring, with alerts and diagnostics





