PostgreSQL is an open-source relational database management system valued for its power, standards-compliance, and extensibility. Whether you’re building a microservice, spinning up test environments, or maintaining a dev copy of production, it’s often easier and cleaner to run Postgres in a container than to install it locally.
Running Postgres in Docker is great for a quick test, but what if you want it to behave like a proper, production-style setup, with SSL encryption, certificate-based authentication, persistent volumes, and custom configuration? In this article, we’ll find out how, tackling tasks such as:
- Generating and using self-signed SSL certificates with Postgres.
- Setting up a PostgreSQL Docker container that uses those certs for encrypted client connections.
- Configuring authentication for both automated services and human users.
- Controlling the behavior of your Postgres instance using mounted config files.
We’ll do all of this with security-conscious file ownership and permission control. By the end, you’ll have a containerized PostgreSQL instance that behaves more like a hardened server than a throwaway development toy.
Why Secure Your Dockerized PostgreSQL?
If you’re just experimenting, the default Docker Postgres image is fine. But when you’re developing an application that will later use a properly secured production database, it’s worth aligning your local environment now, because:
- SSL support means traffic is encrypted, even locally.
- Certificate-based auth avoids relying on plaintext passwords or ENV variables.
- Custom config files give you control over connection rules, logging, and behavior.
- Mounted volumes let you persist data between container runs and simplify backups.
It also makes your setup portable — anyone else on your team can run the same containerized environment and be sure it behaves the same way.
Prerequisites for a Secure Setup
To get started, you’ll need:
- Docker (tested with Engine 28.2)
- The PostgreSQL 17 client (psql)
- OpenSSL (we used 3.5.0 LTS)
- A Unix-based OS (e.g., Ubuntu 24.04)
- Basic knowledge of shell commands and file permissions
Step 1: Set Up SSL/TLS for Encrypted Connections
PostgreSQL supports SSL, but it doesn’t come enabled in the default Docker image. You’ll need to create the necessary certificate and key files first – it’s not difficult, just a few OpenSSL commands – and then configure the container to use them.
We’ll start by generating self-signed certificates using OpenSSL. Later, you could replace them with ones from a trusted certificate authority like Let’s Encrypt if you plan to expose your database to the outside world.
Create Your Certificate Authority (CA)
```shell
mkdir certs && cd certs

# Create the CA private key
openssl genrsa -out ca-key.pem 4096

# Create the self-signed CA certificate
openssl req -x509 -new -key ca-key.pem -out ca-cert.pem -days 365 \
  -subj "/C=US/ST=State/L=City/O=MyOrg/OU=Dev/CN=MyCA"
```

This creates a 4096-bit RSA private key via the OpenSSL command-line tool and saves it in ca-key.pem, then uses that key to create a self-signed CA certificate (ca-cert.pem) valid for 365 days.
Create Server Certificate
This is what the PostgreSQL container will present to clients when they connect:
```shell
# Server private key
openssl genrsa -out server-key.pem 4096

# Certificate signing request
openssl req -new -key server-key.pem -out server-req.pem \
  -subj "/C=US/ST=State/L=City/O=MyOrg/OU=Dev/CN=postgres"

# Sign the request with your CA
openssl x509 -req -in server-req.pem -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -out server-cert.pem -days 365
```
This generates the server’s 4096-bit private key, creates a certificate signing request (CSR), and signs the request with your CA’s private key (ca-key.pem), producing an X.509 certificate (server-cert.pem) valid for 365 days. The -subj flag embeds metadata in the certificate:
- C: Country
- ST: State
- L: City or locality
- O: Organization name
- OU: Organizational unit
- CN: Common Name (in our case, postgres; with sslmode=require the name isn’t verified, but stricter modes like verify-full require it to match the host clients connect to)
If you omit the -subj flag, OpenSSL will prompt you for each of these values. It’s usually more convenient to provide them upfront.
Create Client Certificate
This certificate will be used by a trusted client (such as a web app or a CI process) to authenticate.
```shell
# Client private key
openssl genrsa -out client-key.pem 4096

# Certificate signing request — the CN must match the database user name
openssl req -new -key client-key.pem -out client-req.pem \
  -subj "/C=US/ST=State/L=City/O=MyOrg/OU=Dev/CN=pguser"

# Sign the request with your CA
openssl x509 -req -in client-req.pem -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -out client-cert.pem -days 365
```

Note that the CN here is pguser: with certificate authentication, PostgreSQL matches the certificate’s Common Name against the database user name.
Set File Permissions
Make sure these certificates are usable by Postgres, but not world-readable.
```shell
chmod 600 server-key.pem client-key.pem ca-key.pem
chmod 644 server-cert.pem client-cert.pem ca-cert.pem

# The PostgreSQL container runs as UID 999 — give that user
# ownership of the server key and certificate:
sudo chown 999:999 server-key.pem server-cert.pem

# Let your host user keep access to the client certificates and the CA certificate:
sudo chown $(whoami):$(whoami) client-key.pem client-cert.pem ca-cert.pem
```
Step 2: Configuring PostgreSQL for SSL and Authentication
We now have all the certificates we need, but PostgreSQL won’t use them unless we tell it to — via configuration files. We’ll create two key files:
- postgresql.conf — controls server behavior, including SSL and logging
- pg_hba.conf — controls authentication rules and client access
These will be mounted into the container at runtime, so Postgres picks them up on launch.
Create the Config Directory
```shell
cd ..  # Back to project root
mkdir config
cd config
```
Create pg_hba.conf
This file tells PostgreSQL which users can connect, from where, using which authentication method.
```shell
cat > pg_hba.conf << 'EOF'
# PostgreSQL Client Authentication Configuration File

# TYPE    DATABASE  USER      ADDRESS        METHOD

# Require client certificate for pguser
hostssl   all       pguser    0.0.0.0/0      cert

# Allow other users with password (SCRAM)
hostssl   all       all       0.0.0.0/0      scram-sha-256

# Local Unix socket access
local     all       postgres                 peer
local     all       all                      peer

# IPv4/IPv6 localhost access
host      all       all       127.0.0.1/32   scram-sha-256
host      all       all       ::1/128        scram-sha-256

# Explicitly reject non-SSL connections
host      all       all       0.0.0.0/0      reject
EOF
```
What’s going on here?
- The pguser role will authenticate using its client certificate — there’s no password required.
- Other users (like human developers) authenticate with a password over SCRAM, still protected by SSL. The pguser role is reserved for automated connections, where passwords are inconvenient and less secure: certificates provide stronger authentication for service-to-service communication, since an application can’t forget its certificate and a certificate can’t be intercepted as easily as a password.
- Local socket access (e.g. via docker exec) is allowed with no password – a handy “back door” for manual admin tasks when you’re inside the container.
It’s a common and practical split: strong certificate-based auth for automated systems, simpler password auth for developers.
Create postgresql.conf
This is PostgreSQL’s main configuration file. We’ll use it to enable SSL and point to the right files.
```shell
cat > postgresql.conf << 'EOF'
# Enable SSL
ssl = on
ssl_cert_file = '/certs/server-cert.pem'
ssl_key_file = '/certs/server-key.pem'
ssl_ca_file = '/certs/ca-cert.pem'

# Networking
listen_addresses = '*'
port = 5432

# Logging
log_connections = on
log_disconnections = on

# File paths
hba_file = '/config/pg_hba.conf'
data_directory = '/pgdata'
EOF
```
Why set listen_addresses = ‘*’?
This allows connections from any interface, which is useful in development or when other containers will connect via a custom Docker network. In production, you’d narrow this to specific addresses.
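For example, in production you might bind Postgres only to the loopback interface and one internal address (the IP below is a placeholder for your own network):

```
# postgresql.conf — listen only on localhost and one internal interface
listen_addresses = 'localhost, 10.0.0.5'
```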
Fix Permissions
PostgreSQL runs inside the container as UID 999, so it must own the config files. Otherwise, you’ll likely run into permission errors that stop the container from starting or accepting connections:
```shell
sudo chown 999:999 postgresql.conf pg_hba.conf
sudo chmod 644 postgresql.conf pg_hba.conf
```
Step 3: Prepare the Data Directory
Postgres needs a place to store databases, tables, logs, and so on – without a data directory, it simply won’t run. By default it uses /var/lib/postgresql/data, but we’re going to use a custom path, /pgdata, and mount it into the container as a volume; the container will look for (or initialize) all its data under that directory. Create the directory on your host machine and make sure only the postgres user inside the container (UID 999) can access it:
```shell
cd ..  # Back to project root
mkdir pgdata
sudo chown -R 999:999 pgdata
sudo chmod 700 pgdata
```
Step 4: Launch the Postgres Container
With certificates, configs, and data directory in place, it’s time to spin up the container.
Add Yourself to the Docker Group (Optional)
So you can run Docker without sudo:
```shell
sudo usermod -aG docker $USER
# Then restart your terminal session for the change to take effect.
```
Create a Docker Network
This allows Postgres to be easily connected to other containers (like an app or pgAdmin):
```shell
docker network create postgres-network
```
Start the PostgreSQL Container
```shell
docker run -d \
  --name postgres \
  --network postgres-network \
  -e POSTGRES_PASSWORD=admin123 \
  -e PGDATA="/pgdata" \
  -v $(pwd)/config:/config \
  -v $(pwd)/pgdata:/pgdata \
  -v $(pwd)/certs:/certs \
  -p 5432:5432 \
  postgres:latest \
  -c 'config_file=/config/postgresql.conf'
```
This starts the container (with the name postgres) in detached mode. It also:
- Uses our custom postgresql.conf and pg_hba.conf files.
- Mounts certs, configs, and data volumes.
- Makes Postgres accessible on port 5432.
- Sets the default postgres user password (required even if not used).
Verify It’s Running
```shell
docker ps
```
You should see something like this:
```
CONTAINER ID   IMAGE             ...   PORTS                    NAMES
be901fac6eb9   postgres:latest   ...   0.0.0.0:5432->5432/tcp   postgres
```
Step 5: Creating Database Users
Now that our PostgreSQL container is up, we need to create the database users that match our authentication setup. We’ll create `pguser` for cert-based service access and `john` for password-based human access.
First, open a shell inside the container:
```shell
docker exec -it postgres bash

# Switch to the postgres user (the default superuser inside the container)
su - postgres
```
Open the PostgreSQL client:
```shell
psql
```
Create two users — one for automated service-to-service connections (pguser, matching the CN in the client certificate), and one for general development (john):
```sql
BEGIN;
CREATE USER pguser;
CREATE USER john WITH PASSWORD 'john123';
COMMIT;
```
Exit psql and the container shell:
```shell
\q     # quit psql
exit   # leave the postgres user's shell
exit   # leave the container
```
Step 6: Testing SSL Connections
Let’s confirm that pguser can connect using its certificate and that john can connect using a password over SSL.
From the host machine, run:
```shell
psql "host=localhost port=5432 dbname=postgres user=pguser \
      sslmode=require \
      sslcert=certs/client-cert.pem \
      sslkey=certs/client-key.pem \
      sslrootcert=certs/ca-cert.pem"
```
You should see something like:
```
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384)
Type "help" for help.

postgres=>
```
Notice that no password was requested — the client certificate handled authentication.
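If you want to confirm encryption from the server side as well, PostgreSQL exposes per-connection SSL details in the pg_stat_ssl view; joining pg_stat_activity adds the user name. Run this from inside psql:

```sql
-- Show SSL status, protocol version, and cipher for each connection
SELECT pid, usename, ssl, version, cipher
FROM pg_stat_ssl
JOIN pg_stat_activity USING (pid);
```

Connections made through our hostssl rules should show ssl = t.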
Test a Password-Based User
If you connect as john, the container will fall back to password authentication, but still require SSL:
```shell
psql "host=localhost port=5432 dbname=postgres user=john sslmode=require"
```
You’ll be prompted for john123 (or whatever password you set).
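As a final check, the reject rule at the bottom of pg_hba.conf should refuse any connection that isn’t encrypted. You can confirm this by explicitly disabling SSL on the client (the exact error wording varies by client version):

```shell
# This should fail with an error mentioning "no pg_hba.conf entry",
# because unencrypted TCP connections match only the reject rule
psql "host=localhost port=5432 dbname=postgres user=john sslmode=disable"
```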
Step 7: Switching to Let’s Encrypt (For Real-World Deployments)
Our self-signed certificates are fine for development, but production should use trusted certificates from a CA like Let’s Encrypt. We’ll end by showing how you’d swap to Let’s Encrypt in production.
The workflow is very similar:
- Use certbot (or an equivalent ACME client) to request your certificates.
- Replace the files server-cert.pem and server-key.pem in /certs.
- Update postgresql.conf if the file paths change.
- Restart the container.
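As a sketch of that swap (the domain below is a placeholder, and the file paths follow certbot’s defaults):

```shell
# Request a certificate for your database host (placeholder domain)
sudo certbot certonly --standalone -d db.example.com

# Copy the issued certificate and key into the directory mounted at /certs
sudo cp /etc/letsencrypt/live/db.example.com/fullchain.pem certs/server-cert.pem
sudo cp /etc/letsencrypt/live/db.example.com/privkey.pem certs/server-key.pem
sudo chown 999:999 certs/server-cert.pem certs/server-key.pem
sudo chmod 600 certs/server-key.pem

# Restart the container to pick up the new certificate
docker restart postgres
```

Remember that Let’s Encrypt certificates expire after 90 days, so you’d automate the renewal and copy steps in practice.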
If you’re running Postgres behind a reverse proxy or load balancer, you can often terminate SSL at that layer and connect to Postgres over a secure network, simplifying the setup.
Conclusion
If you’ve followed this workflow, you should now have a containerized PostgreSQL instance with:
- SSL/TLS encryption and certificate-based authentication.
- Custom pg_hba.conf and postgresql.conf for fine-grained control.
- Persistent volumes to keep your data across container runs.
- A clear path to production-grade certificates via Let’s Encrypt.
You’ve just built something much closer to a real-world PostgreSQL setup — SSL, cert-based auth, persistent data — with all the convenience of Docker. Not bad for a few shell commands. It’s portable, reproducible, and secure enough to mirror a production environment, making your database development and testing far more realistic.