Docker Logging Guide Part 2: Advanced Concepts & Best Practices


Welcome to part 2 of our Docker Logging Guide series. In this article, we will explore advanced concepts and Docker logging best practices to help you optimize your logging strategy.

In Part 1 of this series, we covered the fundamentals of Docker logging, including basic concepts, logging drivers, and why it’s important to perform Docker logging when building containers. Building upon that foundation, Part 2 will cover advanced logging concepts and introduce best practices you can use when implementing Docker logging. So, let’s get started:

Advanced Docker Concepts

In this section, I will introduce the concepts you will need as you work out what kind of logging configuration suits your application.

Standard Output (stdout) and Standard Error (stderr)

In Docker, standard output (stdout) and standard error (stderr) help capture logs generated by containerized applications. Understanding how stdout and stderr function is beneficial for effective logging and troubleshooting in Docker environments.

Standard Output (stdout)

Standard output (stdout) is the default destination for normal program output in Docker containers. It captures the regular operational messages, informational logs, and success messages generated by the application running inside the container.

Many applications and services are configured to use stdout to log information about their operations, status, and events. Tools like `docker logs` or `kubectl logs` also collect stdout streams from containers to provide real-time visibility into application behavior.

Standard Error (stderr)

Standard error (stderr) is the default destination for error messages and diagnostic output in Docker containers. It captures error messages, warnings, stack traces, and other critical information indicating problems or abnormalities in the application.

Any unexpected behavior or runtime errors encountered by the application are typically logged to stderr. Developers and operators rely on stderr to identify and diagnose issues within containerized applications.

How stdout and stderr Work Together

stdout and stderr provide a clear separation between normal operational messages and error-related messages. They allow for easier parsing and analysis of logs. Docker captures both stdout and stderr streams from containers. This allows users to view and manage logs using tools like docker logs.

Docker’s logging drivers allow users to redirect container logs from stdout and stderr to different destinations, such as log files, syslog, or external log management systems.

Practical Example

Consider a Docker container running a web server application. The application logs informational messages about incoming requests and responses to stdout, while error messages related to server crashes or resource exhaustion are logged to stderr.
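The separation between the two streams can be demonstrated with plain shell redirection; the same mechanics apply to a process running inside a container (the messages and file names here are illustrative):

```shell
# Normal output goes to stdout, errors to stderr; redirect each to its own file
{ echo "GET /index.html 200 OK"; echo "ERROR: worker crashed" >&2; } >app.out 2>app.err

cat app.out   # GET /index.html 200 OK
cat app.err   # ERROR: worker crashed
```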

Volumes

Volumes in Docker provide a flexible mechanism for persisting data generated by containers, including logs. Developers use volumes to persist logs outside the container, which ensures that log data remains accessible even after the container is stopped or removed.

Definition of Volumes

Volumes in Docker are directories or file systems that exist outside of the container’s Union File System (UFS). They can be mounted into containers. This allows data to be shared and persisted across container lifecycles.

Unlike data stored within a container’s writable layer, data stored in volumes persists even if the container is removed. This makes volumes an ideal choice for persisting logs and other data generated by containers.

Creating and Mounting Volumes

Volumes can be created using the docker volume create command. They can also be automatically created when specified in a container’s configuration.

Volumes can then be mounted into containers at runtime using the -v or --mount flag when running the container. This allows the container to read from and write to the volume as if it were a local directory.
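For example (the volume name app_logs and the image my-app are illustrative):

```shell
# Create a named volume
docker volume create app_logs

# Mount it with the -v flag (volume:path-in-container)
docker run -d -v app_logs:/var/log/app my-app:latest

# Equivalent --mount syntax
docker run -d --mount source=app_logs,target=/var/log/app my-app:latest
```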

Using Volumes for Log Persistence

When running a container that generates log data, you can mount a volume to a directory within the container where logs are written.

For example, you can mount a volume to the /var/log directory within the container, so that logs generated by the application running in the container are persisted outside of it.

Benefits of Using Volumes for Log Persistence

  • Data Integrity: By persisting logs outside the container, data integrity is maintained even if the container is stopped, restarted, or removed.
  • Ease of Access: Log data stored in volumes can be easily accessed and analyzed using external tools or processes, independently of the container.
  • Scalability: Volumes allow scalable storage solutions to be used for log persistence, such as network-attached storage (NAS), cloud storage, or distributed file systems.

Practical Example

Consider a Docker container running a web server application that generates access logs. Instead of storing the logs within the container, a volume can be mounted to the directory where the access logs are written. This ensures that the logs persist even if the container is stopped or removed.
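A sketch of this setup with nginx (the host path is illustrative; note that the official nginx image normally symlinks its log files to stdout/stderr, so mounting a volume over the log directory switches them to real files):

```shell
# Mount a host directory over the web server's log directory
docker run -d --name web_server \
  -v /srv/logs/nginx:/var/log/nginx \
  nginx:latest

# The log files remain on the host even after the container is removed
docker rm -f web_server
ls /srv/logs/nginx
```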

Docker container logs

Docker container logs represent a collection of records detailing events, messages, and output produced by applications operating within Docker containers. These logs serve as invaluable resources for grasping the performance, behavior, and overall health of containerized applications. Docker container logs contain diverse types of information, such as:

Application Messages: These logs encapsulate the communication generated by the application housed within the container. They contain a spectrum of data, including informative notifications, warnings, error alerts, debug insights, and other diagnostic details.

System Events: These logs pertain to events within the Docker daemon and the lifecycle of containers. They document critical occurrences such as container initialization, start-up, shutdown, and removal.

Container Environment Information: This category includes logs containing pertinent details about the container’s operational environment. Such information includes configuration parameters, network connections, resource utilization metrics, and other environmental factors. These logs serve as valuable resources for monitoring container performance, diagnosing configuration issues, and optimizing resource allocation.

Security Auditing: These logs record security-related events within the container, including access control activities, authentication attempts, and other security-related incidents.

Docker logs command

The docker logs command helps in viewing logs generated by Docker containers. It allows users to access the stdout and stderr streams of a running container, providing valuable insights into the container’s behavior and operations.

The following is a demonstration of the basic usage of the docker logs command:

Example Usage

Let’s say we have a Docker container running a web server application named “web_server”. We can use the `docker logs` command to view the logs generated by this container:
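Assuming the container name above, a minimal invocation looks like this:

```shell
docker logs web_server
```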

The docker logs command will display the logs generated by the “web_server” container. The output may include informational messages, warnings, errors, and other log entries produced by the application running inside the container.

Additional Docker Log Options

Follow mode (-f): Allows you to continuously stream logs as they are produced by the container. This is useful for real-time monitoring of container logs.

Timestamps (--timestamps): Adds timestamps to each log entry, providing a time reference for when the log message was generated.

Tail (--tail or -n): Limits the number of log entries displayed, showing only the last N lines of the log output.
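These options can be combined, for example:

```shell
# Stream new entries as they arrive, with timestamps,
# starting from the last 100 lines
docker logs --follow --timestamps --tail 100 web_server
```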

Configuring logging drivers in Docker

Configuring logging drivers in Docker is a critical aspect of managing container logs effectively. Users can tailor their logging setup to meet specific requirements and ensure reliable log collection, storage, and management.

Understanding Available Logging Drivers

Logging drivers in Docker determine how container logs are collected, processed, and stored. Each logging driver offers unique functionalities and integrations, catering to diverse logging requirements and infrastructure setups. Let’s do a deep dive into each logging driver, exploring their features, error-handling mechanisms, and space usage considerations.

json-file

The json-file logging driver writes container logs in JSON format to local files on the Docker host. Each log entry is formatted as a JSON object. This provides structured data for easy parsing and analysis.

These files are typically located under /var/lib/docker/containers/<container-id>/<container-id>-json.log on the Docker host.

You can check the current default logging driver for the Docker daemon with `docker info --format '{{.LoggingDriver}}'`, which prints json-file on a default installation.

You can determine the logging driver for a running container with `docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container-name>`.

In Docker Compose, the logging key in a service definition controls how that service’s logs are collected and stored, determining the logging driver and its options.

This example shows you how to use the json-file logging driver for local logging:
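A minimal Docker Compose sketch (the service name and image are illustrative):

```yaml
services:
  web_server:
    image: nginx:latest
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once a file reaches 10 MB
        max-file: "3"     # keep at most three rotated files
```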

If there are errors in writing logs to the file system, Docker may throttle log output or discard log entries, depending on the severity of the error. If there isn’t enough space for logging, Docker may prioritize essential system operations and delay or discard log entries until space becomes available.

syslog

The syslog logging driver forwards container logs to the syslog daemon on the Docker host. It typically stores logs in /var/log/syslog or /var/log/messages, depending on the operating system and syslog configuration.

Errors in sending logs to the syslog daemon may result in log entries being lost or delayed. Syslog often has built-in mechanisms for handling log delivery failures. It also manages log storage independently, and Docker does not directly control log retention or space usage.
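For example, a container can be pointed at a syslog daemon like this (the address is illustrative; omit syslog-address to use the host’s local daemon):

```shell
docker run -d --log-driver syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  nginx:latest
```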

journald

For the journald logging driver, Docker sends container logs to the systemd journal on the Docker host. This provides centralized logging capabilities alongside system logs. Logs are stored in the systemd journal’s binary format, typically located under /var/log/journal/

If errors occur when sending logs to the systemd journal, systemd may apply journal rotation and storage management policies to handle them and ensure log integrity.

The systemd journal manages log storage and rotation, ensuring that logs are retained within configured storage limits. If space is limited, older logs may be rotated or discarded to make room for new entries.

fluentd

The fluentd logging driver streams container logs to an instance of Fluentd, a robust log collector and aggregator that forwards logs to various destinations for further processing and storage. The storage location for logs processed by Fluentd depends on the configuration of Fluentd and the destination specified for log storage.

Errors in log transmission to Fluentd may result in log entries being buffered or retried based on Fluentd’s configuration. Fluentd supports error handling and retry mechanisms to ensure log delivery reliability. Fluentd provides flexibility in log storage and management. This allows users to define storage policies, buffer sizes, and retention periods based on their requirements.

awslogs

The awslogs logging driver sends container logs directly to Amazon CloudWatch Logs, a managed log management service provided by AWS. Logs are stored within CloudWatch Logs log groups, which are associated with the AWS account and region.

In case of errors when sending logs to CloudWatch, CloudWatch Logs offers robust built-in mechanisms to ensure reliable log delivery.

CloudWatch Logs manages log storage and retention according to configured log group settings. This ensures that logs are retained within specified retention periods.

gelf

The gelf logging driver forwards container logs to a Graylog Extended Log Format (GELF) endpoint, typically a Graylog server or another GELF-compatible endpoint. Logs processed by Graylog are stored within Graylog’s storage backend, which is typically configured to store logs in Elasticsearch.

Graylog also supports various built-in mechanisms to ensure log delivery reliability. It manages log storage and retention, allowing users to define retention policies and disk space thresholds for log storage.

logentries

The logentries logging driver sends container logs to Logentries, a cloud-based log management and analytics platform that allows users to collect, centralize, and analyze logs from various sources, including Docker containers. Logs sent to Logentries are stored within the Logentries platform, accessible via the Logentries web interface.

In case of errors, Logentries also supports various built-in mechanisms to ensure log delivery reliability.

splunk

The splunk logging driver sends container logs to Splunk, a leading platform for monitoring, searching, and analyzing machine-generated data, including logs. When using the splunk logging driver, Docker container logs are sent directly to Splunk Enterprise or Splunk Cloud and stored within Splunk’s data index.

Configuring the Logging Driver for a Single Container

To configure the logging driver for a single container, use the --log-driver and --log-opt flags when running the container.

For example:
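A sketch using nginx as a stand-in image:

```shell
docker run -d --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx:latest
```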

This command sets the logging driver to json-file and specifies options for maximum log size and number of log files.

Configuring the Default Logging Driver for the Docker Daemon

To configure the default logging driver for the Docker daemon, modify the Docker daemon configuration file (typically /etc/docker/daemon.json).

For example:
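A sketch of /etc/docker/daemon.json (the daemon must be restarted for the change to apply to new containers):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```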

Deciding the Delivery Mode of Log Messages

The log delivery mode of a Docker container dictates how it manages incoming log messages sent to the designated driver. There are two modes:

  1. Blocking Mode: Blocking mode is the default. Here, log messages are synchronously delivered to the specified driver, which means the application generating the logs pauses until each log entry is successfully dispatched. While this ensures the completeness of log delivery, it may affect performance as the application awaits the delivery process.

The following example shows how to configure blocking mode in /etc/docker/daemon.json:
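A sketch (since blocking is the default, stating it explicitly is optional):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "blocking"
  }
}
```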

With drivers like json-file or local, delays are usually insignificant since they write to the local filesystem. However, if logs are sent to a remote server, noticeable delays may occur if log delivery is sluggish.

  2. Non-blocking Mode: In non-blocking mode, incoming log entries are processed asynchronously without halting the application. They are stored temporarily in a memory buffer until the designated logging driver can handle them. After processing, they are cleared from the buffer to accommodate new entries.

The following example shows how to configure non-blocking mode in /etc/docker/daemon.json:
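For instance:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking"
  }
}
```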

Using non-blocking mode minimizes performance issues, even in scenarios with high logging activity. Nonetheless, there’s a risk of losing log entries if the driver struggles to keep up with the influx of log messages from the application. If you want to enhance reliability in non-blocking mode, you can increase the maximum buffer size.

Below is an example configuration with an increased buffer size:
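For example (the 4m value is an arbitrary illustration):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "4m"
  }
}
```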

Docker Logging Best Practices

Effective logging is the backbone of maintaining visibility, diagnosing issues, and ensuring the stability of containerized applications. As Docker continues to gain popularity in modern software development, understanding and implementing best practices for Docker logging becomes paramount. Let’s do a deep dive into each of the Docker logging best practices, covering various scenarios, implementation strategies, and their associated benefits. Let’s get to it!

Logging via Application

This approach is ideal when your application produces structured logs or requires custom log formatting, making it worthwhile to embed logging functionality within the application code. In this approach, you can utilize logging libraries compatible with Docker containers, such as log4j for Java applications or Winston for Node.js applications, to generate informative and standardized log messages.

How you can implement it

  • Integrate the chosen logging framework into the application codebase (e.g., add log4j dependencies to the Java project).
  • Define loggers and log levels within the application code to capture relevant events and messages.
  • Configure logging appenders or handlers to specify log output destinations (e.g., console, file).

Benefits of this approach

Developers gain granular control over log generation and formatting, facilitating easier debugging, monitoring, and analysis.

Example Implementation
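The section mentions log4j and Winston; as a language-neutral illustration, the same pattern in Python’s standard logging module might look like this (the logger name and messages are arbitrary). Routine messages go to stdout and errors to stderr, so Docker’s logging driver sees them on the correct streams:

```python
import logging
import sys

logger = logging.getLogger("web_server")
logger.setLevel(logging.DEBUG)

# Handler for routine messages: stdout, everything below ERROR
out = logging.StreamHandler(sys.stdout)
out.setLevel(logging.INFO)
out.addFilter(lambda record: record.levelno < logging.ERROR)

# Handler for errors: stderr, ERROR and above
err = logging.StreamHandler(sys.stderr)
err.setLevel(logging.ERROR)

fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
out.setFormatter(fmt)
err.setFormatter(fmt)
logger.addHandler(out)
logger.addHandler(err)

logger.info("request served")          # -> stdout
logger.error("database unreachable")   # -> stderr
```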

Logging Using Data Volumes

It is suitable for applications generating high volumes of log data or requiring persistent storage. In this scenario, leveraging data volumes is recommended: you mount external volumes to store log files outside the container filesystem, which ensures data durability and facilitates log analysis and archival.

How to implement

  • Create a data volume or mount a host directory to the container’s log directory.
  • Update application or container configurations to write logs to the mounted volume or directory.
  • Monitor log files in the volume for changes and perform log rotation or archival as needed.

The benefit of this approach

Data volumes separate log storage from the container filesystem. This ensures log persistence across container lifecycle events and facilitates long-term log analysis and retention.

Example Implementation
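A Compose sketch of the steps above (the volume and image names are illustrative):

```yaml
services:
  app:
    image: my-app:latest
    volumes:
      - app_logs:/var/log/app   # the app writes its log files here

volumes:
  app_logs:                     # named volume persisted by Docker
```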

Logging Using the Docker Logging Driver

It’s suitable when you require centralized log management and flexibility in log routing. Configuring Docker logging drivers offers a versatile solution. In this scenario, choose an appropriate logging driver such as json-file, syslog, or fluentd based on your infrastructure’s requirements. You should also consider factors such as compatibility, performance, and integration with logging platforms. This approach enables aggregation, real-time monitoring, and analysis of logs across multiple containers and hosts.

How to implement

  • Set the logging driver for each container in Docker Compose or Docker CLI.
  • Configure logging driver options to specify log format, destination, and other parameters.
  • Monitor Docker logs and verify log transmission to the configured logging system.

The benefit of this approach

Centralized logging simplifies troubleshooting, auditing, and compliance efforts by providing a unified view of log data from diverse sources, facilitating faster incident response and root cause analysis.

Example Implementation
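For instance, a Compose service configured to ship logs to a Fluentd endpoint (the address and tag are illustrative):

```yaml
services:
  app:
    image: my-app:latest
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
        tag: "app.{{.Name}}"   # template expands to the container name
```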

Logging Using a Dedicated Logging Container

It’s suitable for multi-container applications with complex logging requirements. In this scenario, you deploy a dedicated logging container alongside the application containers, tasked with collecting, parsing, and forwarding logs to a centralized logging infrastructure. This facilitates centralized log collection and analysis and decouples logging functionality from application logic, enabling independent scaling and maintenance.

How to implement

  • Define a separate logging container image with tools or agents for log collection and forwarding.
  • Configure volume mounts or network communication between the application and logging containers.
  • Monitor logging container health and performance for reliable log processing.

The benefit of this approach

Dedicated logging containers simplify log management by consolidating logging-related tasks in a separate component, ensuring efficient resource utilization and fault isolation.

Example Implementation
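A Compose sketch with a Fluentd container collecting logs from the application container (the images and config path are illustrative):

```yaml
services:
  app:
    image: my-app:latest
    logging:
      driver: fluentd
      options:
        fluentd-address: "localhost:24224"
    depends_on:
      - fluentd

  fluentd:
    image: fluent/fluentd:latest
    ports:
      - "24224:24224"            # receives logs from the fluentd driver
    volumes:
      - ./fluentd.conf:/fluentd/etc/fluent.conf
```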

Logging Using the Sidecar Approach

It’s suitable for microservices architectures. In this scenario, you will adopt the sidecar pattern for logging. This enables each service to have its dedicated logging component. Deploy a lightweight logging sidecar alongside each service container to handle log aggregation, reducing dependencies and enhancing scalability.

How to Implement

  • Design sidecar containers with specialized logging functionality, such as log shippers or log parsers.
  • Deploy sidecar containers alongside application containers in the same pod or network namespace.
  • Configure inter-container communication and synchronization mechanisms for log data exchange.

The benefit of this approach

The sidecar approach offers flexibility and extensibility in logging architecture design, allowing for tailored solutions to meet diverse application requirements, while minimizing the impact on the main application container.

Example Implementation
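A minimal sidecar sketch in Compose, where the sidecar tails a log file the application writes to a shared volume (the names and paths are illustrative):

```yaml
services:
  app:
    image: my-app:latest
    volumes:
      - logs:/var/log/app

  log-shipper:                   # sidecar: forwards the shared log file
    image: busybox:latest
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumes:
      - logs:/var/log/app

volumes:
  logs:
```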

Logging Using Logging Frameworks and Tools

Using logging frameworks and tools is recommended for applications requiring advanced log processing capabilities or integration with external monitoring systems. For this scenario, explore frameworks and tools such as Elasticsearch, Logstash, and Kibana (the ELK stack) or Prometheus and Grafana. These tools provide comprehensive log analysis and visualization.

How to implement

  • Configure Docker logging options to route container logs to the appropriate log ingestion endpoint or agent.
  • Install and configure logging framework components (e.g., Logstash, Fluentd) to receive, process, and index Docker logs.
  • Monitor logging infrastructure health and performance to ensure reliable log processing and analysis.

The benefit of this approach

Integrating with familiar logging tools streamlines operational workflows, enhances visibility into containerized environments, and facilitates centralized log management across heterogeneous infrastructures.

Example Implementation
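For example, logs can be routed to a GELF endpoint such as Logstash or Graylog (the address is illustrative):

```shell
docker run -d --log-driver gelf \
  --log-opt gelf-address=udp://127.0.0.1:12201 \
  nginx:latest
```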

Logging Using Centralized Logging Solutions

Implementing centralized logging solutions is suitable in distributed environments with multiple Docker hosts and clusters. In this scenario, choose logging platforms such as Splunk, Graylog, or Fluentd. These platforms are best suited for aggregating, correlating, and visualizing logs across the entire infrastructure.

How to Implement

  • Deploy and configure centralized logging infrastructure components (e.g., Elasticsearch, Splunk indexers) to receive and index log data from Docker containers.
  • Integrate Docker logging drivers or agents with centralized logging solutions to facilitate log transmission and ingestion.
  • Establish log retention policies, access controls, and monitoring mechanisms for data integrity and compliance.

The benefit of this approach

Centralized logging solutions provide a single source of truth for log data, enabling real-time monitoring, analysis, and reporting while simplifying compliance audits and regulatory requirements.

Example Implementation
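For example, shipping a container’s logs to Splunk via the HTTP Event Collector (the URL and token are placeholders):

```shell
docker run -d --log-driver splunk \
  --log-opt splunk-url=https://splunk.example.com:8088 \
  --log-opt splunk-token=YOUR-HEC-TOKEN \
  nginx:latest
```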

Implementing Log Rotation Policies

This practice is best suited when you want to mitigate disk space constraints and optimize storage utilization. In this scenario, configure log rotation parameters based on log file size, age, or event triggers to maintain manageable log archives and prevent storage exhaustion.

How to Implement

  • Configure log rotation utilities (e.g., logrotate, Docker logging options) within container environments to manage log files.
  • Define log rotation schedules, thresholds, and retention policies based on application requirements and storage capacity.
  • Monitor log rotation processes and adjust configurations as needed to optimize resource usage and maintain log data integrity.

The benefit of this approach

Log rotation policies prevent log files from consuming excessive disk space, mitigate the risk of storage-related issues, and maintain log data integrity by ensuring timely archival and retention.

Example Implementation
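A daemon-wide rotation policy sketch in /etc/docker/daemon.json (the sizes are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
```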

Conclusion

In this article, we’ve covered advanced Docker logging concepts and best practices to optimize log management in containerized environments. From understanding standard output and error streams to implementing logging drivers, data volumes, and centralized solutions, we’ve provided strategies to enhance observability and troubleshoot efficiently.

By aligning these logging practices with application needs and infrastructure characteristics, you can overcome common challenges and maximize the value of log data. For the best results, experiment, monitor metrics, and collaborate across teams to continually refine logging workflows.

With knowledge from this article, you’ll drive improvements in performance, reliability, and scalability in your DevOps operations. Thanks for reading!


About the author

Bravin Wasike


Bravin is a creative DevOps engineer and technical writer. He loves writing about software development and has experience with Docker, Kubernetes, AWS, Jenkins, Terraform, CI/CD, and other DevOps tools. Bravin has written many articles on DevOps topics such as Kubernetes, Docker, Jenkins, Azure DevOps, AWS, CI/CD, Terraform, Ansible, infrastructure as code, infrastructure provisioning, monitoring and configuration, Git, and source code management.