
Centralized Logging in Spring Boot with ELK Stack: Complete Setup Guide

Logs are an essential part of maintaining a healthy application ecosystem. They allow you to debug issues, trace requests, and monitor application performance. However, as applications scale, logs scattered across different services and systems become difficult to manage. That’s where centralized logging steps in. By consolidating logs in one place, developers and operators can monitor, analyze, and act on them efficiently.

This guide will walk you through implementing centralized logging in Spring Boot with the ELK Stack (Elasticsearch, Logstash, Kibana). You’ll learn what centralized logging is, how to set up an ELK stack using Docker, how to send logs from Spring Boot using Logstash or Filebeat, and how to build data-rich dashboards in Kibana.

Table of Contents

  1. What is Centralized Logging?
  2. Setting Up a Docker-Based ELK Stack
  3. Sending Spring Boot Logs Using Logstash or Filebeat
  4. Visualizing Logs in Kibana
  5. Summary

What is Centralized Logging?

Centralized logging is the practice of consolidating logs from various parts of your application (e.g., microservices, databases, servers) into a single system. Instead of searching through logs scattered across servers, you have a unified view to track events, troubleshoot errors, and monitor performance.

Benefits of Centralized Logging

  1. Single Source of Truth: A centralized logging system stores all logs, allowing teams to find any piece of information with a single query.
  2. Faster Debugging: Logs from multiple microservices are aggregated, meaning you can trace issues end-to-end in distributed systems.
  3. Enhanced Data Analysis: Systems like Elasticsearch enable advanced queries, aggregations, and real-time analytics.
  4. Compliance: For businesses needing to meet data regulations (e.g., GDPR, CCPA), centralized logs ensure traceability and access control.

The ELK Stack is a widely used centralized logging solution that combines open-source tools for log processing and visualization:

  • Elasticsearch: A fast, scalable search engine for indexing and querying logs.
  • Logstash: A log processor that transforms and enriches logs before sending them to Elasticsearch.
  • Kibana: A data visualization tool for creating interactive dashboards and searches.

Now, let’s get started by setting up the ELK stack on Docker.


Setting Up a Docker-Based ELK Stack

Setting up the ELK stack with Docker simplifies deployment by avoiding complex installations. Follow these steps to get a running ELK stack.

Step 1. Install Docker and Docker Compose

Ensure you have Docker and Docker Compose installed on your system. You can verify the installation with:

docker --version
docker-compose --version

Step 2. Create a docker-compose.yml File

The following Docker Compose file spins up Elasticsearch, Logstash, and Kibana:

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      # Security (TLS + auth) is enabled by default in Elasticsearch 8.x; disable it for this local setup
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    volumes:
      - es_data:/usr/share/elasticsearch/data

  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.0
    container_name: logstash
    ports:
      - "5044:5044"
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.0
    container_name: kibana
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch

volumes:
  es_data:

Step 3. Configure Logstash Pipeline

Define how Logstash processes incoming logs by editing logstash/pipeline/logstash.conf:

input {
  tcp {
    port => 5044
    codec => json
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "spring-logs-%{+yyyy.MM.dd}"
  }
}
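
The pipeline above forwards events unchanged. Logstash also supports an optional filter block between input and output; as a sketch, this hypothetical addition stamps every event with an environment field:

filter {
  mutate {
    add_field => { "environment" => "dev" }
  }
}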

Step 4. Start the ELK Stack

Run the entire stack with a single command:

docker-compose up -d

Access the services:

  • Elasticsearch: http://localhost:9200
  • Kibana: http://localhost:5601
  • Logstash (TCP input): localhost:5044

Your ELK stack is now live and ready to process logs.
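
You can verify that Elasticsearch is responding before wiring up the application:

curl http://localhost:9200

This should return a JSON document with the cluster name and version.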


Sending Spring Boot Logs Using Logstash or Filebeat

To centralize logs from your Spring Boot application, you can either push them directly to Logstash over TCP or ship log files to Elasticsearch with Filebeat.

Option 1. Use Logstash for Direct Integration

Integrate Spring Boot and Logstash via Logback.

  1. Add the Logstash Encoder dependency:

The Logstash Logback Encoder serializes your logs as JSON, which works well with centralized log systems like Logstash, Elasticsearch, or Loki.

Add the Maven dependency:

<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>7.3</version>
</dependency>
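
If you build with Gradle instead, the equivalent declaration is:

implementation 'net.logstash.logback:logstash-logback-encoder:7.3'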

To use it effectively, plug in the LogstashEncoder in your Logback config:

<encoder class="net.logstash.logback.encoder.LogstashEncoder" />

This will emit structured logs with fields like @timestamp, message, and any MDC/custom fields you configure.
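
For example, here’s a minimal sketch (the service class and orderId field are illustrative) of adding an MDC value so it appears as its own JSON field:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Values put into the MDC become top-level fields in the encoded JSON event
        MDC.put("orderId", orderId);
        try {
            log.info("Order received");
        } finally {
            // Clean up so the field does not leak into unrelated log lines
            MDC.remove("orderId");
        }
    }
}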

  2. Configure logback-spring.xml to send logs to Logstash:

This config sets up a Logback TCP appender that forwards your Spring Boot logs to Logstash running locally on port 5044, using JSON encoding via LogstashEncoder:

<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5044</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
  </appender>

  <root level="INFO">
    <appender-ref ref="LOGSTASH" />
  </root>
</configuration>

To make it even more resilient, add reconnection settings inside the appender to handle Logstash restarts gracefully:

<keepAliveDuration>5 minutes</keepAliveDuration>
<reconnectionDelay>10000</reconnectionDelay> <!-- in milliseconds -->

What they do:

  • keepAliveDuration sends a keep-alive message after 5 minutes of inactivity so the socket connection stays open.
  • reconnectionDelay makes the appender retry every 10 seconds if the Logstash TCP server is temporarily down, instead of failing outright.

You can also wrap the LOGSTASH appender in an AsyncAppender so that logging never blocks the application thread when Logstash is unavailable, as shown below.
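
A minimal sketch of that wrapping (the name ASYNC_LOGSTASH is illustrative):

<appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
  <appender-ref ref="LOGSTASH" />
  <!-- Drop events rather than blocking the caller when the queue fills up -->
  <neverBlock>true</neverBlock>
</appender>

<root level="INFO">
  <appender-ref ref="ASYNC_LOGSTASH" />
</root>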

Logs will now flow from your Spring Boot application to Logstash.

Option 2. Use Filebeat for Log Aggregation

Alternatively, use Filebeat to ship log files from your Spring Boot application.

  1. Install and configure Filebeat:

This filebeat.yml configures Filebeat to tail your Spring Boot log files and ship them directly to Elasticsearch:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log
    fields:
      app_name: "spring-boot-app"

output.elasticsearch:
  hosts: ["http://localhost:9200"]

A few enhancements you might consider:

  • Add a logging block to monitor Filebeat’s own logs.
  • Add fields_under_root: true if you want app_name to appear at the top level in each document.
  • Use setup.template.enabled: true to auto-load the index template.
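
A sketch with the last two tweaks applied:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log
    fields:
      app_name: "spring-boot-app"
    # Promote custom fields to the top level of each document
    fields_under_root: true

# Load the default index template into Elasticsearch on startup
setup.template.enabled: true

output.elasticsearch:
  hosts: ["http://localhost:9200"]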

  2. Start Filebeat:

filebeat -e

These configurations send application logs to Elasticsearch for indexing.


Visualizing Logs in Kibana

Once logs are in Elasticsearch, you can analyze and visualize them in Kibana.

Step 1. Create an Index Pattern in Kibana

  1. Navigate to Kibana at http://localhost:5601.
  2. Go to Stack Management > Data Views and create a new data view matching spring-logs-*.
  3. Select @timestamp as the time field.

Step 2. Build Interactive Visualizations

  1. Error Trends: Use a line graph to show the number of error logs (level:"ERROR") over time; a sample query follows this list.
  2. Top Endpoints: Create a bar chart that tracks request counts grouped by endpoint.
  3. Latency Distribution: Visualize response times with histograms to identify slow requests.
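
As a starting point, the data behind the error-trend chart can be previewed with an Elasticsearch aggregation. Here’s a sketch you can run in Kibana’s Dev Tools console (the level field comes from the LogstashEncoder output):

GET spring-logs-*/_search
{
  "size": 0,
  "query": { "match": { "level": "ERROR" } },
  "aggs": {
    "errors_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "1h"
      }
    }
  }
}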

Step 3. Save Dashboards

Bundle these charts into a dashboard for easy reference. Set auto-refresh for real-time insights.


Summary

Centralized logging with Spring Boot and the ELK Stack accelerates debugging, enhances observability, and improves performance monitoring. Here’s what you learned:

  1. Centralized Logging Concepts: Aggregate logs from all systems into one place for actionable insights.
  2. Dockerized ELK Setup: Deploy Elasticsearch, Logstash, and Kibana with ease.
  3. Spring Boot Integration: Send logs using Logstash or Filebeat with minimal configuration.
  4. Kibana Visualizations: Explore log data to uncover trends, errors, and anomalies.

Start centralizing your Spring Boot logs with this setup today and elevate your application monitoring to the next level!
