Integrating Logstash with Spring Boot Applications
Managing logs in microservices architectures can be a daunting task without a centralized logging solution. When Spring Boot applications write logs only locally, it is difficult to gain a holistic view for debugging, monitoring, scaling, or auditing. Logstash, a data processing pipeline within the ELK Stack (Elasticsearch, Logstash, Kibana), solves this by collecting and transforming logs from diverse sources into a uniform format before shipping them to centralized storage such as Elasticsearch.
This guide walks you through seamlessly integrating Logstash with Spring Boot applications. With step-by-step instructions, you’ll configure Logback to send logs to Logstash, write a Logstash configuration file, parse and filter logs efficiently, and test the pipeline end-to-end.
Table of Contents
- Why Centralized Logging with Logstash?
- Configuring a Logback Appender to Send Logs to Logstash
- Writing a logstash.conf File for Processing Logs
- Parsing and Filtering Logs in Logstash
- Testing the Pipeline End-to-End
- Official Documentation Links
- Summary
Why Centralized Logging with Logstash?
Logstash is a crucial piece of the ELK Stack, capable of collecting logs from various sources, transforming them, and delivering them to centralized storage. Here’s why centralizing logging through Logstash is vital for Spring Boot microservices:
Advantages:
- Improved Observability: Gain a unified view of logs from multiple services, making it easier to troubleshoot issues in distributed systems.
- Scalable Logging: Process logs from hundreds of microservices without overwhelming local storage.
- Data Enrichment: Logstash can parse, filter, and enrich raw logs with metadata (e.g., timestamps, service names).
- Compatibility: Integrates seamlessly with Spring Boot logging frameworks like Logback or Log4j2.
- Customization: Use Logstash plugins and filters to normalize logs, redact sensitive content, or transform data formats.
Example Problem: “Request traceability is difficult when multiple Spring services log locally.”
Solution: Logstash aggregates logs into Elasticsearch with trace IDs, enabling end-to-end context visibility.
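For example, here is a minimal sketch (the CheckoutService class and traceId field name are illustrative) of how a service can put a trace ID into SLF4J's MDC; the LogstashEncoder configured in the next section includes MDC entries as JSON fields by default, so every forwarded log line carries the same traceId:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.util.UUID;

public class CheckoutService {

    private static final Logger log = LoggerFactory.getLogger(CheckoutService.class);

    public void placeOrder(String orderId) {
        // Put a trace ID into the MDC; LogstashEncoder emits MDC entries as
        // top-level JSON fields, so every log line below carries the same traceId.
        MDC.put("traceId", UUID.randomUUID().toString());
        try {
            log.info("Placing order {}", orderId);
            // ... call downstream services, which log within the same MDC context ...
            log.info("Order {} placed", orderId);
        } finally {
            MDC.clear(); // avoid leaking the trace ID to unrelated work on this thread
        }
    }
}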
With this foundation set, let’s jump into configuring Spring Boot logs for Logstash.
Configuring a Logback Appender to Send Logs to Logstash
Spring Boot uses Logback as its default logging framework. You will need to configure a custom appender to stream logs from your application to Logstash using TCP or UDP.
Step 1. Add Dependencies
First, include the logstash-logback-encoder dependency in your pom.xml:
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>7.3</version>
</dependency>
Step 2. Configure the Logback Appender
Create or modify the logback-spring.xml file in your resources directory to include the appender configuration.
Example – Send Logs Over TCP:
<configuration>
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>127.0.0.1:5044</destination>
<encoder class="net.logstash.logback.encoder.LogstashEncoder" />
</appender>
<root level="INFO">
<appender-ref ref="LOGSTASH" />
</root>
</configuration>
Step 3. Optional Fields for Structured Data
Enhance logs with custom fields for better debugging:
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
<customFields>{"application":"my-spring-app","environment":"production"}</customFields>
</encoder>
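Beyond static custom fields, logstash-logback-encoder also supports per-event structured fields via its StructuredArguments helpers. A short sketch (the PaymentService class and field names are illustrative):
import static net.logstash.logback.argument.StructuredArguments.kv;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {

    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    public void charge(String orderId, long amountCents) {
        // kv(...) renders as "orderId=..." in the message text and, with
        // LogstashEncoder, also adds orderId and amountCents as JSON fields.
        log.info("Charging order {} for {} cents",
                kv("orderId", orderId),
                kv("amountCents", amountCents));
    }
}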
Step 4. Verify Logs in Local Development
Run your Spring Boot app to confirm that logs are forwarded to the TCP/UDP address where Logstash is listening. Logs should seamlessly integrate into the pipeline after processing through Logstash.
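One easy way to generate traffic for this check is a small startup runner that emits a few log lines as soon as the application boots (a minimal sketch; the class name is illustrative):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

@Component
public class LogSmokeTestRunner implements CommandLineRunner {

    private static final Logger log = LoggerFactory.getLogger(LogSmokeTestRunner.class);

    @Override
    public void run(String... args) {
        // These events should show up in Logstash's output (and later in Elasticsearch)
        // if the LOGSTASH appender is wired up correctly.
        log.info("Logstash smoke test: INFO event");
        log.warn("Logstash smoke test: WARN event");
    }
}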
This configuration streams log data in JSON format, which is ideal for parsing and indexing in the next steps.
Writing a logstash.conf File for Processing Logs
Logstash uses a pipeline configuration file (logstash.conf) to define how logs are collected, parsed, and output to the next destination (commonly Elasticsearch).
Step 1. Define the Input Plugin
Tell Logstash to listen for incoming logs on a specific port:
input {
tcp {
port => 5044
codec => json
}
}
Step 2. Add Filtering Logic
Use the filter block to parse, clean, and enrich log data. Here’s an example:
filter {
json {
source => "message" # Parse raw JSON into structured fields
}
mutate {
add_field => { "service_environment" => "production" }
}
date {
match => ["@timestamp", "ISO8601"]
}
}
Explanation:
- json plugin: Extracts key-value pairs from JSON logs emitted by Logback. (Only needed if the message field actually contains a raw JSON string; with the tcp input's codec => json above, incoming events are already parsed.)
- mutate plugin: Adds fields or modifies existing ones (e.g., service_environment).
- date plugin: Normalizes timestamps for consistency across systems.
Step 3. Specify the Output Destination
Finally, define where the processed logs should go:
output {
elasticsearch {
hosts => ["http://localhost:9200"] # Elasticsearch endpoint
index => "spring-logs-%{+yyyy.MM.dd}"
}
stdout {
codec => rubydebug # Preview logs in console
}
}
The output block ensures that transformed logs are ingested into Elasticsearch for visualization with Kibana or other tools.
Parsing and Filtering Logs in Logstash
Filtering is where Logstash shines by enabling advanced log enrichment, formatting, and routing.
Example 1. Extracting Nested Fields
Suppose your JSON logs have nested metadata:
{
"level": "INFO",
"message": "User logged in",
"details": {
"user": "john_doe",
"ip_address": "192.168.1.1"
}
}
Use the filter block to extract fields:
filter {
json {
source => "message"
}
mutate {
add_field => { "user_id" => "%{[details][user]}" }
rename => { "[details][ip_address]" => "client_ip" }
}
}
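If you control the producing application, the same nested shape can also be emitted directly from Spring Boot with logstash-logback-encoder's StructuredArguments, so Logstash receives it as structured data rather than a raw string (a sketch; the class and variable names are illustrative):
import static net.logstash.logback.argument.StructuredArguments.keyValue;

import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoginService {

    private static final Logger log = LoggerFactory.getLogger(LoginService.class);

    public void recordLogin(String user, String ipAddress) {
        // keyValue("details", map) serializes the map as a nested JSON object,
        // producing the {"details": {"user": ..., "ip_address": ...}} shape above.
        Map<String, Object> details = Map.of("user", user, "ip_address", ipAddress);
        log.info("User logged in", keyValue("details", details));
    }
}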
Example 2. Redacting Sensitive Information
Mask personally identifiable information (PII):
filter {
mutate {
gsub => ["message", "\d{3}-\d{2}-\d{4}", "XXX-XX-XXXX"] # Mask SSNs
}
}
Effective parsing ensures logs are searchable and consistent across all downstream systems.
Testing the Pipeline End-to-End
Testing ensures your logging setup is reliable and fault-tolerant.
Step 1. Start Logstash
Run Logstash with your logstash.conf:
bin/logstash -f /path/to/logstash.conf
Step 2. Send Sample Logs
Manually send logs using nc or telnet to imitate log transmission from Spring Boot:
echo '{"@timestamp":"2025-06-13T12:00:00Z","level":"INFO","message":"This is a test log"}' | nc localhost 5044
Step 3. Verify Elasticsearch and Kibana
- View logs in Elasticsearch:
curl "http://localhost:9200/spring-logs-*/_search"
- Open Kibana (http://localhost:5601) and create an index pattern (spring-logs-*).
Step 4. Automate Testing in CI/CD
Use integration tests with tools like Testcontainers to validate your log pipeline during builds.
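For example, here is a hedged sketch of such a test using Testcontainers' GenericContainer: it starts Logstash with a test pipeline (assumed to be a classpath resource named logstash-test.conf containing the tcp/json input and stdout output from above), pushes one JSON event over TCP, and asserts that the event reaches Logstash's stdout. The image tag, resource name, and timeouts are assumptions to adapt to your setup:
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.time.Duration;

import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class LogstashPipelineIT {

    // Assumed image tag and pipeline location; adjust to your environment.
    @Container
    static GenericContainer<?> logstash =
            new GenericContainer<>(DockerImageName.parse("docker.elastic.co/logstash/logstash:8.13.4"))
                    .withCopyFileToContainer(
                            MountableFile.forClasspathResource("logstash-test.conf"),
                            "/usr/share/logstash/pipeline/logstash.conf")
                    .withExposedPorts(5044)
                    // The tcp input only starts listening once the pipeline is up.
                    .waitingFor(Wait.forListeningPort().withStartupTimeout(Duration.ofMinutes(3)));

    @Test
    void shipsJsonEventThroughThePipeline() throws Exception {
        String event = "{\"level\":\"INFO\",\"message\":\"ci pipeline test\"}\n";

        // Send one JSON line to the tcp input, just like the Logback appender would.
        try (Socket socket = new Socket(logstash.getHost(), logstash.getMappedPort(5044));
             OutputStream out = socket.getOutputStream()) {
            out.write(event.getBytes(StandardCharsets.UTF_8));
            out.flush();
        }

        // Crude wait for Logstash to flush the event to its stdout output;
        // a polling library such as Awaitility would be nicer in real tests.
        Thread.sleep(5_000);
        assertTrue(logstash.getLogs().contains("ci pipeline test"),
                "expected the test event in Logstash's stdout output");
    }
}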
Official Documentation Links
For more detailed guidance:
- Logback Documentation: https://logback.qos.ch/documentation.html
- Logstash Documentation: https://www.elastic.co/guide/en/logstash/current/index.html
These resources contain best practices for specific use cases.
Summary
Integrating Logstash with Spring Boot applications creates scalable, centralized logging pipelines that simplify debugging and monitoring. Here’s what we covered:
Key Takeaways:
- Configure Logback: Set up a TCP/UDP appender to stream structured JSON logs.
- Write Processing Pipelines: Use logstash.conf to parse, enrich, and format logs.
- Filter and Enrich: Extract fields or redact sensitive information for safe, actionable logs.
- Verify the Setup: Test pipelines to ensure logs flow seamlessly from Spring Boot to Elasticsearch.
Start building a robust log management system for your Spring Boot microservices to transform raw log data into actionable insights!