Log Correlation Across Spring Boot Microservices with ELK
Microservices architectures often face challenges such as debugging failing API calls or tracing request failures across multiple services. Without proper observability, pinpointing where and why a request failed becomes a daunting task. Log correlation solves this problem by connecting logs across services through a shared traceId. By integrating Spring Cloud Sleuth with the ELK stack (Elasticsearch, Logstash, Kibana), you can track every step of a request's lifecycle.
This guide explains how to add a traceId to logs, push logs to Elasticsearch, search by traceId in Kibana, and debug failed API calls with ease. Whether you're tackling production failures or optimizing systems, this setup will enhance your microservices observability.
Table of Contents
- What is Log Correlation?
- Adding traceId to Logs with Spring Cloud Sleuth
- Pushing Logs to Elasticsearch
- Searching Logs in Kibana Across Services by traceId
- Use Case: Debugging Failed API Calls
- Summary
What is Log Correlation?
Log correlation is the process of linking logs generated by different services using a unique identifier such as a traceId. It enables you to trace a single request's flow, even when it spans multiple microservices, helping you:
- Pinpoint Failures: Identify which service or component caused errors.
- Analyze Performance: Measure latencies and find bottlenecks across systems.
- Streamline Debugging: View all logs related to a request in one place for easier troubleshooting.
When combining Spring Boot with Spring Cloud Sleuth and the ELK stack, you get enhanced tracing, log indexing, and searchability.
Adding traceId to Logs with Spring Cloud Sleuth
Spring Cloud Sleuth makes distributed tracing simple by automatically propagating the traceId and spanId across microservices.
Step 1. Add Sleuth to Your Spring Boot Application
To start using Sleuth, include the following dependency in your pom.xml (Sleuth targets Spring Boot 2.x; in Spring Boot 3 its role is taken over by Micrometer Tracing):
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
Sleuth integrates seamlessly with Spring Boot and automatically generates a traceId for each incoming request, propagating it downstream through HTTP headers.
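To see the propagation in action, here is a minimal sketch of a calling service. The class names and the downstream URL are hypothetical; the important detail is that the RestTemplate is declared as a Spring bean, because Sleuth instruments RestTemplate beans and attaches the trace headers to outgoing requests:
package com.example;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@Configuration
class HttpClientConfig {

    // Sleuth only instruments RestTemplate instances that are Spring beans.
    @Bean
    RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@RestController
class OrderController {

    private final RestTemplate restTemplate;

    OrderController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // The traceId of the incoming request is carried on this outgoing call
    // through trace headers, so the downstream service logs the same traceId.
    @GetMapping("/orders/{id}")
    String getOrder(@PathVariable String id) {
        return restTemplate.getForObject("http://localhost:8081/inventory/" + id, String.class);
    }
}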
Step 2. Adding Trace Information to Logs
Sleuth automatically injects the traceId and spanId into Spring Boot's Mapped Diagnostic Context (MDC). Update your logback-spring.xml file to include them:
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - [%X{traceId}] [%X{spanId}] %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE" />
    </root>
</configuration>
Example Log Output:
2025-06-13 12:45:32 [http-nio-8080-exec-1] INFO  com.example.InventoryService - [ea7df456] [7abcde45] Inventory updated successfully.
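For reference, a hypothetical service method like the one below would produce that line; the traceId and spanId values come from the MDC that Sleuth populates, not from the application code:
package com.example;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

@Service
public class InventoryService {

    private static final Logger log = LoggerFactory.getLogger(InventoryService.class);

    public void updateInventory(String sku, int quantity) {
        // Business logic omitted; the [traceId] [spanId] markers in the output
        // are filled in by the %X{...} placeholders in the Logback pattern.
        log.info("Inventory updated successfully.");
    }
}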
Step 3. Required Configuration for Trace Propagation
Enable Sleuth globally by adding the following properties:
spring.sleuth.enabled=true
spring.sleuth.sampler.probability=1.0
A sampler probability of 1.0 traces every request, which is ideal for development and debugging environments.
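If you also export spans to a tracing backend such as Zipkin, you will typically lower the sampling rate in production, for example:
spring.sleuth.sampler.probability=0.1
The traceId still appears in the logs for every request; the probability only controls how many traces are sampled for export.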
With traceId and spanId logged, the next step is to centralize logs using Elasticsearch.
Pushing Logs to Elasticsearch
The ELK stack provides a centralized logging solution: Logstash receives and processes log events, Elasticsearch stores and indexes them, and Kibana makes them searchable.
Step 1. Add Logstash Appender to Send Logs
Spring Boot’s Logback can send logs directly to Logstash. Add the dependency:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.3</version>
</dependency>
Update the logback-spring.xml configuration to include a Logstash appender:
<configuration>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
    </root>
</configuration>
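In practice both appenders live in the same logback-spring.xml, and it helps to tag each service's events so they can be told apart in Kibana. A combined sketch using the encoder's customFields option (the service name order-service is only an example) might look like this:
<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - [%X{traceId}] [%X{spanId}] %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5044</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- Adds a constant "service" field to every JSON log event -->
            <customFields>{"service":"order-service"}</customFields>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="LOGSTASH" />
    </root>
</configuration>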
Step 2. Configure Logstash
Set up Logstash to process and forward logs to Elasticsearch:
input {
  tcp {
    port => 5044
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "microservice-logs-%{+yyyy.MM.dd}"
  }
}
Start Logstash to begin receiving logs:
bin/logstash -f logstash.conf
Step 3. Check Logs in Elasticsearch
Once logs are flowing, query Elasticsearch to confirm they are being indexed:
GET microservice-logs-2025.06.13/_search
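Because the LogstashEncoder writes MDC entries as top-level JSON fields, you can also query by traceId directly (assuming the field name was not remapped in your pipeline):
GET microservice-logs-2025.06.13/_search
{
    "query": {
        "match": { "traceId": "ea7df456" }
    }
}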
Now that logs with traceId are indexed, move to Kibana to search and visualize them.
Searching Logs in Kibana Across Services by traceId
Kibana’s powerful search and visualization capabilities make debugging via traceId seamless.
Step 1. Access Kibana and Create an Index Pattern
- Open Kibana at http://localhost:5601.
- Navigate to Management > Data Views (Index Patterns).
- Create a new index pattern, e.g., microservice-logs-*.
- Use @timestamp as the time field.
Step 2. Search Logs by traceId
To find logs related to a single request, use:
traceId:"ea7df456"
This query retrieves logs for all services that handled the request, enabling easy trace reconstruction.
Step 3. Save Frequently Used Queries
Save searches like "500 Errors by traceId" for quicker access:
traceId:"ea7df456" AND level:"ERROR"
Kibana filters and queries consolidate logs even in high-volume environments.
Use Case: Debugging Failed API Calls
Imagine a scenario where an API request fails, and you must trace the failure across multiple services.
Step 1. Simulate an API Failure
Service A calls Service B, but Service B throws a NullPointerException. The request fails with HTTP 500. Sleuth propagates the same traceId across both services.
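For illustration, a hypothetical handler in Service B that triggers the failure could look like the following; the uncaught NullPointerException is what Service A sees as an HTTP 500, while both services log the same traceId:
package com.example;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
class PaymentController {

    @GetMapping("/payments/{orderId}")
    String charge(@PathVariable String orderId) {
        String paymentMethod = null; // e.g. a lookup that unexpectedly returned nothing
        // Dereferencing the null value throws a NullPointerException, which
        // Spring converts into an HTTP 500 response for the calling service.
        return "Charged order " + orderId + " via " + paymentMethod.toUpperCase();
    }
}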
Step 2. Trace Error Logs in Kibana
Search for:
traceId:"ea7df456" AND level:"ERROR"
Example Logs from Service A:
2025-06-13 12:45:32 ERROR com.example.OrderService - [ea7df456] [8abcdf12] Failed to process order. Cause: HTTP 500 from Service B.
Example Logs from Service B:
2025-06-13 12:45:33 ERROR com.example.PaymentService - [ea7df456] [9abcdef7] NullPointerException encountered.
Using the traceId, you can confirm that the failure originated in Service B and see exactly what caused it.
Step 3. Visualize Error Trends
Create a Kibana dashboard to group errors by service and visualize counts over time:
- X-Axis: @timestamp
- Y-Axis: Count of logs
- Filters: level:"ERROR"
Regularly monitoring such dashboards helps in proactive troubleshooting.
Summary
Log correlation across Spring Boot microservices with ELK simplifies distributed debugging by linking logs via traceId. Here’s what we covered:
- Spring Cloud Sleuth: Automatically propagates traceId and spanId.
- Elasticsearch Integration: Centralizes logs with traceId and spanId for searching.
- Kibana Search: Enables querying and visualizing logs by traceId across services.
- Debugging Use Case: Trace failed API calls through linked logs for faster resolution.
Implementing this setup arms your teams with the tools to effectively monitor, debug, and optimize your microservices. Start enhancing observability today for a more resilient system!