Spring Boot + Kafka + ELK: End-to-End Event Logging System
Building a robust event logging system is critical for modern-day applications, especially when working with distributed microservices. Such systems enable you to track application events, monitor behaviors, debug issues, and gain insights into system performance. By combining Spring Boot, Kafka, and the ELK Stack (Elasticsearch, Logstash, Kibana), you can create an end-to-end event logging pipeline that is real-time, scalable, and reliable.
This blog outlines how to implement such a logging system, where Spring Boot produces logs/events, Kafka serves as a log aggregator, Elasticsearch stores logs for efficient querying, and Kibana offers powerful visualizations. With practical examples and clear explanations, you’ll be ready to set up this system for your Spring Boot microservices.
Table of Contents
- Why Combine Spring Boot, Kafka, and ELK?
- Pushing Logs or Events to Kafka from Spring Boot
- Consuming Kafka Logs with Logstash
- Storing Logs in Elasticsearch
- Searching and Visualizing Logs in Kibana
- Summary
Why Combine Spring Boot, Kafka, and ELK?
The combination of Spring Boot, Kafka, and the ELK stack provides a robust foundation for distributed logging and event tracking. Here’s why:
- Spring Boot is ideal for building microservices, and its logging frameworks (e.g., Logback) seamlessly integrate with external systems.
- Kafka is a distributed messaging platform that ensures reliable log transport and real-time event streaming.
- ELK Stack centralizes logs, making them searchable and analyzable. Elasticsearch indexes logs while Kibana creates visual dashboards.
By integrating these tools, you can achieve:
- Real-Time Log Flow: Logs/events flow from Spring Boot to Kafka and are available instantly in Elasticsearch.
- Scalability: Kafka manages high-throughput log ingestion, while the ELK stack handles storage and queries for large datasets.
- Enhanced Observability: Combined with trace IDs, the system enables you to monitor and debug microservice interactions effectively.
Now, let’s break down how each component fits into the pipeline.
Pushing Logs or Events to Kafka from Spring Boot
The first step is enabling Spring Boot to produce logs or application events and send them to Kafka. Kafka acts as a high-throughput buffer, ensuring logs are reliably transported downstream.
Step 1. Add Spring Kafka Dependency
Start by adding Kafka dependencies to your Spring Boot application:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
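No version is needed here because Spring Boot's dependency management supplies one. If you build with Gradle instead, the equivalent declaration would be:
implementation 'org.springframework.kafka:spring-kafka'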
Step 2. Configure Kafka Producer
Define Kafka properties in your application.properties file:
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
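If log delivery matters to you, you can optionally tighten the producer's delivery guarantees. A minimal sketch using standard Spring Boot properties:
# Wait for all in-sync replicas to acknowledge each record
spring.kafka.producer.acks=all
# Retry transient send failures before giving up
spring.kafka.producer.retries=3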
Step 3. Create a Kafka Producer Service
Write a service to send logs or custom events to Kafka:
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KafkaLogProducer {
    private static final String TOPIC = "spring-logs";
    private final KafkaTemplate<String, String> kafkaTemplate;

    // Constructor injection keeps the bean easy to test.
    public KafkaLogProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Sends a log/event message to the spring-logs topic (send() is asynchronous).
    public void sendLog(String message) {
        kafkaTemplate.send(TOPIC, message);
    }
}
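Any other bean can now inject the producer and publish events. A minimal sketch of a hypothetical caller (OrderController and its /orders endpoint are illustrative, not part of the setup above):
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller demonstrating how sendLog() might be used.
@RestController
public class OrderController {
    private final KafkaLogProducer logProducer;

    public OrderController(KafkaLogProducer logProducer) {
        this.logProducer = logProducer;
    }

    @PostMapping("/orders")
    public String createOrder(@RequestBody String order) {
        // Emit an application event alongside the business logic.
        logProducer.sendLog("{\"event\":\"ORDER_CREATED\",\"payload\":" + order + "}");
        return "order received";
    }
}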
Step 4. Integrate Logging with Kafka
Route Spring Boot’s Logback logs directly to Kafka.
Logback has no built-in Kafka appender, so this example uses the community logback-kafka-appender together with the LogstashEncoder from logstash-logback-encoder (both are separate dependencies; see the snippet after the config). Example logback-spring.xml configuration:
<configuration>
  <appender name="KAFKA" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <topic>spring-logs</topic>
    <!-- JSON-encode each event so Logstash can parse it downstream.
         The appender serializes via the encoder, so no key/value
         serializer properties are needed here. -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
  </appender>
  <root level="INFO">
    <appender-ref ref="KAFKA" />
  </root>
</configuration>
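A sketch of the extra Maven dependencies this configuration assumes (the versions shown are assumptions — check Maven Central for the latest):
<!-- Kafka appender for Logback (community library) -->
<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
<!-- JSON encoder used in the config above -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>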
Logs are now routed from the Spring Boot application to Kafka in real time.
Consuming Kafka Logs with Logstash
With Spring Boot sending logs to Kafka, the next step is to configure Logstash to consume these logs and process them before storing them in Elasticsearch.
Step 1. Install Logstash
Run Logstash via Docker or download and install it manually.
Step 2. Create a Logstash Configuration File
Create a Logstash pipeline configuration file (e.g., logstash.conf) to consume Kafka logs:
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics            => ["spring-logs"]
    codec             => "json"   # events arrive as JSON from the LogstashEncoder
  }
}
filter {
  mutate {
    # Enrich every event with static metadata
    add_field => {
      "environment" => "production"
      "application" => "spring-boot-app"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "spring-logs-%{+YYYY.MM.dd}"   # one index per day
  }
}
Step 3. Start Logstash
Run Logstash with the above configuration:
bin/logstash -f logstash.conf
Logstash now consumes logs from Kafka, enriches them with additional metadata, and sends them to Elasticsearch.
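While wiring the pipeline together, it can help to temporarily print every event to the console. A minimal sketch using Logstash's standard stdout output plugin (add it as an extra output block; events go to every configured output):
output {
  # Pretty-print each event for debugging; remove once the pipeline works.
  stdout { codec => rubydebug }
}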
Storing Logs in Elasticsearch
Elasticsearch is the heart of the ELK stack. It indexes log data, enabling ultra-fast searches and aggregations.
Step 1. Verify Elasticsearch Installation
Run Elasticsearch and ensure it is accessible at http://localhost:9200.
Step 2. Check Log Indices
Once Logstash sends logs to Elasticsearch, verify that the logs are indexed by listing the matching indices:
GET _cat/indices/spring-logs-*?v
The output shows each daily index (e.g., spring-logs-2025.06.13) together with its document count, confirming that logs are successfully stored.
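To inspect the documents themselves, run a search against the same pattern. A minimal sketch that fetches a few error events (this assumes the level field that LogstashEncoder emits by default; adjust the field name if your encoder differs):
GET /spring-logs-*/_search
{
  "size": 5,
  "query": { "match": { "level": "ERROR" } }
}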
Benefits of Using Elasticsearch for Logs:
- Full-Text Search: Elasticsearch indexes log data fields for lightning-fast querying.
- Scalability: Distributed architecture supports high-volume log data.
- Time-Based Indexing: Logstash-created daily indices (spring-logs-YYYY.MM.dd) improve query performance by limiting how much data each search scans.
Searching and Visualizing Logs in Kibana
Kibana provides a graphical interface to explore logs in Elasticsearch, set up dashboards, and monitor trends.
Step 1. Create an Index Pattern
- Open Kibana at http://localhost:5601.
- Go to Management > Data Views (Index Patterns).
- Add a pattern for spring-logs-*.
- Select @timestamp as the time field for time-based analysis.
Step 2. Use Kibana’s Discover Feature
Navigate to Discover and explore your logs. You can filter, search, and view specific fields such as traceId, serviceName, or log.level.
Example queries (Lucene syntax):
- Find errors: log.level:"ERROR"
- Search by service name: serviceName:"order-service"
- Filter by time range: @timestamp:["2025-06-13T08:00:00" TO "2025-06-13T12:00:00"]
Step 3. Build Dashboards
- Go to Visualize and create new visualizations.
- Example Dashboards:
- Error rates over time using a line chart.
- Top services emitting logs using a pie chart.
- Average response times using histograms.
These dashboards provide real-time insights into your system’s health and performance.
Summary
The integration of Spring Boot, Kafka, and the ELK stack delivers a modular and scalable event logging system. Here’s a recap of the process:
- Spring Boot Logs: Configure logging to send structured events to Kafka.
- Kafka Aggregation: Relay logs from multiple services to Kafka as a centralized transport layer.
- Logstash Processing: Consume and enrich logs from Kafka, then forward them to Elasticsearch.
- Elasticsearch Storage: Index logs for scalable search and efficient querying.
- Kibana Visualization: Search, analyze, and visualize logs in intuitive dashboards.
Implement this system today to monitor and debug your Spring Boot microservices with confidence, scalability, and precision!