Sending Spring Boot Logs to Elasticsearch Directly
Efficient log management is crucial for monitoring, debugging, and analyzing modern microservices. While traditional setups route logs through Logstash for preprocessing before sending them to Elasticsearch, this additional layer can introduce latency and complexity. Fortunately, you can bypass Logstash entirely by directly sending Spring Boot logs to Elasticsearch using tools like Filebeat or an HTTP endpoint.
This guide explores how to skip Logstash while maintaining a scalable and robust logging system. You’ll learn about Filebeat integration, configuring Logback for JSON logs, setting up index mappings, and managing index rotation in Elasticsearch.
Table of Contents
- Why Skip Logstash?
- Using Filebeat or HTTP to Send Logs Directly to Elasticsearch
- Configuring Logback with logstash-logback-encoder
- Setting Up Index Naming and Log Mapping
- Best Practices for Index Rotation and Management
- Official Documentation Links
- Summary
Why Skip Logstash?
Logstash is a powerful tool for parsing, enriching, and transforming logs, but it may not always be the best fit depending on your architecture and requirements.
Benefits of Directly Logging to Elasticsearch:
- Reduced Latency: By skipping Logstash, you minimize the time it takes for logs to flow into Elasticsearch.
- Simplicity: Fewer moving parts mean fewer chances of configuration errors or performance bottlenecks.
- Lower Resource Overhead: Logstash is resource-intensive, and eliminating it frees up CPU and memory for other tasks.
- Ease of Maintenance: Fewer tools in the pipeline mean simplified updates, configurations, and monitoring.
While direct logging removes some of Logstash’s preprocessing capabilities, these can often be replicated through Elasticsearch features or lightweight log shipper tools like Filebeat.
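For example, light enrichment that Logstash would normally perform can be handled by an Elasticsearch ingest pipeline. The pipeline name and fields below are illustrative, not from any standard setup:

```
PUT _ingest/pipeline/spring-logs-pipeline
{
  "description": "Light preprocessing that would otherwise run in Logstash",
  "processors": [
    { "set": { "field": "pipeline_version", "value": "1" } },
    { "lowercase": { "field": "level", "ignore_failure": true } }
  ]
}
```

A pipeline like this can be applied automatically by setting `index.default_pipeline` on the target indices.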
Ideal Use Cases:
- Applications with straightforward logging needs.
- Systems already capturing logs in structured formats like JSON.
- Resource-constrained environments where running Logstash is costly.
With this context established, let’s explore the two primary methods for direct log shipping.
Using Filebeat or HTTP to Send Logs Directly to Elasticsearch
Filebeat, a lightweight log shipper from Elastic, is an efficient way of forwarding logs to Elasticsearch without Logstash. Alternatively, if your application can emit logs via HTTP, you can send them directly to Elasticsearch’s REST API.
Option 1. Using Filebeat
Step 1. Install and Configure Filebeat
Download and install Filebeat on your application server by following the installation instructions for your platform.
Step 2. Configure Filebeat Inputs
Specify the location of Spring Boot log files in the filebeat.yml configuration file:
```yaml
filebeat.inputs:
  - type: log
    paths:
      - /path/to/spring-boot-app.log
    fields:
      application_name: my-spring-boot-app
      environment: production
    fields_under_root: true
```

Note that the log input type is deprecated as of Filebeat 7.16 in favor of filestream; the configuration above still works on older versions, but new deployments should prefer filestream.
Step 3. Configure the Elasticsearch Output
Set up Filebeat to send the log data directly to Elasticsearch:
```yaml
output.elasticsearch:
  hosts:
    - "http://localhost:9200"
  index: "spring-logs-%{[application_name]}-%{+yyyy.MM.dd}"
```

Be aware that when you override the default index name, Filebeat also requires setup.template.name and setup.template.pattern to be set (and, on recent versions, setup.ilm.enabled: false), or it will refuse to start.
Step 4. Start Filebeat
Run Filebeat as a background service (on systemd-based systems):

```bash
sudo systemctl enable filebeat
sudo systemctl start filebeat
```
Option 2. Sending Logs via HTTP Endpoint
If you prefer programmatic logging, Spring Boot can emit logs directly to Elasticsearch using HTTP REST calls.
Example Spring Boot HTTP Integration:
Use an HTTP client (e.g., Spring's RestTemplate) to send logs to Elasticsearch. Note that Elasticsearch requires an explicit application/json Content-Type header and rejects plain-text requests:

```java
RestTemplate restTemplate = new RestTemplate();

// Elasticsearch returns an error for requests without a JSON content type
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);

String logJson = "{"
        + "\"@timestamp\":\"2025-06-13T12:00:00Z\","
        + "\"level\":\"INFO\","
        + "\"message\":\"Application started successfully\""
        + "}";

restTemplate.postForEntity(
        "http://localhost:9200/spring-logs/_doc",
        new HttpEntity<>(logJson, headers),
        String.class);
```
HTTP-based integration is ideal for lightweight setups or highly customized applications, but Filebeat offers more configuration flexibility and production-readiness.
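One detail worth getting right in hand-rolled HTTP logging is the @timestamp format: Elasticsearch's default date mapping expects strict ISO-8601, and a value without a time zone is easy to misinterpret. A minimal sketch using java.time (the class and method names here are ours, for illustration):

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;

public class LogTimestamps {

    // ISO_INSTANT always renders UTC with a trailing 'Z',
    // e.g. 2025-06-13T12:00:00Z, which Elasticsearch's "date"
    // mapping parses unambiguously.
    public static String isoTimestamp(Instant instant) {
        return DateTimeFormatter.ISO_INSTANT.format(instant);
    }

    public static void main(String[] args) {
        System.out.println(isoTimestamp(Instant.ofEpochSecond(1_700_000_000L)));
    }
}
```

Building the timestamp this way avoids the ambiguity of a zone-less literal like "2025-06-13T12:00:00".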
Configuring Logback with logstash-logback-encoder
Spring Boot uses Logback by default, making it easy to produce and forward JSON logs to Elasticsearch.
Step 1. Add Required Dependencies
To enable JSON-structured logging, include the logstash-logback-encoder dependency:
```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.3</version>
</dependency>
```
Step 2. Define a Logback Appender
Configure logback-spring.xml in the resources folder to output structured logs.
Example Configuration:
Note that logstash-logback-encoder does not ship an HTTP appender for Elasticsearch (its network appenders are TCP/UDP appenders aimed at Logstash). The robust pattern for direct shipping is to write JSON to a rolling file with LogstashEncoder and let Filebeat (Option 1 above) forward it:

```xml
<configuration>
    <appender name="JSON_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/path/to/spring-boot-app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/path/to/spring-boot-app.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>
    <root level="INFO">
        <appender-ref ref="JSON_FILE" />
    </root>
</configuration>
```

This appender writes one JSON document per log event, which Filebeat can ship straight to Elasticsearch without any intermediate log processing.
Step 3. Use Custom Fields
Add context-specific fields to logs for better traceability:
```xml
<encoder class="net.logstash.logback.encoder.LogstashEncoder">
    <customFields>{"application":"spring-boot-app","environment":"production"}</customFields>
</encoder>
```
Sending logs in a structured JSON format ensures compatibility with Elasticsearch’s indexing.
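For reference, a single LogstashEncoder event looks roughly like this; the exact set of fields varies by encoder version and configuration:

```json
{
  "@timestamp": "2025-06-13T12:00:00.000Z",
  "@version": "1",
  "message": "Application started successfully",
  "logger_name": "com.example.Application",
  "thread_name": "main",
  "level": "INFO",
  "application": "spring-boot-app",
  "environment": "production"
}
```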
Setting Up Index Naming and Log Mapping
Effective log organization in Elasticsearch depends on structured index naming and predefined mappings.
Step 1. Dynamic Index Creation
Use templates to create indices dynamically:
```
PUT _index_template/spring-logs-template
{
  "index_patterns": ["spring-logs-*"],
  "template": {
    "mappings": {
      "properties": {
        "@timestamp":  { "type": "date" },
        "level":       { "type": "keyword" },
        "application": { "type": "keyword" },
        "message":     { "type": "text" }
      }
    }
  }
}
```
Step 2. Use Index Aliases
Index aliases provide an abstraction over actual indices, simplifying queries:
```
POST /_aliases
{
  "actions": [
    { "add": { "index": "spring-logs-2025.06.13", "alias": "current-logs" } }
  ]
}
```
Aliases make rotating between active and historical log indices seamless.
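Because all actions in a single _aliases call are applied atomically, rotating the alias to a new day's index leaves no window where current-logs points at nothing:

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": "spring-logs-2025.06.12", "alias": "current-logs" } },
    { "add":    { "index": "spring-logs-2025.06.13", "alias": "current-logs" } }
  ]
}
```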
Best Practices for Index Rotation and Management
Elasticsearch can quickly run out of storage if logs aren’t rotated or managed properly. Follow these best practices for effective log retention.
1. Set Index Lifecycle Policies
Use Index Lifecycle Management (ILM) to automate index aging and deletion:
```
PUT _ilm/policy/spring-logs-policy
{
  "policy": {
    "phases": {
      "hot": { "actions": {} },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```
2. Use Rollovers
Roll over indices based on size or age thresholds. The rollover target (spring-logs here) must be an alias, or a data stream, that points to the current write index, not a concrete index name:

```
POST /spring-logs/_rollover
{
  "conditions": {
    "max_size": "50gb",
    "max_age": "7d"
  }
}
```
3. Archive Historical Data
Periodically move old logs to low-cost storage like AWS S3 or Elasticsearch snapshot repositories.
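As a sketch, registering a shared-filesystem snapshot repository and taking a manual snapshot looks like this. The repository name and path are placeholders, and an fs repository path must be listed under path.repo on every Elasticsearch node:

```
PUT _snapshot/log_archive
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/log_archive" }
}

PUT _snapshot/log_archive/spring-logs-2025.06?wait_for_completion=true
{
  "indices": "spring-logs-2025.06.*"
}
```

For S3, the repository-s3 plugin provides an equivalent "s3" repository type.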
4. Monitor Performance
Connect Elasticsearch to monitoring tools like Kibana or Grafana to keep an eye on index sizes, health, and search latencies.
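Even without a dashboard, the _cat APIs give a quick ad-hoc view of index size and health:

```
GET _cat/indices/spring-logs-*?v&h=index,health,docs.count,store.size&s=index
```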
Official Documentation Links
- Elasticsearch Documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
- Filebeat Documentation: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
- Logback Documentation: https://logback.qos.ch/documentation.html
Summary
Skipping Logstash and directly sending Spring Boot logs to Elasticsearch simplifies the logging pipeline while maintaining flexibility and scalability. By using tools like Filebeat or HTTP-based logging, along with thoughtful index management, you can build a robust logging system tailored for Spring applications.
Key Takeaways:
- Filebeat: A lightweight log shipper ideal for forwarding Spring Boot logs to Elasticsearch.
- Logback Configuration: Use logstash-logback-encoder for JSON-structured logging output.
- Index Organization: Set up templates, mappings, and lifecycle rules for efficient log storage.
- Rotation Best Practices: Regularly archive and rotate indices for better performance and cost savings.
Start streamlining your logging setup and leverage the power of Elasticsearch to monitor and scale your Spring Boot microservices seamlessly!