Log Level Tuning in Spring Boot + ELK Stack for Production
Effective log management is a key factor in maintaining high-performing and reliable Spring Boot applications in production. However, the challenge lies in sifting through vast amounts of log data without compromising system performance or filling Elasticsearch indices with unnecessary clutter. This is where log level tuning shines, offering precise control over the information captured in your logs.
This blog explores strategies for optimizing log levels when integrating Spring Boot with the ELK stack (Elasticsearch, Logstash, Kibana). Topics covered include dynamic log level changes using Spring Boot’s /actuator/loggers endpoint, filtering logs with Logstash, reducing noise in Kibana, and controlling log size. By the end, you’ll be equipped with actionable techniques to fine-tune your logging setup for a streamlined production environment.
Table of Contents
- Why Log Level Tuning is Crucial for Production
- Dynamic Log Level Changes Using /actuator/loggers
- Filtering Logs in Logstash
- Avoiding Noise in Kibana
- Controlling Log Size
- Summary
Why Log Level Tuning is Crucial for Production

Logging is a powerful tool in both development and production stages, but uncontrolled logging can easily overwhelm your system. Overly verbose logs consume storage, degrade ELK stack performance, and make it difficult to find critical insights. Conversely, inadequate logging could mask vital information during incidents or debugging sessions.
Benefits of Log Level Tuning
- Resource Efficiency: Reduces storage and processing overhead by capturing only relevant log data.
- Faster Issue Resolution: Helps isolate errors by cutting through low-priority noise.
- Better Observability: Balances log verbosity across components, ensuring clear insights without clutter.
- Real-Time Adaptability: Enables quick reconfiguration of log levels based on system behavior.
Adopting a dynamic and deliberate approach to log level tuning ensures your production environment stays optimized and responsive.
Dynamic Log Level Changes Using /actuator/loggers
Spring Boot provides a built-in feature that allows you to adjust logging levels dynamically at runtime using the Actuator’s /loggers endpoint. This eliminates the need to restart the application every time you tweak logging settings.
Step 1. Enable Spring Boot Actuator
Add the Spring Boot Actuator dependency to your project:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Step 2. Enable Loggers Endpoint
Activate the /loggers endpoint in your application’s application.properties or application.yml:
management.endpoints.web.exposure.include=loggers
management.endpoint.loggers.cache.time-to-live=0s
Step 3. List Current Loggers
Use the /actuator/loggers endpoint to list all active loggers and their levels:
GET /actuator/loggers
Example Output:
{
  "levels": ["OFF", "ERROR", "WARN", "INFO", "DEBUG", "TRACE"],
  "loggers": {
    "org.springframework": {
      "configuredLevel": "INFO",
      "effectiveLevel": "INFO"
    },
    "com.example.myapp": {
      "configuredLevel": null,
      "effectiveLevel": "DEBUG"
    }
  }
}
Step 4. Change Log Level Dynamically
Switch the log level of a specific package or logger without restarting your application:
POST /actuator/loggers/com.example.myapp
Content-Type: application/json

{
  "configuredLevel": "ERROR"
}
This targets the com.example.myapp package and suppresses all logs below the ERROR level, while retaining the flexibility to adjust again as needed.
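Once the incident is over, you can roll the logger back to its configuration-file default by posting a null level. A sketch against the same com.example.myapp logger used above:

POST /actuator/loggers/com.example.myapp
Content-Type: application/json

{
  "configuredLevel": null
}

Spring Boot then clears the explicit setting and the logger falls back to the level it inherits from its parent, as reflected in effectiveLevel.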
Use Cases for Dynamic Log Level Changes
- During Live Incidents: Temporarily enable DEBUG logs for detailed troubleshooting.
- Performance Monitoring: Switch to TRACE levels on suspected bottleneck components.
Dynamic log level management improves agility and reduces restart-induced downtime.
Filtering Logs in Logstash
Logstash plays a central role in the ELK stack, acting as a log processor and shipper. You can leverage its filtering capabilities to drop irrelevant logs, format data, or route logs to different Elasticsearch indices.
Step 1. Define Input and Output in Logstash
Create a basic Logstash configuration (logstash.conf):
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "application-logs-%{+yyyy.MM.dd}"
  }
}
Step 2. Add Filters to Exclude Noisy Logs
Use filter directives to suppress noisy log entries:
filter {
  # [log][level] assumes ECS-style nested fields as shipped by Filebeat;
  # adjust the field reference to match your own log layout
  if [log][level] == "DEBUG" and [environment] == "production" {
    drop { }
  }
}
This configuration drops all DEBUG logs from production environments, reducing noise and storage usage.
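If you would rather thin out low-priority logs than discard them entirely, the drop filter’s percentage option keeps a random sample. A sketch (field references again assume ECS-style nested fields):

filter {
  if [log][level] == "INFO" and [environment] == "production" {
    drop {
      percentage => 90   # drop roughly 90% of production INFO logs, keep ~10%
    }
  }
}

Sampling preserves a statistically useful trace of routine activity while still cutting the bulk of the volume.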
Step 3. Enrich Logs for Better Traceability
Add metadata fields to make logs more informative:
filter {
  mutate {
    add_field => {
      "application" => "spring-boot-app"
      "environment" => "production"
    }
  }
}
By filtering and enriching logs, Logstash ensures only valuable and well-organized data flows into Elasticsearch.
Avoiding Noise in Kibana
Kibana provides powerful tools to surface actionable data, but excessive logging can result in cluttered visualizations and slower queries. Reducing noise improves both performance and usability.
Strategy 1. Utilize Index Patterns
Separate logs based on application components or log types:
- application-logs-*: General application logs.
- error-logs-*: Priority logs for errors and exceptions.
This approach allows you to focus on specific log types during analysis.
Strategy 2. Use Query Filters
Leverage Kibana’s filter capabilities to exclude noisy logs:
- Exclude Debug Logs: NOT log.level:"DEBUG"
- Focus on Errors: log.level:"ERROR"
- Isolate Services: serviceName:"gateway-service"
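These conditions can also be combined into a single KQL query. For example, to surface only errors from one service (field names follow the examples above):

log.level:"ERROR" and serviceName:"gateway-service"

Saving such queries as pinned filters lets the whole team reuse them across discover views and dashboards.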
Strategy 3. Dashboards for Key Metrics
Create dedicated dashboards for critical insights:
- Error Trends: Show error frequency over time.
- Slow Endpoints: Highlight APIs with high response times.
- Deployment Monitoring: Track anomalies during release windows.
Avoiding noise in Kibana ensures valuable insights are easy to find and act upon.
Controlling Log Size
Uncontrolled log volume can strain storage resources and slow down Elasticsearch queries. Here are strategies to limit log sizes effectively.
Use Logback’s Size Policies
Limit Spring Boot log sizes by configuring Logback for log rotation:
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
    <fileNamePattern>logs/debug-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
    <maxFileSize>10MB</maxFileSize>
    <maxHistory>30</maxHistory>
</rollingPolicy>
This configuration caps each log file at 10MB and retains 30 days’ worth of rolled logs.
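Note that the rolling policy only takes effect inside a file appender. A minimal logback-spring.xml sketch wrapping it (file names and pattern are illustrative; totalSizeCap adds an overall disk budget on top of the per-file cap):

<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <maxFileSize>10MB</maxFileSize>
            <maxHistory>30</maxHistory>
            <totalSizeCap>1GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{ISO8601} %-5level [%thread] %logger - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>
</configuration>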
Index Lifecycle Policies in Elasticsearch
Use Elasticsearch to automatically delete older indices:
PUT _ilm/policy/log-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
This retention policy ensures logs older than 30 days are removed.
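On its own, the policy does nothing until it is attached to your log indices. One common way is via an index template (template name and index pattern here are illustrative):

PUT _index_template/application-logs
{
  "index_patterns": ["application-logs-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "log-retention"
    }
  }
}

New indices matching the pattern then pick up the retention policy automatically.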
Drop Irrelevant Logs
Filter out excessive debug details at the application source:
logging:
  level:
    org.hibernate.SQL: WARN
These techniques collectively reduce storage overhead while retaining critical information.
Summary
Log level tuning in Spring Boot + ELK Stack for production environments is a fine balance between visibility and efficiency. Here’s a recap:
- Dynamic Log Level Changes: Adjust levels on the fly using the /actuator/loggers endpoint.
- Logstash Filtering: Drop irrelevant logs and enrich entries with meaningful metadata.
- Noise Control in Kibana: Use index patterns, filters, and dashboards to focus on actionable data.
- Log Size Management: Rotate logs, set retention policies, and filter unneeded details to conserve resources.
By following these strategies, you can create an efficient, insightful logging ecosystem that keeps your production environment optimized and your logs actionable.