Logs with Logstash
Overview
Logstash is a powerful data processing pipeline that ingests, transforms, and forwards logs to various destinations. By integrating Logstash with Lumigo, you can enhance observability by efficiently collecting and analyzing logs in real time.
Prerequisites
- Logstash Installation: Ensure Logstash is installed in your environment. For installation instructions, refer to the Logstash documentation.
- Lumigo Account: Obtain a Lumigo account and ensure you have the necessary permissions to configure log ingestion.
- OTLP Output Plugin: Install the OpenTelemetry output plugin for Logstash using:
bin/logstash-plugin install logstash-output-opentelemetry
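You can confirm the plugin was installed by listing Logstash's plugins; if the install succeeded, its name appears in the output:
bin/logstash-plugin list | grep opentelemetry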
Installation Steps
Follow these steps to integrate Logstash with Lumigo.
- Configure Logstash to Collect Logs
Set up Logstash to collect logs from your applications by configuring the appropriate input plugins for your log sources. For example, to collect logs from files, use the file input plugin:
input {
  file {
    path => "/var/log/containers/*.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
This configuration instructs Logstash to monitor log files in the /var/log/containers/ directory, which is commonly used in Kubernetes environments. Adjust the path parameter to match the location of your log files.
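If your applications forward logs through a shipper such as Filebeat rather than writing to files, a beats input can be used instead. The following is a minimal sketch; the port is an assumption and should match your Beats configuration:
input {
  beats {
    port => 5044    # assumed port; match the port configured in your Beats shipper
  }
}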
- Transform Logs into OpenTelemetry Format
Since Logstash does not natively emit logs in the OpenTelemetry (OTLP) format, shape the events accordingly using a mutate filter before sending them to Lumigo:
filter {
  mutate {
    add_field => {
      "[resourceLogs][0][resource][attributes][0][key]" => "service.name"
      "[resourceLogs][0][resource][attributes][0][value][stringValue]" => "logstash-service"
      "[resourceLogs][0][scopeLogs][0][scope][name]" => "logstash"
      "[resourceLogs][0][scopeLogs][0][scope][version]" => "1.0"
      "[resourceLogs][0][scopeLogs][0][logRecords][0][timeUnixNano]" => "%{+UNIX}000000000"
      "[resourceLogs][0][scopeLogs][0][logRecords][0][severityNumber]" => "9"
      "[resourceLogs][0][scopeLogs][0][logRecords][0][severityText]" => "INFO"
      "[resourceLogs][0][scopeLogs][0][logRecords][0][body][stringValue]" => "%{message}"
      "[resourceLogs][0][scopeLogs][0][logRecords][0][attributes][0][key]" => "log.file"
      "[resourceLogs][0][scopeLogs][0][logRecords][0][attributes][0][value][stringValue]" => "%{path}"
    }
  }
}
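Optionally, once the OTLP fields are populated, you may want to drop the original flat fields so they are not forwarded alongside the structured payload. A minimal sketch, assuming the default fields produced by the file input:
filter {
  mutate {
    # Assumed field names from the file input; adjust to match your pipeline
    remove_field => ["message", "path", "host", "@version"]
  }
}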
- Send Logs to Lumigo Using OTLP Plugin
Use the opentelemetry output plugin to batch and forward logs to Lumigo:
output {
  opentelemetry {
    endpoint => "https://ga-otlp.lumigo-tracer-edge.golumigo.com"
    headers => { "Authorization" => "LumigoToken <your_lumigo_token>" }
    signal_type => "logs"
    compression => "gzip"
  }
}
Replace <your_lumigo_token> with your Lumigo API token. The opentelemetry plugin batches logs before sending them, improving efficiency and reducing network overhead.
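To keep the token out of the configuration file, you can rely on Logstash's environment-variable substitution in pipeline settings. A sketch, assuming the token is exported as LUMIGO_TOKEN in the Logstash process environment:
output {
  opentelemetry {
    endpoint => "https://ga-otlp.lumigo-tracer-edge.golumigo.com"
    # ${LUMIGO_TOKEN} is resolved by Logstash from the environment at startup
    headers => { "Authorization" => "LumigoToken ${LUMIGO_TOKEN}" }
    signal_type => "logs"
    compression => "gzip"
  }
}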
- Deploy Logstash Configuration
To deploy the configuration in a Kubernetes environment, store the pipeline in a ConfigMap and run Logstash as a DaemonSet that mounts it (a minimal DaemonSet sketch follows the ConfigMap below):
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: logging
data:
  logstash.conf: |
    input {
      file {
        path => "/var/log/containers/*.log"
        start_position => "beginning"
        sincedb_path => "/dev/null"
      }
    }
    filter {
      mutate {
        add_field => {
          "[resourceLogs][0][resource][attributes][0][key]" => "service.name"
          "[resourceLogs][0][resource][attributes][0][value][stringValue]" => "logstash-service"
          "[resourceLogs][0][scopeLogs][0][scope][name]" => "logstash"
          "[resourceLogs][0][scopeLogs][0][scope][version]" => "1.0"
          "[resourceLogs][0][scopeLogs][0][logRecords][0][timeUnixNano]" => "%{+UNIX}000000000"
          "[resourceLogs][0][scopeLogs][0][logRecords][0][severityNumber]" => "9"
          "[resourceLogs][0][scopeLogs][0][logRecords][0][severityText]" => "INFO"
          "[resourceLogs][0][scopeLogs][0][logRecords][0][body][stringValue]" => "%{message}"
          "[resourceLogs][0][scopeLogs][0][logRecords][0][attributes][0][key]" => "log.file"
          "[resourceLogs][0][scopeLogs][0][logRecords][0][attributes][0][value][stringValue]" => "%{path}"
        }
      }
    }
    output {
      opentelemetry {
        endpoint => "https://ga-otlp.lumigo-tracer-edge.golumigo.com"
        headers => { "Authorization" => "LumigoToken <your_lumigo_token>" }
        signal_type => "logs"
        compression => "gzip"
      }
    }
Ensure the Authorization header includes your Lumigo API token.
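The ConfigMap is then mounted into a Logstash DaemonSet so every node ships its container logs. The following is a minimal sketch rather than a production manifest; the image tag and host paths are assumptions to adapt to your cluster:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash
  namespace: logging
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:8.13.0   # assumed version; pin to your own
          volumeMounts:
            - name: pipeline
              mountPath: /usr/share/logstash/pipeline           # default pipeline directory
            - name: varlogcontainers
              mountPath: /var/log/containers
              readOnly: true
            - name: varlogpods
              mountPath: /var/log/pods                          # targets of the symlinks in /var/log/containers
              readOnly: true
      volumes:
        - name: pipeline
          configMap:
            name: logstash-config
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods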
- Verify Integration
After deployment, verify that logs are being sent to Lumigo:
- Logstash Logs: Check the Logstash logs to ensure there are no errors in processing or forwarding logs (see the command sketch after this list).
- Lumigo Dashboard: Log in to your Lumigo account and navigate to the logs section to confirm that logs are being ingested and displayed correctly.
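In a Kubernetes deployment such as the DaemonSet sketched above, the Logstash logs can be tailed with kubectl; the namespace and label are assumptions matching that sketch:
kubectl logs -n logging -l app=logstash --tail=100 -f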