Log Archiving
Overview
By setting up archive rules, you can configure Lumigo to forward some or all of your logs to a specified S3 bucket. Archiving allows you to store logs indefinitely, comply with regulatory requirements, and query archived logs using Athena.
How does it work?
- When a new log arrives in Lumigo, it is evaluated against all your archive rules.
- If a log matches an archive rule, it is sent to an S3 bucket according to the rule's configuration.
Important Note
If a log matches multiple archive rules, it will be stored in multiple S3 buckets.
Configure Archive Rule
1. Create an S3 bucket
Log in to your AWS console and create an S3 bucket to store your archived logs.
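If you prefer to script this step, here is a minimal sketch using boto3. The bucket name and region are placeholders; use your own values:

import boto3

# Placeholder values; replace with your own bucket name and region.
BUCKET_NAME = "my-lumigo-log-archive"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)

# Outside us-east-1, S3 requires an explicit location constraint.
s3.create_bucket(
    Bucket=BUCKET_NAME,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)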
2. Set permissions
Create a bucket policy using the following JSON, replacing <YOUR_LUMIGO_ROLE_ARN> with your Lumigo role ARN and <YOUR_S3_BUCKET_NAME> with the name of your archive bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<YOUR_LUMIGO_ROLE_ARN>"
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>/*"
      ]
    }
  ]
}
Note
- The PutObject permission is used for uploading logs to your S3 bucket
- Ensure that the resource value ends with /* so that the put permission applies to the objects within the bucket
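The policy can also be attached programmatically. A minimal sketch using boto3, where the bucket name and role ARN are placeholders:

import json

import boto3

# Placeholders; replace with your bucket name and your Lumigo role ARN.
BUCKET_NAME = "my-lumigo-log-archive"
LUMIGO_ROLE_ARN = "arn:aws:iam::123456789012:role/lumigo-role"

# Same policy document as above, built as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": LUMIGO_ROLE_ARN},
            "Action": "s3:PutObject",
            "Resource": [f"arn:aws:s3:::{BUCKET_NAME}/*"],
        }
    ],
}

# Attach the policy to the archive bucket.
boto3.client("s3").put_bucket_policy(Bucket=BUCKET_NAME, Policy=json.dumps(policy))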
3. Route logs to archive buckets using archive rules
This feature will soon be supported directly via the Lumigo platform. Until then, you can contact support via Intercom to create your archive rules.
How to use archived logs?
Each file in the bucket contains logs aggregated according to their timestamp.
- Every line in a log file is a JSON object representing a single log, with the fields shown below:
{"timestamp": "2024-05-22T09:15:27.337000", "request_id": "REQUEST_ID", "resource_id": "resource_id", "message": "the actual log"}
- The log files are zipped (see the sketch below for reading them programmatically)
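A minimal sketch for downloading and reading one archived file with boto3, assuming the files are gzip-compressed JSON lines; the bucket name and object key are placeholders:

import gzip
import json

import boto3

# Placeholders; replace with your bucket name and an actual object key.
BUCKET_NAME = "my-lumigo-log-archive"
KEY = "logs/2024-05-22/archive.json.gz"

s3 = boto3.client("s3")
obj = s3.get_object(Bucket=BUCKET_NAME, Key=KEY)

# Decompress the object, then parse each line as a standalone JSON log.
body = gzip.decompress(obj["Body"].read())
for line in body.decode("utf-8").splitlines():
    log = json.loads(line)
    print(log["timestamp"], log["message"])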
To query your logs, you can use Athena. When creating the table, make sure to use Table type Apache Hive and File format JSON.
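As an illustration, a table along these lines can be created over the archive bucket via boto3 and the Athena API. The table name, column types, and bucket names are assumptions based on the log format above; Athena reads the gzipped files transparently:

import boto3

# Placeholders; replace with your archive bucket and a query-results bucket.
ARCHIVE_BUCKET = "my-lumigo-log-archive"
RESULTS_BUCKET = "my-athena-query-results"

# Hive DDL for a table over the archived JSON-lines files. The timestamp
# column is backquoted because it is a reserved word in Athena DDL.
ddl = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS lumigo_archived_logs (
  `timestamp` string,
  request_id string,
  resource_id string,
  message string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://{ARCHIVE_BUCKET}/'
"""

athena = boto3.client("athena")
athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={"OutputLocation": f"s3://{RESULTS_BUCKET}/"},
)

Once the table exists, you can run standard SQL against it the same way, for example SELECT * FROM lumigo_archived_logs LIMIT 10.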