Kubernetes Logging from Files
Overview
Lumigo can collect logs from workloads using runtimes that are not currently supported by Lumigo's OpenTelemetry distributions. This provides an out-of-the-box solution for tracing and logging Kubernetes workloads, enabling users to monitor applications with minimal configuration. All logs are reported to a single Lumigo project, and each log line is tagged with only the source container, pod, and namespace. This method is fully supported: because logs are read from the files Kubernetes writes for each container, logs from all runtimes and logging libraries that write to standard output or standard error can be collected.
Fetching container logs via files
Workloads using runtimes not supported by the Lumigo OpenTelemetry distributions, such as Go or Rust, can still send logs to Lumigo via the log files that Kubernetes manages for containers on each node in the cluster. The Lumigo Kubernetes operator will automatically collect logs from those files and send them to Lumigo, once the following setting is applied when installing the operator:
helm upgrade -i lumigo lumigo/lumigo-operator \
--namespace lumigo-system \
--create-namespace \
--values values.yaml
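For example, a minimal values.yaml that enables log collection could look like the following sketch, using the same lumigoToken and clusterCollection.logs.enabled keys as the customization example further below (<your Lumigo token> is a placeholder):
# values.yaml - minimal sketch for enabling log collection from files
lumigoToken:
  value: <your Lumigo token>
clusterCollection:
  logs:
    enabled: true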
This will automatically collect logs from files under the /var/log/pods folder on each node and forward them to Lumigo (with the exception of the kube-system and lumigo-system namespaces). To further customize the workload patterns for log collection, the following settings can be provided:
echo "
lumigoToken:
value: <your Lumigo token>
clusterCollection:
logs:
enabled: true
include:
- namespacePattern: some-ns
podPattern: some-pod-*
containerPattern: some-container-*
exclude:
- containerPattern: some-other-container-*
" | helm upgrade -i lumigo lumigo/lumigo-operator --values -
In the example above, logs from all containers prefixed with some-container- running in pods prefixed with some-pod- (effectively, pods from a specific deployment) under the some-ns namespace will be collected, with the exception of logs from containers prefixed with some-other-container- in the aforementioned namespace and pods.
There are a few things to keep in mind with regard to these settings:
- include and exclude are arrays of glob patterns to include or exclude logs, where each pattern is a combination of namespacePattern, podPattern, and containerPattern. All are optional.
- If a pattern is not provided for one of the components, it is treated as a wildcard - e.g. an include rule specifying only podPattern will include all containers of the matching pods in all namespaces (see the sketch after this list).
- Each exclude value is checked against the paths matched by include, meaning that if a path is matched by both include and exclude, it will be excluded.
- By default, all logs from all pods in all namespaces are included, with no exclusions. Exceptions are the kube-system and lumigo-system namespaces, which are always added to the default or provided exclusion list.
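For instance, here is a sketch of a Helm values fragment whose include rule sets only podPattern (reusing the some-pod-* pattern from the example above); namespacePattern and containerPattern are omitted and therefore act as wildcards:
clusterCollection:
  logs:
    enabled: true
    include:
      # Only podPattern is set; the namespace and container components
      # default to wildcards, so all containers of matching pods in all
      # namespaces are included (minus the built-in exclusions).
      - podPattern: some-pod-*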
Opting out for specific resources
To prevent the Lumigo Kubernetes operator from injecting tracing into pods managed by a resource in a namespace that contains a Lumigo resource, add the lumigo.auto-trace label and set it to false:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-node
    lumigo.auto-trace: "false" # <-- No injection will take place
  name: hello-node
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: hello-node
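If the Deployment already exists, the same opt-out can be applied with kubectl; a sketch reusing the resource names from the example above:
# Add the opt-out label to the existing Deployment's metadata
kubectl label deployment hello-node -n my-namespace lumigo.auto-trace=false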
In the logs of the Lumigo Kubernetes operator, you will see a message like the following:
1.67534267851615e+09 DEBUG controller-runtime.webhook.webhooks wrote response {"webhook": "/v1alpha1/inject", "code": 200, "reason": "the resource has the 'lumigo.auto-trace' label set to 'false'; resource will not be mutated", "UID": "6d341941-c47b-4245-8814-1913cee6719f", "allowed": true}
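To find this message, you can read the operator's logs with kubectl. The sketch below assumes the operator was installed in the lumigo-system namespace, as in the install command above, and that its pods carry the common kubebuilder-style control-plane=controller-manager label - an assumption, so adjust the selector to your installation:
# The label selector is an assumption based on the common kubebuilder
# convention; verify it against your operator pods' actual labels.
kubectl logs -n lumigo-system -l control-plane=controller-manager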
Opting in for specific resources
Instead of monitoring an entire namespace using a Lumigo CR, you can selectively opt in individual resources by adding the lumigo.auto-trace
label set to true:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-node
    lumigo.auto-trace: "true" # <-- Enable tracing just for this resource
    lumigo.token-secret: "my-lumigo-secret" # <-- Optional, defaults to "lumigo-credentials"
    lumigo.token-key: "my-token-key" # <-- Optional, defaults to "token"
    lumigo.enable-traces: "true" # <-- Optional, controls whether traces are sent (defaults to true)
    lumigo.enable-logs: "false" # <-- Optional, controls whether logs are sent (defaults to false)
  name: hello-node
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
        - command:
            - /agnhost
            - netexec
            - --http-port=8080
          image: registry.k8s.io/e2e-test-images/agnhost:2.39
          name: agnhost
This approach allows you to:
- Be selective about which resources to monitor without having to create a Lumigo CR.
- Apply tracing to specific resources across different namespaces.
- Have more granular control over your instrumentation strategy.
When using resource labels for targeted tracing, you'll need a Kubernetes secret containing your Lumigo token in the same namespace. The following labels provide full control over the instrumentation:
- lumigo.auto-trace: Enables Lumigo instrumentation for the resource. By default, this is set to true.
- lumigo.token-secret: Specifies the name of the secret containing the Lumigo token. By default, this is lumigo-credentials.
- lumigo.token-key: Specifies the key in the secret where the token is stored. By default, this is token.
- lumigo.enable-traces: Controls whether traces are sent to Lumigo. By default, this is set to true.
- lumigo.enable-logs: Controls whether logs are sent to Lumigo. By default, this is set to false.
When a Lumigo CR exists in the namespace, it takes precedence over the lumigo.auto-trace label when the label is set to true. The label will only be respected when set to false, to opt out specific resources.
The secret referenced by the labels must exist in the same namespace as the labeled resource.
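For example, a secret matching the my-lumigo-secret name and my-token-key key from the Deployment example above could be created as follows (a sketch; <your Lumigo token> is a placeholder):
kubectl create secret generic my-lumigo-secret \
  --namespace my-namespace \
  --from-literal=my-token-key=<your Lumigo token>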
Events
Events are not supported for injection via resource labels. If you're interested in collecting events for that resource, you can do so by creating a Lumigo CR in the same namespace, which will automatically collect events for all resources in that namespace.
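As a reference, here is a minimal sketch of such a Lumigo CR, assuming the operator's v1alpha1 API and the default lumigo-credentials secret described above; check the operator's documentation for the authoritative schema:
apiVersion: operator.lumigo.io/v1alpha1
kind: Lumigo
metadata:
  name: lumigo
  namespace: my-namespace
spec:
  lumigoToken:
    secretRef:
      name: lumigo-credentials # secret in the same namespace holding the token
      key: token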