Hey there 👋🏻,
We noticed that the linkerd-proxy container emits odd values in the timestamp field. The value is not a timestamp in the usual sense; it looks more like the elapsed time since the container started. See the example log below.
This is misleading and confusing, especially when you use a central logging system where this field is expected to hold a date or Unix timestamp 😢.
{"timestamp":"[ 10132.943961s]","level":"INFO","fields":{"message":"Continuing after timeout: linkerd_proxy_http::version::Version protocol detection timed out after 10s"},"target":"linkerd_detect","spans":[{"name":"outbound"},{"addr":"10.163.70.5:9000","name":"proxy"}],"threadId":"ThreadId(2)"}
Please take a look at the value of the timestamp. Not only is it not a date/timestamp, but it is also, for some reason, wrapped in brackets as if it were an array 🤦🏻.
Output of linkerd check -o short
N/A
Environment
K8s Version 1.24.7, running on AWS EKS.
Release 2.210.0 (85db2fc) by linkerd on 2023-09-21T21:24:58Z
Possible solution
Change the timestamp value to something more common, for example the ISO 8601 format, like 2024-04-25T08:43:41Z.
This format is already used by the linkerd-init container.
Additional context
We deploy Linkerd via the Helm chart.
We use the JSON logging format.
Would you like to work on fixing this bug?
no
Hey @Rohlik, so sorry for the delay here! Technically, the timestamp logged is the number of seconds since the start of the proxy, both because that's faster to generate and because, for many proxy issues, the time-since-start is actually more useful than anything else.
This is not uncommon in Kubernetes logs, and in fact kubectl logs has the --timestamps flag for exactly this purpose: it adds ISO8601 timestamps to each line. Setting up your aggregator to use that is likely the easiest path forward.
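For anyone who needs absolute times in the meantime, the relative value can be rebased onto the container's start time, which the Kubernetes API exposes in the pod status. Below is a minimal Python sketch of that idea; the helper name and the hard-coded start time are illustrative, and the result is only as accurate as the start time you feed in:

```python
# Illustrative sketch, not part of linkerd: rebase the proxy's uptime-style
# "timestamp" field onto wall-clock time using the container's start time.
import json
import re
from datetime import datetime, timedelta, timezone

def rebase_timestamp(log_line: str, started_at: datetime) -> datetime:
    """Convert a value like "[ 10132.943961s]" into an absolute datetime."""
    record = json.loads(log_line)
    match = re.search(r"([0-9.]+)s", record["timestamp"])
    if match is None:
        raise ValueError(f"unexpected timestamp: {record['timestamp']!r}")
    return started_at + timedelta(seconds=float(match.group(1)))

# Hypothetical start time; in practice, read it from the pod's
# .status.containerStatuses[].state.running.startedAt field.
started_at = datetime(2024, 5, 4, 5, 0, 0, tzinfo=timezone.utc)
line = ('{"timestamp":"[ 10132.943961s]","level":"INFO",'
        '"fields":{"message":"Continuing after timeout"}}')
print(rebase_timestamp(line, started_at).isoformat())
# -> 2024-05-04T07:48:52.943961+00:00
```

Note that the proxy process may start slightly after the container itself, so treat the rebased time as approximate.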
Thank you for your time and your reply @kflynn. I appreciate the background you provided.
The biggest issue with using --timestamps is that the output looks like the line below: a single log line mixing two formats 😢.
Many logging tools, such as Fluentd, will then produce errors like "message is not a valid JSON".
2024-05-04T07:05:11.571258714Z {"timestamp":"[950763.773856s]","level":"INFO","fields":{"message":"Connection closed","error":"direct connections must be mutually authenticated","client.addr":"10.10.10.10:55668"},"target":"linkerd_app_core::serve","spans":[{"name":"inbound"}],"threadId":"ThreadId(3)"}
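A pre-processing step can recover both parts of such a mixed line before it reaches the JSON parser. Here is a minimal sketch of that step, assuming Python is available somewhere in the pipeline (the function name is illustrative, not part of Fluentd or kubectl):

```python
# Illustrative sketch: split the RFC 3339 prefix added by
# `kubectl logs --timestamps` (or the kubelet) from the JSON payload.
import json
from datetime import datetime

def split_prefixed_line(line: str) -> tuple[datetime, dict]:
    prefix, _, payload = line.partition(" ")
    # The prefix carries nanosecond precision and a trailing 'Z'; trim it to
    # microseconds so datetime.fromisoformat can parse it on older Pythons.
    base, frac = prefix.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{base}.{frac[:6]}+00:00"), json.loads(payload)

line = ('2024-05-04T07:05:11.571258714Z '
        '{"timestamp":"[950763.773856s]","level":"INFO",'
        '"fields":{"message":"Connection closed"}}')
when, record = split_prefixed_line(line)
print(when.isoformat(), record["level"], record["fields"]["message"])
```

In Fluentd specifically, the equivalent would be a parser that captures the leading timestamp and hands the remainder to a JSON parser, but that configuration is tool-specific.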