This repository has been archived by the owner on Jul 6, 2022. It is now read-only.

logrus outputs all the logs to stderr by default #25

Closed
arschles opened this issue Oct 31, 2017 · 12 comments

@arschles (Contributor)

arschles commented Oct 31, 2017

From @bingosummer on September 8, 2017 7:20

Currently, all logs are output to stderr.

In CF app logs, it looks like:

2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="setting log level" logLevel=DEBUG
2017-09-08T06:38:28.86+0000 [APP/PROC/WEB/0] ERR time="2017-09-08T06:38:28Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"

Not sure what the logs look like in k8s.

From my understanding,

  1. Levels debug, info, and warn should go to stdout, and levels error, fatal, and panic to stderr. But logrus doesn't support this directly; it needs more code. FYI: How to separate Stdout and Stderr output? sirupsen/logrus#403
  2. No need to add a timestamp because CF already adds one. Not sure whether k8s adds one; if not, we should keep it.
@arschles (Contributor Author)

From @krancour on September 27, 2017 17:07

Interesting...

Most development has occurred outside of k8s; i.e., when I run it via `make run`, it's simply running in Docker via Docker Compose. There, log messages appear like so:

INFO[2017-09-27T16:54:43Z] setting log level                             logLevel=DEBUG
INFO[2017-09-27T16:54:43Z] API server is listening                       address="http://0.0.0.0:8080"

In k8s, however, they appear more similar (but not identical) to what you are observing in CF:

time="2017-09-27T17:00:48Z" level=info msg="setting log level" logLevel=INFO
time="2017-09-27T17:00:48Z" level=info msg="API server is listening" address="http://0.0.0.0:8080"

Note, however, that the fact that logrus wrote the logs to stderr didn't cause k8s to misclassify the messages as errors. I can, however, see how this is a problem for CF.

I know very little about CF, but even in the case of k8s, I am a little perplexed about how the format of the log messages got transformed so significantly from what we see with plain Docker.

I'm going to ask the ACS team's resident logging/monitoring expert for his insights into this.

Ultimately, however, if we can get log messages that aren't >= error sent to stdout, I am +1 on that.

@arschles (Contributor Author)

From @krancour on September 27, 2017 17:8

I'm going to put this in the v0.3.0 milestone for now. The focus of v0.3.0 is preparing to open source the repo, and I think we can put stability improvements under that umbrella. i.e. We'll be less feature-focused and more stability-focused.

@arschles (Contributor Author)

From @krancour on September 28, 2017 21:22

Ok...

What I've discovered is that the log.TextFormatter in logrus changes its format based on the availability of a TTY. When we `make run`, Docker Compose runs the broker container with a TTY, whilst in k8s and CF it runs without one. This explains the discrepancy between what I thought the log messages would look like and what they actually do.

As to the question of double timestamps-- k8s isn't adding any timestamps. When the following is seen in k8s, the timestamp was produced not by k8s, but by logrus:

time="2017-09-27T17:00:48Z" level=info msg="setting log level" logLevel=INFO

I am inferring that the double timestamps seen in CF are probably because CF (or more specifically whatever log aggregator it's using) is adding the timestamp (just as it's adding ERR based on a given message having originated from stderr instead of stdout).

Unfortunately, it seems CF will have to live with the double timestamps, because with k8s not adding timestamps, if we were to use a custom log formatter to strip away the timestamp from the logrus message, k8s logs would not have any timestamps in them at all. I'd rather see double timestamps on one platform than none on the other.

That being said, I wouldn't be totally opposed if you want to add a global config option that hints to the broker what orchestrator/platform it's running on. This way, it would be possible to intelligently select a different log formatter for CF and eliminate the double timestamps. I wouldn't consider this high priority, but I'd accept such a PR if it were offered.

Back to the issue of logrus inappropriately using stderr for log messages of lesser severity-- I agree that's totally wrong and should be addressed.

@arschles (Contributor Author)

From @krancour on September 28, 2017 21:38

I posted this comment. Let's see whether logrus is willing to fix this before we commit to writing a hook ourselves.

@arschles (Contributor Author)

From @bingosummer on October 9, 2017 5:38

@krancour Thanks for looking into it. I totally agree with your points. Let's wait for logrus' response.

@arschles (Contributor Author)

Looks like no response on that comment in 21 days - I think it's time to write the hook or change loggers (note: log ain't bad 😄 )

@arschles (Contributor Author)

From @krancour on October 20, 2017 2:42

@arschles my primary issue with the log package from Go's std lib is that it doesn't do leveled logging. There are lots of messages I want to see during development that I don't want to see in production. And while it wasn't a benefit I'd gone looking for specifically, logrus can do structured logging and I think that has actually turned out to be something that I would not now want to give up.

My vote is we look into writing the hook.

@arschles (Contributor Author)

From @krancour on October 20, 2017 2:42

And offering it upstream.

@arschles (Contributor Author)

@bingosummer would you be interested in writing this logrus hook?

@arschles (Contributor Author)

From @krancour on October 26, 2017 16:25

@bingosummer what if all log messages go to stdout instead? This will stop CF from allowing things that aren't errors to appear as if they are (based on the fact that they were written to stderr). Things that genuinely are errors will still be recognizable as such because the messages include a level=<level> k/v pair.

Would this be acceptable? Or would it be bothersome to CF that genuine errors aren't being written to stderr?

@arschles (Contributor Author)

From @krancour on October 26, 2017 18:10

From @seanmck:

The CF log stream stamps messages with the source channel, so it could be problematic if people are filtering on that:

https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html#types

@arschles (Contributor Author)

From @krancour on October 30, 2017 4:55

Deferring to v0.4.0
