logging framework #6284

Open
cre4ture opened this issue Apr 28, 2024 · 14 comments

Comments

@cre4ture
Contributor

Should we introduce a framework for logging debugging information in uutils?

In my eyes it would make debugging issues much easier in general, but especially on CI.

There should then also be an option to enable logging during CI runs and automatically provide the log output after a failed test.

Technically, I think we should use Rust's de-facto logging facade (the log crate) with a logging backend, e.g. "log4rs".
https://github.com/rust-lang/log?tab=readme-ov-file#log
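
A rough sketch of how this could look, assuming log as the facade and log4rs as the backend (the config file name and the messages are illustrative, not from any existing code):

```rust
use log::{debug, info};

fn main() {
    // Initialize the log4rs backend from a YAML config file. The facade
    // calls below stay the same regardless of which backend is plugged in.
    log4rs::init_file("log4rs.yaml", Default::default()).unwrap();

    info!("utility started");
    debug!("raw input from the OS API: {:?}", std::env::args());
}
```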

I have already started experimenting.

What do you think?

@sylvestre
Sponsor Contributor

What would you log?

@cre4ture
Contributor Author

What would you log?

I think we have all come across a use case where we put an (e)println!(...) deep inside the code of a utility that doesn't pass a test.
I know that attaching a debugger and setting a breakpoint works much better, but sometimes it's just not easy to attach a debugger, e.g.:

  1. Cross-compilation use cases where the debugger doesn't run on the target.
  2. CI use cases for issues that are only reproducible in CI, or that appear on an operating system you don't have a VM for yet.

Putting in an (e)println!(...) often helps in these cases, at least in my experience.

  • The issue with (e)println!(...) is that it by itself makes some tests fail, e.g. when a test expects stderr to have specific content or to be empty.
  • Additionally, some tests that expect the tested utility to succeed or fail only check the exit_status and don't even print the error output before failing the unit test.

For this purpose, and maybe for others that are currently less important, I think it would make sense to use a logging framework.
It would provide an additional channel for information, e.g. a file that the test framework always reads at the end of a test.

@tertsdiepraam
Member

tertsdiepraam commented Apr 28, 2024

For the purpose you describe, I often have very specific things that I want to log to fix the issue. I tend to use dbg! a lot, and when I'm done I search for and delete them all. That seems to be enough for me. Being able to temporarily turn logging on and off sounds useful, though. Maybe there's a way that we can have logging but the logging is only a dev-dependency? So that none of the logging is actually left when we merge a PR.

@cre4ture
Contributor Author

So that none of the logging is actually left when we merge a PR.

Would it really be necessary to remove all the logs again? I could imagine that at least some of the logs don't hurt readability; maybe they even improve it.
There is only one potential issue I see with keeping some of the logs: the binary size will increase due to the log strings and the additional dependencies.
To overcome this, we could disable the logs at compile time by default (e.g. with a macro?). To enable them, one would just need to pass an additional feature flag on the cargo command line, as sketched below.
What do you think about this idea?
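
A minimal sketch of that compile-time-gating idea (the uu_debug_log name and the debug-log feature are hypothetical, not from any existing PR):

```rust
// With the feature enabled, the macro forwards to a real sink; with it
// disabled (the default), it expands to nothing, so there is no runtime
// cost and no extra code in release builds.
#[cfg(feature = "debug-log")]
#[macro_export]
macro_rules! uu_debug_log {
    ($($arg:tt)*) => {
        eprintln!($($arg)*) // placeholder sink; a file would avoid polluting stderr
    };
}

#[cfg(not(feature = "debug-log"))]
#[macro_export]
macro_rules! uu_debug_log {
    ($($arg:tt)*) => {};
}
```

Enabling it would then just be `cargo build --features debug-log`.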

@tertsdiepraam
Member

tertsdiepraam commented Apr 28, 2024

I want my logs to be extremely specific and not include anything that someone else added because they were debugging another problem. That's mostly why I don't want a bunch of random log calls throughout the code.

I'll rephrase it with a question: what logging would you add that would be generally helpful for fixing most issues?

@cre4ture
Contributor Author

cre4ture commented Apr 28, 2024

I'll rephrase it with a question: what logging would you add that would be generally helpful for fixing most issues?

Let's take an example: the ls utility gets information from the OS API and prints it in a formatted and transformed way.
To debug issues with this tool, I added some logs that dump the information received from the OS. You could also refer to this information as "raw input". This raw input is the first thing you want to know in order to understand whether an issue comes from the OS or from your implementation.
In my experience, this kind of interface-usage log is useful for many kinds of issues.
You might compare it with the generic tracing tool strace, just that in CI we can't call strace, can we?
It's even better than strace, because it can be specific to the utility.

@BenWiederhake
Collaborator

in CI we can't call strace, can we?

That shouldn't be a problem, if you ever run into an issue where doing that is interesting.

I'd rather argue that strace is not "generally helpful for fixing most issues", so let's not do strace in CI! :P

@cre4ture
Contributor Author

cre4ture commented May 1, 2024

I worked out two proposals:

  1. (log-file as third output channel during tests #6313) The main idea here is that the logs can be part of the stable code base.
    But this means that there needs to be a mechanism to activate them on demand.

  2. (always setup logging, dump whatever its logged #6314) The main idea here is that the logs are not part of the stable code base.
    They would always be removed once debugging is done.
    This way, no explicit activation mechanism is needed, and the test framework
    can always specify the logfile path via an environment variable.
    After execution it checks whether the logfile was written to and, if so, dumps its content (see the sketch below).

Please have a look and tell me if you see any no-goes and whether you have a favorite.
Be aware that these are draft PRs and not yet at production quality.
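
As a rough sketch of the test-framework side of proposal 2 (the environment variable name and the helper are hypothetical, not taken from the PR):

```rust
use std::process::Command;

// Hypothetical helper: point the utility at a log file via an environment
// variable, run it, and dump the log if anything was written to it.
fn run_with_log_dump(util: &str, args: &[&str]) {
    let log_path = std::env::temp_dir().join("uutils_debug.log");
    let status = Command::new(util)
        .args(args)
        .env("UUTILS_LOG_FILE", &log_path) // variable name is an assumption
        .status()
        .expect("failed to run utility");
    // Dump the log so it shows up in the test output, e.g. on failure.
    if let Ok(contents) = std::fs::read_to_string(&log_path) {
        if !contents.is_empty() {
            eprintln!("--- debug log ---\n{contents}");
        }
    }
    assert!(status.success());
}
```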

@BenWiederhake
Collaborator

Let's consider three scenarios:

  • Production: The end user is just trying to execute a binary. In this case, the logger must not interfere, must not cause any output whatsoever, and must not even build up any internal buffers, due to the risk of a performance hit. Logging would not be helpful, and would in fact be an error.
  • CI, or other "general-purpose" development without any modifications: Additional output would cause issues (stderr tests, log size limits, already-terrible performance) and would not help (extremely few issues are CI-specific, and in those cases it's probably better to insert a targeted eprintln!() or whatever).
  • Local development: Additional output would cause some issues (stderr tests, potential confusion), and it has to be highly specific to whatever the developer is trying to debug. For example, test: "unary bool-op" unfixable with current parser #6203 couldn't reasonably have been caught this way, and always outputting the parse tree/list would just produce a lot of noise.

So I'm asking a similar question to the one sylvestre asked: What would you log? In which scenario would that help? You already mentioned that eprintln!() covers nearly all use cases, so why are you so sure that logging to a file is superior? What is the specific use case where having a new logging framework (which pulls in three new dependencies) would actually do more good than harm?

@sylvestre
Sponsor Contributor

Yeah, I am not convinced.
In general, gdb does a great job.
Example:
rust-gdb --args ./target/debug/coreutils cksum --binary --tag --untagged -a md5

(doesn't help with the CI)

@cre4ture
Contributor Author

cre4ture commented May 1, 2024

Thanks for the feedback and for sharing your experience with different strategies.

Nevertheless, I'm still convinced that the logfile is actually superior to the eprintln solution, simply because it doesn't pollute the stderr output, which is quite often subject to asserts in tests.

@BenWiederhake I take from your response that the potential performance impact and the additional dependencies might be seen as no-goes, given the perceived low usefulness.

To improve the benefit/harm ratio, I worked out a third proposal: #6315.
The performance impact is avoided by not emitting the logging statements by default.
No additional dependencies are needed, because it provides its own simple log macro.

This way, the harm goes to zero and the benefit to infinity ;-)
I'm convinced that this is helpful enough to be taken into the main branch.

@tertsdiepraam
Member

tertsdiepraam commented May 5, 2024

Hey @cre4ture! I'd like to take a quick step back and evaluate for a bit.

I wish you had discussed this more before making all these implementations. It's great to see that you're passionate about this, but the problem is that you have now put in a lot of effort, which makes it harder for us to reject this. Additionally, it splits the conversation over multiple pages, and feedback on the PRs gets conflated with feedback on the issue. We should have talked about the concept and its parameters first, here in this issue, before moving on to the implementation. That usually makes the discussion much more friendly and easy :)

This is on us as maintainers as well, by the way; we probably should have been clearer about that. So now, to return to the general purpose of logging: I'd like to bring the discussion back to this issue, and I'm not gonna respond to the PRs for the moment.

Okay, step back taken, let's dive in :)


Let's start with the fundamental questions:

  • What do we want to log?
  • When do we want to log?

"What" is usually quite specific to the application and I think it's hard to define that clearly in general. That gets to a problem with logging in the coreutils: they are all different and a general strategy is hard to define. Hence, I fear that the logging becomes arbitrary and not very useful (as I have already stated before).

"When" can be during testing and debugging only, or exposing it to the end user. This can actually be useful for generating bug reports, but of course comes with a runtime cost. For the coreutils, I therefore think that the answer is "during debugging only".

Alright, then how do we want to enable logging? A feature flag makes sense, so let's indeed roll with that.

Now, you've brought up something interesting: we have to check both stdout and stderr in tests. This means that we don't have a channel for debug logging without breaking tests. And I agree that's a problem!

For me personally, then, all I want is a way to log things to a third channel for temporary debugging. I don't want to keep those logs around after merging the PR. So I think it should either be impossible to use the logging infrastructure when the feature flag is not enabled, or there should be a big warning if you keep the calls around. We could, for example, (ab)use the #[deprecated] attribute with a message stating that the call should be removed, as sketched below.
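
A rough sketch of that #[deprecated] idea (feature and function names are hypothetical; #6315 may do this differently):

```rust
// When the feature is off, the macro expands to a call to a deprecated
// function, so every leftover uu_log!() call site produces a compile-time
// warning reminding the author to remove it.
#[cfg(not(feature = "debug-log"))]
pub mod disabled {
    #[deprecated(note = "debug logging is disabled; remove this call before merging")]
    pub fn leftover_log_call() {}
}

#[cfg(not(feature = "debug-log"))]
#[macro_export]
macro_rules! uu_log {
    ($($arg:tt)*) => {
        $crate::disabled::leftover_log_call()
    };
}
```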

So I think the requirements that I personally want for this are:

  • Should only log when a feature flag is enabled.
  • Should warn (or error) at compile time when the feature isn't enabled.
  • Should not introduce any external dependencies when the feature isn't enabled.
  • Should not introduce a runtime cost.

And this is where I come back to your PR, because it does pretty well on those requirements! I've checked the ones that it meets. So, from my perspective, it can almost be merged, apart from the second point. Is this something you agree with?

cc @cakebaker @sylvestre @BenWiederhake

@cre4ture
Contributor Author

cre4ture commented May 5, 2024

Hey @tertsdiepraam,
BIG thanks for sorting things out.
I apologize for not discussing this in detail beforehand.
I had the feeling that some of the presented concerns were best tackled by giving an example implementation,
and the effort for this was not very high.

I made a further change to address the last missing check in your list.
Simply not providing an implementation when the feature is disabled automatically produces a compilation error (see the sketch below).
The only thing left would be a CI check that the feature stays disabled.
This check should be separated from all other tests so that one can easily see that everything except it is green.
I personally don't think this CI check is needed,
but if you request it, I will continue with the implementation of this check.
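
A minimal sketch of that mechanism, assuming a feature named uu-log (the actual names in #6315 may differ):

```rust
// The macro is only defined when the feature is enabled. With the feature
// off there is no definition at all, so any leftover uu_log!() call is a
// hard compilation error ("cannot find macro `uu_log`") rather than a
// silent no-op.
#[cfg(feature = "uu-log")]
#[macro_export]
macro_rules! uu_log {
    ($($arg:tt)*) => {
        // Placeholder sink; the real implementation would write to the
        // separate log channel instead of stderr.
        eprintln!($($arg)*)
    };
}
```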

Here is the current version: #6315. It still has some uu_log!() calls in the code as usage examples. They will be removed before the merge.

Hoping for positive feedback.
Best regards, cre4ture

@tertsdiepraam
Member

Indeed, I think that CI check should not be needed! Thanks!
