Wrong documentation or behavior of Content.copy method #11720

Closed · scrat98 opened this issue Apr 29, 2024 · 4 comments · Fixed by #11731
Comments

scrat98 commented Apr 29, 2024

Jetty version(s)
12.0.8

Description
According to the documentation https://github.com/jetty/jetty.project/blob/jetty-12.0.x/jetty-core/jetty-io/src/main/java/org/eclipse/jetty/io/Content.java#L62

In case of failure chunks, the content source is failed if the failure chunk is last, else the failing is transient and is ignored.

But if we look at the implementation, it closes the Content.Source when any failure Chunk is consumed: https://github.com/jetty/jetty.project/blob/jetty-12.0.x/jetty-core/jetty-io/src/main/java/org/eclipse/jetty/io/internal/ContentCopier.java#L63

To follow the documentation, the check should be something like this:

if (Content.Chunk.isFailure(current)) {
    // Only a failure chunk that is also the last chunk is fatal.
    boolean fatal = current.isLast();
    if (fatal) {
        throw current.getFailure();
    } else {
        // Transient failure: ignore it, demand more content and reschedule.
        source.demand(this::iterate);
        return Action.SCHEDULED;
    }
}

Note: I think it also makes sense to update the example Copy implementation here https://eclipse.dev/jetty/documentation/jetty-12/programming-guide/index.html#pg-arch-io-content-sink, because it does not even close the Content.Source at all, which may lead to leaking resources (a minimal read loop that follows this rule is sketched after the quote below).

To avoid leaking its resources, a source must either:
- be read until it returns a last chunk, either EOF or a terminal failure
- be failed
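
For illustration, a minimal read-loop sketch that follows this rule. This is my own example, not code from the Jetty sources; the class and method names are made up.

import org.eclipse.jetty.io.Content;

public class SourceConsumer
{
    // A minimal sketch, not code from the Jetty sources: reads the source until
    // a last chunk (EOF or terminal failure) is seen, so its resources do not leak.
    public void consume(Content.Source source)
    {
        while (true)
        {
            Content.Chunk chunk = source.read();

            // No chunk available yet: demand to be called back when one is.
            if (chunk == null)
            {
                source.demand(() -> consume(source));
                return;
            }

            // A failure chunk: terminal if last, transient otherwise.
            if (Content.Chunk.isFailure(chunk))
            {
                if (chunk.isLast())
                    return; // terminal failure, the source is done
                continue;   // transient failure, keep reading (or fail the source and stop)
            }

            boolean last = chunk.isLast();

            // Consume the chunk's bytes, then release the chunk.
            // ... use chunk.getByteBuffer() ...
            chunk.release();

            // EOF: the last chunk has been read, nothing will leak.
            if (last)
                return;
        }
    }
}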
@scrat98 scrat98 added the Bug For general bugs on Jetty side label Apr 29, 2024
@gregw gregw added Documentation and removed Bug For general bugs on Jetty side labels Apr 30, 2024
gregw (Contributor) commented Apr 30, 2024

In this case, I think it is the documentation that is wrong. When reading from a source, the reader is free to decide to ignore transient failures and keep trying.
However, for this utility copy method, if we were to ignore transient failures, then a source that never closed but did produce IdleTimeout failures would cause the copy to hang forever, receiving another idle timeout failure every so often, only to ignore it.

In the places where we do handle transient failures, we don't ignore them. We might avoid closing a connection and then throw to the application/handler/servlet and let it continue its handling, but only to produce an error page.

In this case, perhaps we could have a different copy method that takes a Predicate&lt;Throwable&gt;, which would be called for any transient failure chunk and would let the caller decide whether to ignore it or not. Do you have a use-case for that?
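
For illustration only, a rough sketch of what such a variant could look like. The copy method and its Predicate&lt;Throwable&gt; parameter are hypothetical; only Content.Source, Content.Sink and Callback are real Jetty types, and a real implementation would use an IteratingCallback as ContentCopier does.

import java.util.function.Predicate;
import org.eclipse.jetty.io.Content;
import org.eclipse.jetty.util.Callback;

public class PredicateCopy
{
    // Hypothetical sketch only: neither this method nor the Predicate parameter exists in Jetty.
    // onTransientFailure decides whether a non-last failure chunk is ignored (keep copying)
    // or treated as fatal. The naive recursion here only shows where the predicate is consulted.
    public static void copy(Content.Source source, Content.Sink sink,
                            Predicate<Throwable> onTransientFailure, Callback callback)
    {
        while (true)
        {
            Content.Chunk chunk = source.read();

            if (chunk == null)
            {
                source.demand(() -> copy(source, sink, onTransientFailure, callback));
                return;
            }

            if (Content.Chunk.isFailure(chunk))
            {
                Throwable failure = chunk.getFailure();
                if (!chunk.isLast() && onTransientFailure.test(failure))
                    continue; // the caller chose to ignore this transient failure
                if (!chunk.isLast())
                    source.fail(failure); // abandoning the source, so fail it to avoid leaks
                callback.failed(failure);
                return;
            }

            boolean last = chunk.isLast();
            sink.write(last, chunk.getByteBuffer(), Callback.from(
                () ->
                {
                    chunk.release();
                    if (last)
                        callback.succeeded();
                    else
                        copy(source, sink, onTransientFailure, callback);
                },
                failure ->
                {
                    chunk.release();
                    source.fail(failure);
                    callback.failed(failure);
                }));
            return;
        }
    }
}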

scrat98 (Author) commented Apr 30, 2024

@gregw no, I don't have such a use case. I'm new to Jetty and just carefully read the entire documentation and noticed this. Honestly, I didn't know that transient errors could be repeated indefinitely and that there is no limit configuration for them inside the Jetty selector.

gregw added a commit that referenced this issue May 1, 2024
Fix #11720 by improving the javadoc.
gregw (Contributor) commented May 1, 2024

@scrat98 see #11731. Note that we already have a copy method taking a processor, which would allow transient failures to be handled or ignored. I have javadoc'd it better.
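
For readers landing here later, a hedged usage sketch of that processor-accepting copy. It assumes the overload Content.copy(Source, Sink, Content.Chunk.Processor, Callback), where Chunk.Processor.process(chunk, callback) returns true when the processor has handled the chunk; see the javadoc improved in #11731 for the exact contract.

import org.eclipse.jetty.io.Content;
import org.eclipse.jetty.util.Callback;

public class TransientTolerantCopy
{
    // Assumption: the processor-accepting overload is
    // Content.copy(Source, Sink, Content.Chunk.Processor, Callback), and returning true from
    // process(chunk, callback) means the processor has handled the chunk and will complete
    // the per-chunk callback. Check the javadoc improved in #11731 for the exact contract.
    public static void copyIgnoringTransientFailures(Content.Source source, Content.Sink sink, Callback callback)
    {
        Content.copy(source, sink, (chunk, chunkCallback) ->
        {
            // Intercept only transient (non-last) failure chunks; let copy() handle the rest.
            if (Content.Chunk.isFailure(chunk) && !chunk.isLast())
            {
                // Succeeding the per-chunk callback lets the copy continue reading.
                chunkCallback.succeeded();
                return true;
            }
            return false;
        }, callback);
    }
}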

scrat98 (Author) commented May 1, 2024

@gregw thanks a lot! Looks really good

joakime added a commit that referenced this issue May 3, 2024
* Improve javadoc to fix #11720

Co-authored-by: Joakim Erdfelt <joakim.erdfelt@gmail.com>