Stop sending response data if connection is closed #349

Open
janko wants to merge 1 commit into master

Conversation

janko
Contributor

@janko janko commented May 23, 2018

If the connection has been closed, there is no point in continuing to iterate over the response body and send data, because it will never be written to the socket anyway. Skipping that work saves resources.

Also, once #close has been called on the response body, continuing to iterate over it might raise an exception. Imagine a scenario where the body is a lazy enumerable that retrieves chunks from an open file, and #close closes the file.

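To make that concrete: with the kind of file-backed lazy body described above (the path and chunk size are placeholders), closing the file and then asking for the next chunk raises instead of stopping cleanly:

file = File.open("/path/to/file")   # placeholder path
body = Enumerator.new { |y| y << file.read(16 * 1024) until file.eof? }

body.next   # => first 16 KB chunk
file.close  # what the body's #close ends up doing in this scenario
body.next   # raises IOError ("closed stream") when the loop re-checks file.eof?
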
@igrigorik
Member

If the connection has been closed, there is no point in continuing to iterate over the response body and send data, because it will never be written to the socket anyway. Skipping that work saves resources.

To confirm, this patch does not address that, right?

def stream_data(chunks, &block)
  return if @response.closed?
  @conn.send_data(chunks.next)
  EM.next_tick { stream_data(chunks, &block) }
rescue StopIteration
Member

Doesn't the StopIteration clause already catch the case you're trying to address here? stream_data iterates over the response body, and that will raise an error if the body is closed, which will in turn call terminate_request?

Can we wire up a spec to highlight where the current logic is not sufficient?

Contributor Author

@janko janko May 26, 2018

Doesn't the StopIteration clause already catch the case you're trying to address here?

StopIteration is raised when there are no more chunks. That would be the case only if this example Rack response body:

file = File.open("/path/to/file")
body = Enumerator.new { |y| y << file.read(16*1024) until file.eof? }
body = Rack::BodyProxy.new(body) { file.close } # given block is called on `#close`

was modified to catch the closed file and stop the iteration:

body = Enumerator.new do |y|
  begin
    y << file.read(16*1024) until file.eof?
  rescue IOError
  end
end

But I don't write Rack response bodies like that, because I assume that once the web server calls #close on the body object, it will not continue iterating over it.

stream_data iterates over the response body, and that will raise an error if the body is closed, which will in turn call terminate_request?

In some cases it will. At first I thought it would be a problem that #server_exception also writes the error response to the socket, but by the time #close has been called the connection should already be closed, so that shouldn't happen. Still, the fact that an exception will be logged is not ideal.

Also, in my case I'm streaming an S3 object through Goliath, and my Rack response body object doesn't do anything on #close, so Goliath will happily continue downloading the S3 object even when there is nowhere to send the data (e.g. the connection to the client has been closed). That's the case I'm most concerned about, as I think it could be used for a DoS.
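
For concreteness, the body in that setup is roughly the following shape (a sketch only; the class name, bucket, key, and client configuration are placeholders, not the actual application code):

require "aws-sdk-s3"

class S3StreamingBody
  def initialize(client, bucket, key)
    @client = client
    @bucket = bucket
    @key    = key
  end

  # aws-sdk-s3 yields chunks of the object as they are read off the socket;
  # each chunk is handed to the server as part of the Rack body.
  def each
    @client.get_object(bucket: @bucket, key: @key) do |chunk|
      yield chunk
    end
  end

  # No-op: nothing tells the body to stop, so without a check like
  # @response.closed? the server keeps pulling chunks from S3 even after
  # the client connection is gone.
  def close
  end
end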

Can we wire up a spec to highlight where the current logic is not sufficient?

Yeah, I'll try again to come up with something.
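
One way to wire up such a spec might be to serve a probe body that raises if it is iterated after #close (a sketch, not from Goliath's test suite; StrictBody is a made-up name):

class StrictBody
  def initialize(chunks)
    @chunks = chunks
    @closed = false
  end

  def each
    @chunks.each do |chunk|
      # Simulates e.g. a file-backed body: iterating after #close is an error.
      raise IOError, "body iterated after #close" if @closed
      yield chunk
    end
  end

  def close
    @closed = true
  end
end

A spec could then serve a StrictBody with several chunks, drop the client connection mid-response, and assert that no IOError is raised and that no further chunks are sent.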

Member

@janko-m curious, is there any particular reason why you don't or can't wrap your stream in Rack::BodyProxy? Based on the above, it seems like it would give you the behavior you're after, no?

Contributor Author

@janko janko May 28, 2018

If Rack::BodyProxy#each stopped iterating over the (enumerable) body once #close is called on it, then it would be what I'm after. In other words, if it behaved like this:

body = 10.times.lazy.map { |i| "chunk #{i}" }
body = Rack::BodyProxy.new(body) { ... }
enum = body.enum_for(:each)

enum.next #=> "chunk 0"
enum.next #=> "chunk 1"

body.close

enum.next # doesn't return "chunk 2" but raises StopIteration instead

But from what I can see it doesn't.
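
For illustration, a wrapper with the behavior described above could look roughly like this (a sketch only; ClosableBody is a made-up name, and this is not how Rack::BodyProxy actually works):

class ClosableBody
  def initialize(body, &on_close)
    @body     = body
    @on_close = on_close
    @closed   = false
  end

  def each
    return if @closed
    @body.each do |chunk|
      yield chunk
      break if @closed  # stop pulling further chunks once #close was called
    end
  end

  def close
    @closed = true
    @on_close.call if @on_close
  end
end

With a wrapper like this, the enum_for(:each) example above would raise StopIteration after #close, which is the behavior this patch approximates on Goliath's side by checking @response.closed?.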

@janko
Contributor Author

janko commented May 26, 2018

If the connection has been closed, there is no point in continuing to iterate over the response body and send data, because it will never be written to the socket anyway. Skipping that work saves resources.

To confirm, this patch does not address that, right?

This patch is intended to address that. When the connection is closed, the Goliath::Response object is closed as well (in Goliath::Request#close); we detect that while streaming by checking @response.closed?, and in that case we stop the iteration.
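
To spell that out, here is a tiny self-contained simulation of the flow (FakeResponse and the inline loop are illustrative stand-ins, not Goliath's actual classes):

class FakeResponse
  def initialize
    @closed = false
  end

  def close
    @closed = true
  end

  def closed?
    @closed
  end
end

response = FakeResponse.new
chunks   = %w[chunk0 chunk1 chunk2 chunk3].each  # stand-in for the body enumerator

sent = []
loop do
  break if response.closed?          # the check this patch adds to stream_data
  sent << chunks.next
  response.close if sent.size == 2   # pretend the client disconnected here
end

sent #=> ["chunk0", "chunk1"], iteration stops once the response is closed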
