
Limit pipelined requests from client to avoid memory exhaustion #995

Closed
vankoven opened this issue Apr 3, 2018 · 4 comments

Comments

@vankoven
Contributor

vankoven commented Apr 3, 2018

  • A client sends a huge number of pipelined requests. The first request is much slower than the others: it can be computationally complex, it can be a long-polling request, or the expected response is big and slow to transmit.
    An attacker may also set a zero TCP window size to pause response transmission.
  • All the requests are scheduled to different backend connections and processed in parallel.
  • Since the first response is 'slow' and the client hasn't received it yet, all the following responses for the client are buffered and kept in Tempesta, and an OOM eventually happens (see the client-side sketch after this list).
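
A minimal client-side sketch of this pattern (Python; the host, port, pipeline depth and buffer size are illustrative assumptions, not taken from the issue) pipelines many requests and then never reads, so the client's receive window fills up and the proxy has to buffer every response:

import socket
import time

TARGET = ("tempesta-vm.lan", 80)   # assumed test host, adjust to your setup
PIPELINE_DEPTH = 10000             # arbitrary large number of pipelined requests

REQUEST = (
    b"GET / HTTP/1.1\r\n"
    b"Host: loc\r\n"
    b"Connection: keep-alive\r\n\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Shrink the receive buffer before connecting so the advertised window is small.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
sock.connect(TARGET)

for _ in range(PIPELINE_DEPTH):
    sock.sendall(REQUEST)

# Never read: responses pile up in the proxy while our receive window stays closed.
time.sleep(3600)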

A definitive mitigation is not clear yet, but these two actions seem helpful:

  • Limit the client's pipeline queue size (TfwCliConn->seq_queue). Slow down the client by setting a zero TCP window size if the limit is exceeded (see the sketch after this list).
  • Limit the number of a client's concurrent requests: don't forward all of the client's requests to backend servers immediately; only a limited number of requests may be processed by the backend servers at the same time.
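
A schematic, self-contained sketch of the first action (Python; the class, method names and the limit value are invented for illustration and are not Tempesta code) that caps the per-connection pipeline queue and pauses reads from the client socket once the cap is hit:

from collections import deque

MAX_PIPELINED = 256  # hypothetical cap on queued requests per client connection

class ClientConn:
    """Toy model of a client connection with a bounded pipeline queue."""
    def __init__(self):
        self.seq_queue = deque()     # requests awaiting their responses, in order
        self.reading_paused = False  # back-pressure flag for the client socket

    def on_request(self, req):
        self.seq_queue.append(req)
        if len(self.seq_queue) >= MAX_PIPELINED:
            # Stop reading from the client socket; the kernel then lets the
            # advertised TCP window shrink towards zero.
            self.reading_paused = True

    def on_response_delivered(self):
        self.seq_queue.popleft()
        if self.reading_paused and len(self.seq_queue) < MAX_PIPELINED:
            self.reading_paused = False

# A client pipelines 300 requests while receiving no responses yet:
conn = ClientConn()
for i in range(300):
    if conn.reading_paused:
        break              # further requests stay unread in the socket buffer
    conn.on_request(i)
print(len(conn.seq_queue), conn.reading_paused)   # -> 256 True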

Connected with #488

@vankoven vankoven added this to the 0.6 KTLS milestone Apr 3, 2018
@krizhanovsky krizhanovsky removed their assignment Jun 20, 2018
@krizhanovsky
Contributor

krizhanovsky commented Jun 20, 2018

TfwCliConn->seq_queue keeps all the client requests and the linked responses which have already been answered by a server, so I propose just to account the amount of memory acquired by the responses in the client's proxy_buffering in the sense of #498/#1012.

If we get N pipelined requests from a client and M < N responses from a server, then the (M - 1)'th response can exceed the proxy_buffering limit, but not by more than the appropriate limit for the server connection (server_msg_buffering in #1012 (comment)), so the client will be announced a zero window size. All the following (N - M) requests should be kept in the queue and dropped by server_forward_timeout if the client is too slow to receive all the responses to the previous requests.
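
An illustrative, self-contained sketch of this accounting (Python; the budget value and all names are assumptions, not Tempesta code): every buffered response is charged against a per-client budget, and the client's window is closed once the budget is exceeded:

PROXY_BUFFERING_LIMIT = 1 << 20   # hypothetical per-client budget, 1 MiB

class ClientAccount:
    def __init__(self, limit=PROXY_BUFFERING_LIMIT):
        self.limit = limit
        self.buffered = 0           # bytes of responses waiting for the client
        self.window_closed = False  # whether we announce a zero TCP window

    def on_response_buffered(self, nbytes):
        # The check runs after the whole response is accounted, so one response
        # may overshoot the budget (mirroring the (M - 1)'th-response overshoot
        # described above).
        self.buffered += nbytes
        if self.buffered > self.limit:
            self.window_closed = True

    def on_bytes_sent_to_client(self, nbytes):
        self.buffered -= nbytes
        if self.buffered <= self.limit:
            self.window_closed = False

# Example: 8 buffered responses of 200 KiB each exceed the 1 MiB budget.
acct = ClientAccount()
for _ in range(8):
    acct.on_response_buffered(200 * 1024)
print(acct.buffered, acct.window_closed)   # -> 1638400 True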

So I believe the issue is linked with #498.

@i-rinat
Contributor

i-rinat commented Nov 16, 2018

I was able to reproduce [1] memory exhaustion by sending pipelined requests with the script:

while true; do echo -en "GET / HTTP/1.1\r\nHost: loc\r\nCookie: __tfw=0000000100532f544adae29e04f752b3b1aff1cb57073da8375a0fa1\r\nConnection: keep-alive\r\n\r\n"; done | nc tempesta-vm.lan 80

Tempesta was configured as a plain proxy:

listen 80;
server 192.168.122.1:80;

It's crucial to let nc output to the terminal emulator: that creates the required "slowness".

[1] tested version: c662213

@krizhanovsky
Contributor

The linked issue to test the case: tempesta-tech/tempesta-test#57

@krizhanovsky
Contributor

The subject of the issue is OOM, so we just need to carefully track and limit client memory. Limiting the number of pipelined requests is hard to manage (exactly which number should be specified? How does it relate to other limits, e.g. the server queue size and the number of server connections?)

To solve the issue, client_rmem in #498 was replaced by client_mem; also see the security section in #498.
