Slow read DoS prevention #1715

Open
b3b opened this issue Oct 11, 2022 · 6 comments
Labels: bug, question (Questions and support tasks), security

Comments

@b3b
Contributor

b3b commented Oct 11, 2022

Motivation

Because of the high memory usage per connection, a denial of service is possible when slow clients consume large resources.

Related issues: #498, #1714

Testing

Scenario to reproduce

Host1 (slow clients):

Start a lot of slow (1 byte per second) downloads:
curl --output /dev/null -H 'Connection: close' --parallel-max 999 --parallel --parallel-immediate 'http://tempesta-host/[1-10000]' --limit-rate 1B
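To sanity-check on Host1 that the slow connections are really established, something like the following should work (an optional sketch, assuming a reasonably recent ss from iproute2 and that Tempesta listens on port 80 as above):

# Count established connections from this host to the Tempesta port
ss -Htn state established '( dport = :80 )' | wc -l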

Host2 (legitimate client):

Try to access the resource: curl -v --output /dev/null http://tempesta-host

Depending on the load, the result could be:

  • An empty response; the client hangs for some time and then receives [FIN, ACK] from Tempesta:
    curl: (52) Empty reply from server
  • A partial response is downloaded and the connection is closed by the Tempesta side with [RST, ACK]:
  { [1132 bytes data]
  * Recv failure: Connection reset by peer
  * Closing connection 0
  curl: (56) Recv failure: Connection reset by peer

Tempesta

  • cat /proc/net/sockstat shows a high value of TCP memory usage
  • ss -l shows high Send-Q values
  • dmesg contains messages: TCP: out of memory -- consider tuning tcp_mem (see the command sketch after this list)
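A minimal command sketch for collecting these observations on the Tempesta host (standard procfs/iproute2/dmesg tooling, nothing Tempesta-specific; the port 80 filter matches the listen directive in the config below):

# Overall TCP memory usage, in pages (the "mem" field of the TCP line)
cat /proc/net/sockstat
# Per-connection Send-Q of the client-facing sockets
ss -tn '( sport = :80 )'
# Kernel complaints about TCP memory pressure
dmesg | grep -i 'out of memory'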
tempesta.cfg
listen 80 proto=http;
server 127.0.0.1:8000;
cache 0;

Backend

The backend on port 8000 should return a large response.
Tested with a 200 MB response; larger responses could trigger #1714. A minimal backend sketch follows.
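A minimal sketch of such a backend, assuming Python 3 is available on the backend host (file name and directory are arbitrary; note that the reproduction above requests paths /1 .. /10000, so a backend returning a large body for any URI is closer to the original setup, while this only serves a single 200 MB file):

# Generate a ~200 MB file and serve it over HTTP on port 8000
mkdir -p /tmp/payload
dd if=/dev/zero of=/tmp/payload/large.bin bs=1M count=200
python3 -m http.server 8000 --directory /tmp/payload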

Scenario for quick reproduction

  • Reduce the TCP memory bounds: sysctl -w net.ipv4.tcp_mem='100 100 100' (a sketch for saving and restoring the original values follows this list)
  • Start the first (slow) client: curl -v --output /dev/null http://127.0.0.1 --limit-rate 1B
  • While the first client is running, start a second client: curl -v --output /dev/null http://127.0.0.1
  • The second client hangs until the first client has finished downloading.
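When experimenting with tcp_mem it is easy to lose the original limits; a small sketch for saving and restoring them around the test (the '100 100 100' value is the one from the first step above):

# Save the current limits, tighten them for the test, restore afterwards
ORIG_TCP_MEM=$(sysctl -n net.ipv4.tcp_mem)
sysctl -w net.ipv4.tcp_mem='100 100 100'
# ... run the slow and the legitimate curl clients from the steps above ...
sysctl -w net.ipv4.tcp_mem="$ORIG_TCP_MEM"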
@b3b b3b added the bug label Oct 11, 2022
@b3b b3b changed the title Slow DoS prevention Slow read DoS prevention Oct 11, 2022
@krizhanovsky krizhanovsky added security question Questions and support tasks labels Oct 11, 2022
@krizhanovsky
Contributor

@b3b I'm wondering why client_header_timeout and client_body_timeout (https://github.com/tempesta-tech/tempesta/wiki/HTTP-security) aren't used in the Tempesta config to prevent the attack?

@b3b
Contributor Author

b3b commented Oct 11, 2022

@b3b I'm wondering why client_header_timeout and client_body_timeout (https://github.com/tempesta-tech/tempesta/wiki/HTTP-security) aren't used in the Tempesta config to prevent the attack?

client_header_timeout and client_body_timeout were not used in order to demonstrate the problem of high memory usage per connection.
And these limits alone will not completely eliminate the problem, but they may disrupt large downloads by legitimate users.

@b3b
Contributor Author

b3b commented Oct 11, 2022

On a testing host, memory consumption for sockets when only a single slow client is connected:

$ cat /proc/net/sockstat
sockets: used 242
TCP: inuse 44 orphan 1 tw 0 alloc 80 mem 51987
UDP: inuse 2 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

@b3b
Contributor Author

b3b commented Oct 12, 2022

client_header_timeout and client_body_timeout limits do not affect this issue in any way.

Checked with Tempesta config:

listen 80 proto=http;
server 127.0.0.1:8000;
cache 0;
keepalive_timeout 50;
frang_limits {
    http_header_chunk_cnt 10;
    http_body_chunk_cnt 30;
    client_header_timeout 10;
    client_body_timeout 25;
}

Curl commands:

curl --output /dev/null -H 'Connection: close' --parallel-max 999 --parallel --parallel-immediate 'http://tempesta-host/[1-10000]' --limit-rate 1B -H 'Host: tempesta-tech.com'
curl --output /tmp/xx -H 'Connection: close' 'http://tempesta-host' -H 'Host: tempesta-tech.com'

@krizhanovsky
Contributor

@b3b probably the problem is that we ignore TCP send buffers and just retransmit packets from the backend server as they appear, so if a client connection is significantly slower than the server's, then we hit TCP OOM issues. I.e., this is the subject of #488.

Could you please post the Send-Q values and clarify what we have on the backend: is the requested page large?

@b3b
Contributor Author

b3b commented Oct 12, 2022

@krizhanovsky
For a requested page of 200 MB in size,
when a single client is connected, netstat shows the following for the first, quick part (backend -> Tempesta):

Proto Recv-Q Send-Q Local Address           Foreign Address         State    
tcp        0 130966 127.0.0.1:8000          127.0.0.1:54420         ESTABLISHED

And for the second, slow part (Tempesta -> client):

Proto Recv-Q Send-Q    Local Address          Foreign Address      State    
tcp6       0 209649764    192.168.1.1:80      192.168.1.2:48300    FIN_WAIT1
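For completeness, a rough way to inspect both Send-Q and per-socket memory for such stuck client connections (assuming ss from iproute2; -m adds the skmem counters, -o the timer information):

ss -tnmo state fin-wait-1 '( sport = :80 )'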
