HPN-SSH 18.2.0 #57
-
One of my clients uses hpnssh for large, long-distance file transfers between large servers. The servers are part of a geographically dispersed client-server architecture, each server usually carrying 5,000-8,000 concurrent "local" connections and frequently scaling upwards of 10,000 at peak periods. The same servers usually carry zero, but periodically 1 to 5, hpnssh/hpnsshd connections for large transfers. Total TCP memory utilization is controlled by limiting the local client connections to a maximum TCP socket buffer size of 1 MByte per connection. The bandwidth-delay products of the HPNSSH/HPNSSHD traffic indicate that TCP socket buffer sizes in the range of 32 MBytes to 128 MBytes are needed to reach maximum bandwidth. The /etc/sysctl settings for TCP auto-tuning are net.ipv4.tcp_rmem = '4096 16384 1048576' and net.ipv4.tcp_wmem = '4096 16384 67108864', chosen to keep total TCP memory below the 'pressure zone.' The HPNSSH processes set TcpRcvBuf between 65536 and 524288 kilobytes to enable "full pipe" transfers over paths with 25-400 ms round-trip latency. Without these settings, how does the new version achieve the same throughput in a single connection?
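A minimal sketch of the setup described above, assuming a Linux host; the hostname, file path, and the 131072 KB value are illustrative, and the kilobyte units for -oTcpRcvBuf follow the poster's description (the option itself is removed in 18.2.0):

```sh
# TCP auto-tuning bounds as described above: local client connections are
# capped at a 1 MByte receive buffer, while the send side may grow to 64 MBytes.
sysctl -w net.ipv4.tcp_rmem='4096 16384 1048576'
sysctl -w net.ipv4.tcp_wmem='4096 16384 67108864'

# Pre-18.2.0 transfer: pin the TCP receive buffer for one long-fat-pipe
# connection; 131072 KB = 128 MBytes, inside the 64-512 MByte range above.
# Note that explicit setsockopt(SO_RCVBUF) requests are capped by
# net.core.rmem_max on Linux, which therefore also has to be raised.
hpnssh -oTcpRcvBuf=131072 user@remote-server 'cat > /data/bigfile' < bigfile
```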
-
This release brings HPN-SSH up to parity with OpenSSH 9.5p1. The only other change is that the HPNBufferSize and TcpRcvBuf options were removed. Both options were used to limit throughput by imposing constraints on the receive buffer: at the application layer for HPNBufferSize, and at the TCP layer for TcpRcvBuf. Due to changes in the way flow control was implemented (around 8.9p1), neither of these options actually had any effect. If, at some point, we do need to implement a throughput limiter, there are better and more transparent ways to do it.
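To connect this to the question above, a hedged sketch of the usual alternative on Linux: when the application never pins the buffer with setsockopt(SO_RCVBUF), the kernel's receive-buffer auto-tuning grows the window per socket, on demand, up to the net.ipv4.tcp_rmem ceiling. Raising that ceiling to cover the largest expected bandwidth-delay product replaces the per-process TcpRcvBuf option; the 134217728 (128 MByte) value and the address below are illustrative choices, not project recommendations:

```sh
# Raise the auto-tuning ceiling to the largest expected BDP. Buffers grow
# only as a flow demands, so the thousands of small local connections stay
# near the 16 KB default, and aggregate memory remains bounded by
# net.ipv4.tcp_mem.
sysctl -w net.ipv4.tcp_rmem='4096 131072 134217728'
sysctl -w net.ipv4.tcp_wmem='4096 131072 134217728'

# During a transfer, ss shows the receive buffer (rb field of skmem)
# auto-tuning toward the BDP instead of staying pinned at a fixed value.
ss -tmi dst 203.0.113.10
```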
This discussion was created from the release HPN-SSH 18.2.0.