[pbs-devel] [PATCH proxmox-backup] fix #2983: improve tcp performance
Fabian Grünbichler
f.gruenbichler at proxmox.com
Wed Sep 9 14:51:10 CEST 2020
On September 9, 2020 1:54 pm, Dominik Csapak wrote:
> By leaving the buffer sizes at their defaults, we get much better TCP
> performance over high-latency links.
>
> Throughput is still impacted by latency, but much less so when
> leaving the sizes at default. The disadvantage is slightly higher
> memory usage of the server (details below).
>
> my local benchmarks (proxmox-backup-client benchmark):
>
> pbs client:
> PVE Host
> Epyc 7351P (16core/32thread)
> 64GB Memory
>
> pbs server:
> VM on Host
> 1 Socket, 4 Cores (Host CPU type)
> 4GB Memory
>
> average of 3 runs, rounded to MB/s
>                    | no delay | 1ms     | 5ms     | 10ms    | 25ms    |
> without this patch | 230MB/s  | 55MB/s  | 13MB/s  | 7MB/s   | 3MB/s   |
> with this patch    | 293MB/s  | 293MB/s | 249MB/s | 241MB/s | 104MB/s |
>
> memory usage (resident memory) of proxmox-backup-proxy:
>
>                    | peak during benchmarks | after benchmarks |
> without this patch | 144MB                  | 100MB            |
> with this patch    | 145MB                  | 130MB            |
>
> Signed-off-by: Dominik Csapak <d.csapak at proxmox.com>
Tested-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
AFAICT, the same applies to the client side despite the comment there:
diff --git a/src/client/http_client.rs b/src/client/http_client.rs
index dd457c12..ae3704d6 100644
--- a/src/client/http_client.rs
+++ b/src/client/http_client.rs
@@ -292,7 +292,6 @@ impl HttpClient {
         let mut httpc = hyper::client::HttpConnector::new();
         httpc.set_nodelay(true); // important for h2 download performance!
-        httpc.set_recv_buffer_size(Some(1024*1024)); //important for h2 download performance!
         httpc.enforce_http(false); // we want https...
         let https = HttpsConnector::with_connector(httpc, ssl_connector_builder.build());
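Dropping the call also matches how Linux behaves here: setting SO_RCVBUF
explicitly disables the kernel's receive-window autotuning, so the window
gets capped below the bandwidth-delay product on high-latency links. For
illustration, a minimal sketch of the connector setup after this change
(the TLS wiring is omitted, and make_connector is just a hypothetical
helper name, not code from the tree):

use hyper::client::HttpConnector;

// Sketch of the post-patch connector setup: not calling
// set_recv_buffer_size() leaves SO_RCVBUF at the kernel default,
// so TCP receive-window autotuning stays enabled.
fn make_connector() -> HttpConnector {
    let mut httpc = HttpConnector::new();
    httpc.set_nodelay(true);   // important for h2 download performance!
    httpc.enforce_http(false); // we want https...
    httpc
}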
This leaves restore speed unchanged without artificial delay, but improves
it to the speed without delay when adding 25ms (in this test, the
throughput is not limited by the network since it's an actual restore):

no delay, without patch:   ~50MB/s
no delay, with patch:      ~50MB/s
25ms delay, without patch: ~11MB/s
25ms delay, with patch:    ~50MB/s
Do you see the same effect on your system (proxmox-backup-client restore
.. | pv -trab > /dev/null)? I haven't set up a proper test bed to
minimize caching effects (yet), but I did the following sequence (a rough
stand-in for the pv measurement side is sketched after the list):
build, restart
test restore without delay for 1 minute and watch throughput
test restore with delay for 1 minute and watch throughput
test restore without delay for 1 minute and watch throughput
test restore with delay for 1 minute and watch throughput
patch, rinse, repeat
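For reference, what pv reports in that pipeline can be approximated with a
tiny stdin-to-stdout counter; a rough sketch (not a pv replacement, and it
only prints the average on EOF rather than a live rate; decimal MB):

use std::io::{self, Read, Write};
use std::time::Instant;

// Copy stdin to stdout and report the average throughput on EOF,
// similar in spirit to `pv -trab`.
fn main() -> io::Result<()> {
    let start = Instant::now();
    let mut buf = [0u8; 64 * 1024];
    let mut total: u64 = 0;
    let mut stdin = io::stdin();
    let mut stdout = io::stdout();
    loop {
        let n = stdin.read(&mut buf)?;
        if n == 0 {
            break;
        }
        stdout.write_all(&buf[..n])?;
        total += n as u64;
    }
    let secs = start.elapsed().as_secs_f64();
    eprintln!("average: {:.1} MB/s", total as f64 / 1e6 / secs);
    Ok(())
}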
> ---
> src/bin/proxmox-backup-proxy.rs | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/src/bin/proxmox-backup-proxy.rs b/src/bin/proxmox-backup-proxy.rs
> index 75065e6f..5844e632 100644
> --- a/src/bin/proxmox-backup-proxy.rs
> +++ b/src/bin/proxmox-backup-proxy.rs
> @@ -87,8 +87,6 @@ async fn run() -> Result<(), Error> {
>              let acceptor = Arc::clone(&acceptor);
>              async move {
>                  sock.set_nodelay(true).unwrap();
> -                sock.set_send_buffer_size(1024*1024).unwrap();
> -                sock.set_recv_buffer_size(1024*1024).unwrap();
>                  Ok(tokio_openssl::accept(&acceptor, sock)
>                      .await
>                      .ok() // handshake errors aren't fatal, so return None to filter
> --
> 2.20.1