[pbs-devel] [PATCH proxmox-backup] api: chunk reader: make reading from filesystem fully async

Christian Ebner c.ebner at proxmox.com
Thu Nov 27 09:55:31 CET 2025


On 11/26/25 7:10 PM, Thomas Lamprecht wrote:
> Am 26.11.25 um 17:28 schrieb Christian Ebner:
>> Blocking the thread is problematic here and must be avoided, so
>> read the chunk data via tokio::fs::read() instead of std::fs::read()
>> and make the full loading from filesystem branch async.
> 
> Nothing against that, but "async" here comes a bit with a bigger asterisks,
> as:
> 
> "This operation is implemented by running the equivalent blocking operation
> on a separate thread pool using spawn_blocking."
> -- https://docs.rs/tokio/latest/tokio/fs/fn.read.html
> 
> So technically async, but it doesn't really do any async IO (tokio io uring when? ;)).
> 
> The important thing is that it cannot block anything, so it _is_ an OK solution
> here, might be nice to adapt the commit message slightly though, e.g. something
> like:
> 
> ...::read() to move the blocking file read in the "full loading from filesystem"
> branch to its own thread pool. Can be done on applying though.

True, "fully async" is indeed overreaching and incorrect.

I can send a v2 with an adapted commit message if requested. Just to 
clarify, since this came up in an off-list discussion with Fabian: I do 
not expect this change to be the cause of the issues reported by the 
users, so finding that cause has priority.

> 
>>
>> Encountered while investigating a user provided backtrace looking for
>> possible causes of hanging backups reported in [0].
>>
>> [0] https://forum.proxmox.com/threads/176444/post-819858
>>
>> Signed-off-by: Christian Ebner <c.ebner at proxmox.com>
>> ---
>>   src/api2/reader/mod.rs | 7 ++++---
>>   1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/src/api2/reader/mod.rs b/src/api2/reader/mod.rs
>> index f7adc366f..1e74b0758 100644
>> --- a/src/api2/reader/mod.rs
>> +++ b/src/api2/reader/mod.rs
>> @@ -321,7 +321,7 @@ fn download_chunk(
>>           }
>>   
>>           let body = match &env.backend {
>> -            DatastoreBackend::Filesystem => load_from_filesystem(env, &digest)?,
>> +            DatastoreBackend::Filesystem => load_from_filesystem(env, &digest).await?,
>>               DatastoreBackend::S3(s3_client) => match env.datastore.cache() {
>>                   None => fetch_from_object_store(s3_client, &digest).await?,
>>                   Some(cache) => {
>> @@ -357,13 +357,14 @@ async fn fetch_from_object_store(s3_client: &S3Client, digest: &[u8; 32]) -> Res
>>       bail!("cannot find chunk with digest {}", hex::encode(digest));
>>   }
>>   
>> -fn load_from_filesystem(env: &ReaderEnvironment, digest: &[u8; 32]) -> Result<Body, Error> {
>> +async fn load_from_filesystem(env: &ReaderEnvironment, digest: &[u8; 32]) -> Result<Body, Error> {
>>       let (path, _) = env.datastore.chunk_path(digest);
>>       let path2 = path.clone();
>>   
>>       env.debug(format!("download chunk {path:?}"));
>>   
>> -    let data = proxmox_async::runtime::block_in_place(|| std::fs::read(path))
>> +    let data = tokio::fs::read(path)
>> +        .await
>>           .map_err(move |err| http_err!(BAD_REQUEST, "reading file {path2:?} failed: {err}"))?;
>>       Ok(Body::from(data))
>>   }
> 
More information about the pbs-devel mailing list