[pbs-devel] [PATCH proxmox-backup v3 3/3] sync job: pull: improve log output

Stefan Hanreich s.hanreich at proxmox.com
Thu Apr 13 17:41:56 CEST 2023


Adding an opening line for every group makes parsing the log easier.
We can also remove the 're-sync [...] done' line, because the next
line should be a progress line anyway. Printing the different reasons
for skipping snapshots makes it easier to check whether the
transfer-last parameter is working as expected.

Suggested-by: Fabian Grünbichler <f.gruenbichler at proxmox.com>
Signed-off-by: Stefan Hanreich <s.hanreich at proxmox.com>
---
I'm still quite unsure about the format of the skip reasons. I opted for
newlines, since putting everything on one line seemed quite long:

2023-04-13T15:30:12+02:00: skipped: 4 snapshot(s) (2023-04-12T14:55:59Z .. 2023-04-13T13:02:09Z), 2 already synced, 2 due to transfer-last

One upside of the single-line format would be that it is easy to grep.
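For illustration (hypothetical log excerpt, reusing the sample line from above), a plain grep would pick out each skip summary together with its reasons in one pass:

```shell
# Hypothetical task-log excerpt in the single-line format; grep matches
# the whole skip summary, reasons included, in a single line.
printf '%s\n' \
    '2023-04-13T15:30:12+02:00: skipped: 4 snapshot(s) (2023-04-12T14:55:59Z .. 2023-04-13T13:02:09Z), 2 already synced, 2 due to transfer-last' \
    '2023-04-13T15:30:12+02:00: percentage done: 33.33% (2/6 groups)' \
    | grep 'skipped:'
```

With the multi-line format, the reason lines lack the 'skipped:' marker, so one would need something like `grep -A2 'skipped:'` and still guess the number of continuation lines.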

The current version with newlines looks like this:

2023-04-13T15:08:31+02:00: skipped: 9 snapshot(s) (2023-04-12T14:55:59Z .. 2023-04-13T13:02:22Z)
7 older than newest local snapshot
2 due to transfer-last

Those would be the only lines in the Web UI Task Log without prepended
timestamps, which might be suboptimal as well.

I could move the output of the skip reasons out of the fmt method to be more
flexible, but having everything contained there seemed desirable to me, so I
settled on this solution for now.
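As a rough, self-contained sketch of the approach (simplified stand-in type, no time-range part, counts hard-coded; not the actual pbs SkipInfo), building the message in a String first makes it possible to trim the trailing newline no matter which reason lines were emitted:

```rust
use std::fmt::Write;

// Trimmed-down stand-in for the SkipInfo in the patch, only to
// illustrate the multi-line Display format.
struct SkipInfo {
    already_synced_count: u64,
    transfer_last_count: u64,
}

impl SkipInfo {
    fn count(&self) -> u64 {
        self.already_synced_count + self.transfer_last_count
    }
}

impl std::fmt::Display for SkipInfo {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Build the message in a buffer first, so the trailing newline
        // can be trimmed regardless of which reason lines were written.
        let mut out = String::new();
        writeln!(out, "skipped: {} snapshot(s)", self.count())?;
        if self.already_synced_count > 0 {
            writeln!(out, "{} already synced", self.already_synced_count)?;
        }
        if self.transfer_last_count > 0 {
            writeln!(out, "{} due to transfer-last", self.transfer_last_count)?;
        }
        write!(f, "{}", out.trim_end())
    }
}

fn main() {
    let info = SkipInfo {
        already_synced_count: 7,
        transfer_last_count: 2,
    };
    // Prints three lines: the summary plus one line per non-zero reason.
    println!("{}", info);
}
```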

Complete sample output for the sync of a group in the Web UI after this commit:

2023-04-13T15:08:31+02:00: sync group vm/101
2023-04-13T15:08:31+02:00: re-sync snapshot vm/101/2023-04-13T13:02:17Z
2023-04-13T15:08:31+02:00: no data changes
2023-04-13T15:08:31+02:00: percentage done: 28.79% (1/6 groups, 8/11 snapshots in group #2)
2023-04-13T15:08:31+02:00: sync snapshot vm/101/2023-04-13T13:02:25Z
2023-04-13T15:08:31+02:00: sync archive qemu-server.conf.blob
2023-04-13T15:08:31+02:00: sync archive drive-virtio0.img.fidx
2023-04-13T15:08:31+02:00: downloaded 0 bytes (0.00 MiB/s)
2023-04-13T15:08:31+02:00: sync archive drive-scsi0.img.fidx
2023-04-13T15:08:31+02:00: downloaded 0 bytes (0.00 MiB/s)
2023-04-13T15:08:31+02:00: got backup log file "client.log.blob"
2023-04-13T15:08:31+02:00: sync snapshot vm/101/2023-04-13T13:02:25Z done
2023-04-13T15:08:31+02:00: percentage done: 33.33% (2/6 groups)
2023-04-13T15:08:31+02:00: skipped: 9 snapshot(s) (2023-04-12T14:55:59Z .. 2023-04-13T13:02:22Z)
7 older than newest local snapshot
2 due to transfer-last


Any input would be appreciated, as I'm a bit torn.


 src/server/pull.rs | 32 +++++++++++++++++++++++++++-----
 1 file changed, 27 insertions(+), 5 deletions(-)

diff --git a/src/server/pull.rs b/src/server/pull.rs
index 37058f9b..d6e3c2c4 100644
--- a/src/server/pull.rs
+++ b/src/server/pull.rs
@@ -1,6 +1,7 @@
 //! Sync datastore from remote server
 
 use std::collections::{HashMap, HashSet};
+use std::fmt::Write;
 use std::io::{Seek, SeekFrom};
 use std::sync::atomic::{AtomicUsize, Ordering};
 use std::sync::{Arc, Mutex};
@@ -535,7 +536,6 @@ async fn pull_snapshot_from(
     } else {
         task_log!(worker, "re-sync snapshot {}", snapshot.dir());
         pull_snapshot(worker, reader, snapshot, downloaded_chunks).await?;
-        task_log!(worker, "re-sync snapshot {} done", snapshot.dir());
     }
 
     Ok(())
@@ -589,12 +589,32 @@ impl SkipInfo {
 
 impl std::fmt::Display for SkipInfo {
     fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
-        write!(
-            f,
-            "skipped: {} snapshot(s) ({}) older than the newest local snapshot",
+        let mut reason_string = String::new();
+
+        writeln!(
+            reason_string,
+            "skipped: {} snapshot(s) ({})",
             self.count(),
             self.affected().map_err(|_| std::fmt::Error)?
-        )
+        )?;
+
+        if self.already_synced_count > 0 {
+            writeln!(
+                reason_string,
+                "{} already synced",
+                self.already_synced_count
+            )?;
+        }
+
+        if self.transfer_last_count > 0 {
+            writeln!(
+                reason_string,
+                "{} due to transfer-last",
+                self.transfer_last_count
+            )?;
+        }
+
+        write!(f, "{}", reason_string.trim_end())
     }
 }
 
@@ -635,6 +655,8 @@ async fn pull_group(
     remote_ns: BackupNamespace,
     progress: &mut StoreProgress,
 ) -> Result<(), Error> {
+    task_log!(worker, "sync group {}", group);
+
     let path = format!(
         "api2/json/admin/datastore/{}/snapshots",
         params.source.store()
-- 
2.30.2
