[PVE-User] Unresponsive VM(s) during VZdump

Alexander Burke alex at alexburke.ca
Thu May 9 13:24:06 CEST 2024


Hello all,

My understanding is that if the backing store is ZFS, a snapshot of the zvol underlying the guest's disk(s) is instant and atomic, and the snapshot is what gets backed up, so fleecing is moot. Am I wrong about this?
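
(To make concrete what I have in mind, purely as a sketch with made-up pool/dataset names:

    # instant, atomic snapshot of the zvol backing the guest disk
    zfs snapshot rpool/data/vm-100-disk-0@vzdump

i.e. the backup would read from the snapshot while the guest keeps writing to the live zvol.)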

I know nothing about Ceph other than the fact that it supports snapshots; assuming the above understanding is correct, does snapshot-based backup not work much the same way on Ceph?
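
(Again just a sketch with made-up pool/image names, I would expect the RBD equivalent to be something like:

    rbd snap create rbd/vm-100-disk-0@vzdump

with the backup then reading from that snapshot.)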

Cheers,
Alex
----------------------------------------

2024-05-09T10:11:20Z Mike O'Connor <mike at oeg.com.au>:

> I played with all the drive interface settings; the end result was that I lost customers because backups were causing Windows drive failures.
> Since fleecing became an option, I've not had a lockup.
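>
> For what it's worth, besides the advanced backup settings in the GUI, I believe fleecing can also be set per run on the CLI (VMID and storage name below are just examples):
>
>     vzdump 100 --fleecing enabled=1,storage=local-zfs
>
> The fleecing image only has to absorb writes made while the backup runs, so fast local storage is enough.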
> 
> On 9/5/2024 7:32 pm, Iztok Gregori wrote:
>> Hi Mike!
>> 
>> On 09/05/24 11:30, Mike O'Connor wrote:
>>> You need to enable fleecing in the advanced backup settings. A slow backup storage system will cause this issue; configuring fleecing fixes it by storing guest writes in a local sparse image.
>> 
>> I see that fleecing is available from PVE 8.2 onwards; I will enable it next week once all the nodes have been upgraded to the latest version.
>> 
>> Thanks for the suggestion.
>> 
>> In the meantime I found this thread on the forum:
>> 
>> https://forum.proxmox.com/threads/high-io-wait-during-backups-after-upgrading-to-proxmox-7.113790/
>> 
>> which mentions the max_workers parameter. I changed it to 4 for the next scheduled backup to see if there are any improvements (I migrated the affected VMs so that they pick up the new configuration).
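>> 
>> If I read the thread right, that maps onto vzdump's performance option, e.g. in /etc/vzdump.conf (the value 4 being just my test setting):
>> 
>>     performance: max-workers=4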
>> 
>> I will keep you posted!
>> 
>> Iztok


