[pve-devel] Speed up PVE Backup
aderumier at odiso.com
Tue Jul 26 09:45:11 CEST 2016
>>But you can try to assemble larger blocks, and write them once you get
>>an out of order block...
>>I always thought the Ceph libraries do (or should do) that anyway?
librbd does this if writeback is enabled (it merges coalesced blocks).
But I'm not sure (I don't remember exactly, it needs to be verified) whether it works correctly with the current backup restore or offline disk cloning.
(Maybe there is an fsync after each 64KB block.)
----- Original message -----
From: "dietmar" <dietmar at proxmox.com>
To: "pve-devel" <pve-devel at pve.proxmox.com>, "Eneko Lacunza" <elacunza at binovo.es>
Sent: Wednesday, July 20, 2016 17:46:12
Subject: Re: [pve-devel] Speed up PVE Backup
Objet: Re: [pve-devel] Speed up PVE Backup
> This is called from restore_extents, where a comment precisely says "try
> to write whole clusters to speedup restore", so this means we're writing
> 64KB-8Byte chunks, which gives Ceph RBD a hard time because this
> means lots of ~64KB IOPS.
> So, I suggest the following solution to your consideration:
> - Create a write buffer on startup (let's assume it's 4MB for example, a
> number ceph rbd would like much more than 64KB). This could even be
> configurable and skip the buffer altogether if buffer_size=cluster_size
> - Wrap current "restore_write_data" with a
> "restore_write_data_with_buffer", that does a copy to the 4MB buffer,
> and only calls "restore_write_data" when it's full.
> * Create a new "flush_restore_write_data_buffer" to flush the write
> buffer when device restore reading is complete.
> Do you think this is a good idea? If so I will find time to implement
> and test this to check whether restore time improves.
We store those 64KB blocks out of order, so your suggestion will not work.
But you can try to assemble larger blocks, and write them once you get
an out of order block...
I always thought the Ceph libraries do (or should do) that anyway?
pve-devel mailing list
pve-devel at pve.proxmox.com
More information about the pve-devel mailing list