[pve-devel] [PATCH qemu-server] add discard_granularity to 4M for rbd storage
Alwin Antreich
a.antreich at proxmox.com
Thu Jun 28 14:56:32 CEST 2018
On Thu, Jun 28, 2018 at 10:08:45AM +0200, Alexandre Derumier wrote:
> when we have snapshots on rbd and do a trim, the space usage increases
> http://tracker.ceph.com/issues/18352
>
> we need to trim a full object (4 MB by default) to be able to free space.
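
(Side note: for an existing image the object size can be checked with
rbd info <pool>/<image>, which reports the image's order / object size;
the default order 22 corresponds to 4 MiB objects.)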
>
> test:
>
> without discard_granularity
> ---------------------------
> vm-107-disk-1 20480M 2500M
> vm-107-disk-1 20480M 2500M
>
> vm-107-disk-1 at snap1 20480M 2500M
> vm-107-disk-1 20480M 90112k
>
> vm-107-disk-1 at snap1 20480M 2500M
> vm-107-disk-1 20480M 1020M
>
> with discard_granularity=4M
> ---------------------------
> vm-107-disk-1 20480M 2500M
> vm-107-disk-1 20480M 2500M
>
> vm-107-disk-1 at snap1 20480M 2500M
> vm-107-disk-1 20480M 90112k
>
> vm-107-disk-1 at snap1 20480M 2500M
> vm-107-disk-1 20480M 144M
> ---
> PVE/QemuServer.pm | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 6a355f8..fd9754c 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -1695,6 +1695,11 @@ sub print_drivedevice_full {
> $device .= ",serial=$serial";
> }
>
> + my $volid = $drive->{file};
> + if($volid && $drive->{discard}) {
> + my $storage_name = PVE::Storage::parse_volume_id($volid);
> + $device .= ",discard_granularity=4194304" if $storecfg->{ids}->{$storage_name}->{type} eq 'rbd';
> + }
In my opinion it would be better to have Qemu figure it out automagically, or
to use the discard config option to also carry the granularity.
As an example: discard=on / discard=4194304 (the latter means on, with that
granularity). This way it is configurable per disk image and can be set
according to the needs of the storage (e.g. 8K for a zvol).
For Ceph, the object size can be set when an image is created (--object-size).
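
A rough sketch of how such an option could be handled in
print_drivedevice_full (untested, only to illustrate the idea; the accepted
values and their handling here are assumptions, not existing code):

    # hypothetical: $drive->{discard} holds 'ignore', 'on' or a byte count
    my $discard = $drive->{discard};
    if (defined($discard) && $discard ne 'ignore') {
        if ($discard =~ m/^\d+$/) {
            # a plain number means: discard on, with this granularity
            $device .= ",discard_granularity=$discard";
        }
        # a bare 'on' keeps Qemu's default granularity
    }

And for Ceph the object size could be chosen at image creation, for example
(pool, image name and sizes are just placeholders):

    rbd create --size 20G --object-size 8M rbd/vm-107-disk-1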
>
> return $device;
> }
> --
> 2.11.0
--
Cheers,
Alwin