If the GUI can resize volumes, then the API supports it, which means you should be able to use `pvesh` to do the operation in one command instead of using the LVM commands and the QEMU monitor directly. It only supports specifying the new size in bytes (which it seems to convert to MiB before actually using it), but it's still an option.<br><br>As for the "max available" option, I'd personally find it more useful to upgrade the API itself to support the full range of `lvresize -L` values (it currently uses `lvextend`, which means volumes cannot be reduced in size - a fairly safe approach in case the filesystem inside the VM hasn't been shrunk in advance, but also a bit restrictive), or at least the largest subset we could also support for the other storage plugins. I'll see about implementing that if nobody else gets to it first.<br><br>
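For reference, a rough, untested sketch of what that single `pvesh` call could look like (the node name "node1", VM 100, disk virtio0 and the 32 GiB byte value are just example placeholders):<br><br>
pvesh set /nodes/node1/qemu/100/resize -disk virtio0 -size 34359738368<br><br>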
<div class="gmail_quote">On Thu, Nov 27, 2014, 09:15 Alexandre DERUMIER <<a href="mailto:aderumier@odiso.com">aderumier@odiso.com</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">>>This process is correct when you use the GUI, but when space on the hard<br>
>>disk is limited and you want to change some partitions from the CLI,<br>
>>ultimately working directly with the logical volumes, that is when the<br>
>>problem starts: the VM does not see the change that was applied.<br>
<br>
ah ok.<br>
<br>
This is normal: you need to tell qemu what the new size is.<br>
(This is what we are doing in the code: vm_mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size)); )<br>
<br>
If you manually resize the disk, you need to use the monitor:<br>
<br>
#block_resize device size<br>
<br>
ex:<br>
<br>
#block_resize drive-virtio0 sizeinbytes<br>
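For instance, a rough, untested sketch of the full manual flow, reusing the LV named further down in this thread; the 32G value is only an example, and an explicit size suffix is used to avoid any doubt about which unit the monitor assumes for a bare number:<br><br>
lvextend -L 32G /dev/drbdvg2/vm-100-disk-1<br>
qm monitor 100<br>
(then, at the monitor prompt) block_resize drive-virtio0 32G<br>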
<br>
<br>
<br>
<br>
<br>
<br>
<br>
----- Original Message -----<br>
<br>
From: "Cesar Peschiera" <<a href="mailto:brain@click.com.py" target="_blank">brain@click.com.py</a>><br>
To: "Alexandre DERUMIER" <<a href="mailto:aderumier@odiso.com" target="_blank">aderumier@odiso.com</a>><br>
Cc: <a href="mailto:pve-devel@pve.proxmox.com" target="_blank">pve-devel@pve.proxmox.com</a><br>
Sent: Thursday, November 27, 2014 17:00:37<br>
Subject: Re: [pve-devel] Error between PVE and LVM<br>
<br>
Hi Alexandre<br>
<br>
>Does this value change correctly after the resize?<br>
It does, provided the logical volume had a smaller size before the change.<br>
<br>
>We first extend the LVM disk, then we tell qemu the new size.<br>
This process is correct when you use the GUI, but when space on the hard<br>
disk is limited and you want to change some partitions from the CLI,<br>
ultimately working directly with the logical volumes, that is when the<br>
problem starts: the VM does not see the change that was applied.<br>
<br>
Please let me make two suggestions:<br>
- Maybe it would be better if the PVE GUI had an option that says: "resize to max<br>
available", or something similar.<br>
I think this first option would be very useful, because then the user would not<br>
need to calculate the available space while accounting for the space used by the<br>
metadata of LVM.<br>
<br>
- Moreover, in previous versions of PVE, although I could see the changes<br>
reflected inside the VM, the PVE GUI still showed the old size of its hard<br>
disk; I had to remove the disk and re-add it, and only in that way could I see<br>
its new size.<br>
<br>
>What kind of disk do you use in your guest? virtio? scsi? ide?<br>
Virtio-block. Moreover, I have heard good things about virtio-scsi; do you know<br>
anything about using virtio-scsi on Windows systems?<br>
<br>
Many thanks again for your attention<br>
Best regards<br>
Cesar<br>
<br>
<br>
----- Original Message -----<br>
From: "Alexandre DERUMIER" <<a href="mailto:aderumier@odiso.com" target="_blank">aderumier@odiso.com</a>><br>
To: "Cesar Peschiera" <<a href="mailto:brain@click.com.py" target="_blank">brain@click.com.py</a>><br>
Cc: <<a href="mailto:pve-devel@pve.proxmox.com" target="_blank">pve-devel@pve.proxmox.com</a>><br>
Sent: Thursday, November 27, 2014 6:35 AM<br>
Subject: Re: [pve-devel] Error between PVE and LVM<br>
<br>
<br>
So,<br>
<br>
>>shell# lvs<br>
>>LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert<br>
>>vm-100-disk-1 drbdvg2 -wi------ 30.00g<br>
<br>
Does this value change correctly after the resize?<br>
<br>
<br>
<br>
The resize code is here:<br>
<br>
We first extend the LVM disk, then we tell qemu the new size.<br>
<br>
(What kind of disk do you use in your guest? virtio? scsi? ide?)<br>
<br>
<br>
<br>
<br>
/usr/share/perl5/PVE/QemuServer.pm<br>
<br>
<br>
sub qemu_block_resize {<br>
    my ($vmid, $deviceid, $storecfg, $volid, $size) = @_;<br>
<br>
    my $running = check_running($vmid);<br>
<br>
    return if !PVE::Storage::volume_resize($storecfg, $volid, $size, $running);<br>
<br>
    return if !$running;<br>
<br>
    vm_mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size));<br>
}<br>
<br>
<br>
/usr/share/perl5/PVE/Storage/LVMPlugin.pm<br>
<br>
sub volume_resize {<br>
    my ($class, $scfg, $storeid, $volname, $size, $running) = @_;<br>
<br>
    $size = ($size/1024/1024) . "M";<br>
<br>
    my $path = $class->path($scfg, $volname);<br>
    my $cmd = ['/sbin/lvextend', '-L', $size, $path];<br>
    run_command($cmd, errmsg => "error resizing volume '$path'");<br>
<br>
    return 1;<br>
}<br>
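<br>For completeness, this is the code path behind the GUI/API resize; the CLI front end for the same path would be something along these lines (the VM id, disk key and target size are only illustrative):<br><br>
qm resize 100 virtio0 32G<br>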
<br>
----- Original Message -----<br>
<br>
From: "Cesar Peschiera" <<a href="mailto:brain@click.com.py" target="_blank">brain@click.com.py</a>><br>
To: "Alexandre DERUMIER" <<a href="mailto:aderumier@odiso.com" target="_blank">aderumier@odiso.com</a>><br>
Cc: <a href="mailto:pve-devel@pve.proxmox.com" target="_blank">pve-devel@pve.proxmox.com</a><br>
Sent: Thursday, November 27, 2014 09:11:29<br>
Subject: Re: [pve-devel] Error between PVE and LVM<br>
<br>
Hi Alexandre<br>
<br>
Thanks for your attention. Here are my answers and suggestions about your<br>
customer's problem:<br>
<br>
>We have made no changes since the resize feature was implemented.<br>
>Can you describe the problem on the guest side a little bit more?<br>
>Do you see the disk size increase with parted/fdisk?<br>
I can see the new size of the logical volume (vm-100-disk-1) from the CLI, but<br>
it isn't reflected inside the VM.<br>
In my case DRBD is a layer above the LV, but the LVM concept applies to any<br>
logical volume on any kind of block device that Linux can recognise.<br>
shell# lvs<br>
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert<br>
vm-100-disk-1 drbdvg2 -wi------ 30.00g<br>
data pve -wi-ao--- 143.50g<br>
root pve -wi-ao--- 20.00g<br>
swap pve -wi-ao--- 20.00g<br>
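<br>For reference, the corresponding check from inside the guest would be something like the following, assuming the virtio disk shows up there as /dev/vda:<br><br>
fdisk -l /dev/vda<br>
parted /dev/vda print<br>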
<br>
>I had a customer during a previous training session who had a problem<br>
>with raw LVM disks in VMs,<br>
I don't have problems with the VMs (LVM or raw image file), only with resizing<br>
the LVM volume from the CLI.<br>
<br>
>because the proxmox host was scanning all LVM disks on the host side. (I don't<br>
>remember if it has an impact on resize.)<br>
It doesn't have an impact on resize, and the scanning is necessary so that LVM<br>
can manage the changes online on both sides (VM and host), which is my case.<br>
Moreover, for my DRBD resources, I use this filter in the lvm.conf file to<br>
avoid scanning all LVM disks:<br>
<br>
filter = [ "r|/dev/sdb1|", "r|/dev/sdc1|", "r|/dev/sdd1|", "r|/dev/sde1|",<br>
"r|/dev/disk/|", "r|/dev/block/|", "a/.*/" ]<br>
Where:<br>
a = accept (include), and<br>
r = reject (exclude) the scans to speed startup.<br>
<br>
>We needed to add a filter in lvm.conf on the host to exclude scanning of the<br>
>VMs' LVM disks.<br>
Sure, I use the CLI.<br>
<br>
An additional note from IBM:<br>
According to their best practices, using LVM as a block device is the fastest<br>
way to get the best read and write performance from the disks.<br>
<br>
Official IBM web page on "Best practice: Use block devices for VM<br>
storage":<br>
<a href="http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatbpblock.htm" target="_blank">http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatbpblock.htm</a><br>
<br>
And finally, my question:<br>
Can my problem be corrected?<br>
<br>
Best regards<br>
Cesar<br>
<br>
----- Original Message -----<br>
From: "Alexandre DERUMIER" <<a href="mailto:aderumier@odiso.com" target="_blank">aderumier@odiso.com</a>><br>
To: "Cesar Peschiera" <<a href="mailto:brain@click.com.py" target="_blank">brain@click.com.py</a>><br>
Cc: <<a href="mailto:pve-devel@pve.proxmox.com" target="_blank">pve-devel@pve.proxmox.com</a>><br>
Sent: Thursday, November 27, 2014 3:47 AM<br>
Subject: Re: [pve-devel] Error between PVE and LVM<br>
<br>
<br>
Hi,<br>
<br>
>>In previous versions of PVE, this task was much easier to do.<br>
<br>
We have made no changes since the resize feature was implemented.<br>
Can you describe the problem on the guest side a little bit more?<br>
Do you see the disk size increase with parted/fdisk?<br>
<br>
Do you use a raw LVM disk in your VMs, or partitions on top of LVM?<br>
I have the LVM partition, and the virtual disk image isn't on a file system<br>
(as raw, qcow2, or any other kind of file format), so my disk image is an LVM<br>
volume used as a block device.<br>
<br>
I had a customer during a previous training session who had a problem<br>
with raw LVM disks in VMs,<br>
<br>
because the proxmox host was scanning all LVM disks on the host side. (I don't<br>
remember if it has an impact on resize.)<br>
<br>
We needed to add a filter in lvm.conf on the host to exclude scanning of the<br>
VMs' LVM disks.<br>
<br>
----- Original Message -----<br>
<br>
From: "Cesar Peschiera" <<a href="mailto:brain@click.com.py" target="_blank">brain@click.com.py</a>><br>
To: <a href="mailto:pve-devel@pve.proxmox.com" target="_blank">pve-devel@pve.proxmox.com</a><br>
Sent: Thursday, November 27, 2014 07:15:19<br>
Subject: [pve-devel] Error between PVE and LVM<br>
<br>
Hi to the PVE team.<br>
<br>
I found a problem between PVE and LVM.<br>
<br>
Considering that using LVM as the block device for the virtual disks of the<br>
VMs gives us great convenience under Linux, the problem is that if I enlarge<br>
a Physical Volume online and then enlarge a Logical Volume from the CLI, in<br>
PVE the VM cannot see the new unpartitioned free hard disk space.<br>
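<br>(For context, the CLI steps described above would be roughly the following; the PV device name /dev/drbd0 and the +2G figure are only examples:<br><br>
pvresize /dev/drbd0<br>
lvextend -L +2G /dev/drbdvg2/vm-100-disk-1<br><br>
after which the VM still does not see the extra space.)<br>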
<br>
In previous versions of PVE, this task was much easier to do.<br>
<br>
Moreover, I think this feature is very useful, because under the current<br>
conditions it forces me to power off the VM and start it again, and since the<br>
VM is a server in a production environment, I can only do that outside of<br>
working hours.<br>
<br>
So I would like to ask whether the PVE team is interested in correcting this<br>
problem.<br>
<br>
Best regards<br>
Cesar<br>
<br>
_______________________________________________<br>
pve-devel mailing list<br>
<a href="mailto:pve-devel@pve.proxmox.com" target="_blank">pve-devel@pve.proxmox.com</a><br>
<a href="http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel" target="_blank">http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel</a><br>
</blockquote></div>