[pve-devel] [PATCH v2] (partially) implement bwlimits
Stoiko Ivanov
s.ivanov at proxmox.com
Mon Apr 1 11:30:56 CEST 2019
Changes from v1:
* Incorporated Thomas' great feedback! - Thanks!
* Added a patch to pve-common - there the description said MiB/s, but the
restore code paths take the value as KiB/s [0]
I changed the output of PVE::QemuServer::qemu_drive_mirror to always report the
speed in KB/s, instead of using one of the format_size methods (the one in
JSONSchema seems equivalent to the one in VZDump, and would already have been
included). That way the user sees the same number they have in their config
files or provided as a parameter.
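To illustrate the unit handling, here is a minimal sketch (not the actual patch
code; the helper name and structure are made up for illustration): the value
from the config or parameter is taken as KiB/s, converted to the bytes/s that
the QMP 'speed' option expects, and logged back as the same KiB/s number the
user configured.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical helper, only for illustration: the bwlimit comes in as KiB/s
# (as in the configs/parameters), the QMP 'speed' option wants bytes/s, and
# the log line reports the unchanged KiB/s value back to the user.
sub drive_mirror_speed {
    my ($bwlimit) = @_;
    return (0, "drive mirror without rate limit") if !$bwlimit;

    my $speed = $bwlimit * 1024;    # KiB/s -> bytes/s for QMP
    return ($speed, "drive mirror rate-limited to $bwlimit KB/s");
}

my ($speed, $msg) = drive_mirror_speed(10240);    # 10 MiB/s given as KiB/s
print "$msg (speed=$speed bytes/s)\n";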
I also decided to stick with the 'migrate' bwlimit for qemu_drive_mirror
during online migrations with local disks (instead of the move limit), because
in my initial understanding, migrate is the limit between nodes (over the
network), while the others stay within one node (move_disk, restore, clone).
Regarding the unit of the bwlimits (MiB/s vs. KiB/s) I'm a little bit torn
between the two:
* MiB/s would make more sense and give more sensible numbers (10 GiB/s expressed
as KiB/s makes for a lot of digits)
* other utils I know use KiB/s (e.g. rsync), although this is seldom consistent
(scp uses Kbit/s, cstream Bps)
* probably it would be best to support optional unit suffixes (see e.g. curl),
and define limits without units to be in KiB/s - that should break the fewest
existing setups - but this would be done in a separate patchset (a rough sketch
follows below)
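A rough sketch of what such suffix handling could look like (purely
hypothetical; the function name and accepted suffixes are my own choice): a
bare number stays KiB/s, and K/M/G are interpreted as binary multiples.

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical parser, sketching the optional-suffix idea: a bare number is
# taken as KiB/s (keeps existing setups working), K/M/G are binary multiples.
# Returns the limit normalized to KiB/s.
sub parse_bwlimit {
    my ($value) = @_;
    my %factor = (K => 1, M => 1024, G => 1024 * 1024);    # relative to KiB/s

    if ($value =~ /^(\d+(?:\.\d+)?)\s*([KMGkmg])?$/) {
        my ($num, $suffix) = ($1, uc($2 // 'K'));
        return int($num * $factor{$suffix});
    }
    die "unable to parse bandwidth limit '$value'\n";
}

printf "%d KiB/s\n", parse_bwlimit('500');     # 500 - bare number stays KiB/s
printf "%d KiB/s\n", parse_bwlimit('10M');     # 10240
printf "%d KiB/s\n", parse_bwlimit('1.5G');    # 1572864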
Adapted cover-letter from v1:
This patch-series adds support for bandwidth limits to:
* qemu-server for:
* online migration (including local volumes)
* offline migration (consisting of only local volumes)
* online clone_vm
* online move_vm_disk
* pve-container for:
* migration (only offline)
* clone_vm
* move_volume
The following operations still ignore the bwlimits configured:
* Migration with replicated volumes
* replication (which has its own per-job 'rate' parameter, but should probably
also honor the 'migrate' bwlimit)
* Qemu offline clone and move_vm_disk - this code path currently uses
`qemu-img convert` (which has no rate-limiting parameter yet, but keeps
sparse files sparse)
Additionally, my (quick and limited) tests showed that qemu's drive-mirror seems
to have a discrepancy between the set 'speed' parameter and the actual speed
(IIRC, setting the speed to 10MB/s consistently moved the data at 8MB/s);
however, I stuck to the documentation (the speed parameter is in bytes/s).
[0] e.g. PVE/VZDump.pm:837, PVE/LXC/Create:111, PVE/QemuServer:6174
pve-common:
Stoiko Ivanov (1):
JSONSchema: fix units of bwlimit
src/PVE/JSONSchema.pm | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
qemu-server:
Stoiko Ivanov (8):
bwlimit: honor bwlimit for migrate qmp call
bwlimit: add parameter to qemu_drive_mirror
bwlimit: add parameter to QemuMigrate::sync_disks
QemuMigrate::sync_disks: small refactoring
bwlimit: add parameter for QemuMigrate::phase2
add todo comment for bwlimits in clone_disk
bwlimit: add parameter to QemuServer::clone_disk
bwlimit: add parameter to API2 calls
PVE/API2/Qemu.pm | 36 +++++++++++++++++++++++++++++++++---
PVE/QemuMigrate.pm | 41 ++++++++++++++++++++++++++++++++---------
PVE/QemuServer.pm | 14 ++++++++++----
3 files changed, 75 insertions(+), 16 deletions(-)
pve-container:
Stoiko Ivanov (3):
bwlimit: add parameter to LXC::Migrate
bwlimit: add parameter to rsync in copy_volume
bwlimit: add parameter to API2 calls
src/PVE/API2/LXC.pm | 38 +++++++++++++++++++++++++++++++++-----
src/PVE/LXC.pm | 10 ++++++----
src/PVE/LXC/Migrate.pm | 9 +++++++--
3 files changed, 46 insertions(+), 11 deletions(-)