[pve-devel] template considerations

Dietmar Maurer dietmar at proxmox.com
Sat Jan 26 21:18:56 CET 2013


> >>The current VM template patches heavily depend on the ‘live snapshot’
> >>feature. This works well for some storage types (rbd, sheepdog, …),
> >>but not for all. Most importantly, it does not work well with qcow2 files.
> 
> Well, my current patches allow cloning (clone = linked clone) of
> 
> qcow2,raw : from current (no snapshot)

Sure, and only from the current state - I want to avoid such special cases
to keep the code simpler.

> rbd,sheepdog,nexenta : from snapshots
> 
> But I use the snapshot tree panel to manage both.
> 
> 
> >>So using ‘live snapshots’ for templates has the following drawbacks:
> >>- does not work well for qcow2 files
> >>- does not work with LVM
> Yes indeed, but I don't use snapshots for qcow2/raw clones (I haven't
> implemented LVM)

LVM is a special case anyway - forget it for now.

> >>- we are not able to backup/restore such templates
> 
> Do we really need to backup templates?

For sure! Templates will be very important to you if you use them.
Besides, we get that for free.

> Template disks are protected (read-only files or locked volumes), so there is
> no way to delete them.
> In case of a total storage loss, we also lose the VM disks, so we need to
> restore them, and currently we backup the full VM (base image + real image
> are merged).

For example, if you provide templates for your customers, I am sure you want to
be able to restore them.
 
> Or do you plan to find a way to backup separately the base image and the real
> image ?

No. But with my suggestion, you can simply backup templates.

> - difficult to copy/migrate such templates
> for rbd,sheepdog,nexenta, as they are shared storage, I'm not sure we need
> to copy/migrate the templates.
> But for local storage, copying the qcow2/raw base image across nodes could be
> great, to allow starting a clone child on all nodes.
> 
> 
> >>Terms and Definitions:
> >>
> >>Disk Image : Also called ‘volumes’. We use the following volume naming:
> >>
> >>local:100/vm-100-disk-1.raw
> >>
> >>The real image file resides at (path):
> >>
> >>/var/lib/vz/images/100/vm-100-disk-1.raw
> >>
> >>Base image : A ‘read only’ image file which can be ‘cloned’. We can
> >>use the following volume naming:
> >>
> >>local:base/an-arbitrary-name.raw
> 
> Arbitrary, really? Don't we need some naming convention, or IDs
> somewhere?

Why do you need an ID (the name is the ID)?

> >>The real image resides at (path):
> >>
> >>/var/lib/vz/images/base/an-arbitrary-name.raw
> >>
> >>We can easily clone that by creating a qcow2 image which uses this
> >>as its base image.
> >>
> >>You could also name that a ‘template disk’.
> >>
> >>Cloned image: A disk image which refers to a ‘base image’.
> >>
> >>VM templates : A VM only using base images. You can never start such a VM
> because
> >>the disk images are read-only. But copy/clone/backup/restore should be
> very easy
> >>to implement.
> 
> >>Usage scenario
> >>
> >>Base images :
> >>
> >>When we create/add disks, we can allow choosing from those
> >>base images to create cloned images.
> >>
> 
> Do you think we need this? Isn't VM template cloning enough?

We get that for free. And I imagine it is very convenient.

> >>VM templates :
> >>
> 
> So a VM template is just a VM with base images in its current config, right?

yes
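
For illustration, such a template config could look like this (VMID, names and
values are made-up examples):

    # /etc/pve/qemu-server/9000.conf
    name: debian-template
    memory: 512
    virtio0: local:base/debian-squeeze.raw,size=32G

The only real difference to a normal VM config is that all disks point to
read-only base volumes.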

> >>There are basically two ways to create them.
> >>
> >>1.) manually assemble them by adding ‘base images’ to a VM
> 
> What happens if a VM has both base images and non-base images?

Then you can't start the VM, and you can't create a template.
But it is not a problem.

> Or if the VM has existing live snapshots?

The image, with all its 'internal' snapshots, becomes read-only. But we can also
forbid creating templates in that case.
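
For reference, those internal snapshots are the ones you can list with
(file name is just an example):

    qemu-img snapshot -l /var/lib/vz/images/100/vm-100-disk-1.qcow2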
 
> >>2.) Transform an existing VM automatically.
> >>
> 
> >>If the user wants to modify a template, he needs to:
> >>
> >>1.) clone/copy the VM
> >>2.) start the VM, do any modification, then stop
> >>3.) create a new template from that
> 
> It could be great to roll back a template to a VM, if no clone of the template
> exists. (You create a template, but forgot something, for example.)

Maybe - but you can do that manually with the steps above.
(Yes, it is harder to do.)
 
> >>Other storage types
> >>
> >>This should work with any storage type, because we no longer
> >>depend on internal snapshots.
> 
> >>Directory storage:
> >>
> >>create base image:
> >>mv images/100/vm-100-disk-1.raw images/base/an-arbitrary-name.raw
> >>
> >>create clone:
> >>simply create a qcow2 image which refers to the base image
> 
> ok no problem
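
For illustration, the clone step would then be something like this (untested;
VM 101 and the file names are just examples):

    qemu-img create -f qcow2 \
      -b /var/lib/vz/images/base/an-arbitrary-name.raw \
      /var/lib/vz/images/101/vm-101-disk-1.qcow2

The new qcow2 file only stores the differences to the read-only base image.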
> 
> >>LVM :
> >>
> >>create base image:
> >>simply rename the LV to indicate that it is a template, and set it read-only.
> >>
> >>create clone:
> >>create a snapshot.
> 
> >>Note: I am not really sure if that works on shared storage?
> 
> I have done some quick tests, and it seemed to work, but maybe it can cause
> data corruption.
> Needs to be tested more.

I guess the real problem is that you cannot create a snapshot of a snapshot.
AFAIK it is possible with low-level dm tools, but not with LVM.
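
The basic base/clone operations themselves would be simple, roughly (sketch
only, untested; VG, LV names and sizes are just examples):

    # create base image: rename the LV and set it read-only
    lvrename /dev/pve/vm-100-disk-1 /dev/pve/base-100-disk-1
    lvchange -p r /dev/pve/base-100-disk-1

    # create clone: a writable snapshot of the base LV
    lvcreate -s -n vm-101-disk-1 -L 10G /dev/pve/base-100-disk-1

But as said, a snapshot of such a snapshot is not possible with plain LVM, and
shared LVM needs more thought.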

> >>RDB, Sheepdog and others :
> >>
> >>create base image:
> >>do whatever is needed to create a ‘cloneable’ image. This most likely
> >>involves creating a ‘snapshot’, and renaming/removing the original image.
> >>
> >>create clone:
> >>storage driver dependent.
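
For RBD, for example, it could roughly look like this (sketch only; assumes
format 2 images, names are just examples):

    # create base image: rename, snapshot and protect it
    rbd rename vm-100-disk-1 base-100-disk-1
    rbd snap create base-100-disk-1@base
    rbd snap protect base-100-disk-1@base

    # create clone
    rbd clone base-100-disk-1@base vm-101-disk-1

Sheepdog and others would need their own equivalents.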
> 
> What do we do with existing disk snapshots? (If the original VM has
> snapshots.)
> Do we keep them or delete them?

We do not touch them (should we?), or forbid that action.
 
> Also we need some convention for the snapshot name we create for the
> template (to avoid conflicts with possibly existing disk snapshots).
> Maybe @base? Like the /base/ directory for directory storage.

We currently use 'vm-{vmid}-xxx' as prefix - maybe we can use 'base-xxx'.
 
> I think it should be easy to implement; I'll have time this week to work on it if
> you want.

My feeling is that we can avoid many of those special cases. In general, it should
be easier to implement.

But it would be great if you can implement a first prototype. Maybe there are
other drawbacks which we do not see now.


