[pve-devel] [RFC PATCH cluster/guest-common/qemu-server/container/manager] add backend profile support

Thomas Lamprecht t.lamprecht at proxmox.com
Mon Nov 6 09:53:03 CET 2023


On 06/11/2023 09:17, Dominik Csapak wrote:
> On 11/4/23 09:34, Thomas Lamprecht wrote:
>> On 03/11/2023 12:53, Dominik Csapak wrote:
>>> Using a single section config for both VMs and CTs makes handling the
>>> properties a bit weird. For now I prefix the options with "$type_",
>>> so VM options are e.g. 'vm_ostype', 'vm_name', and so on, while container
>>> options are 'ct_ostype', 'ct_hostname', etc.
>>
>> That properties are shared between different sections by default is IMO
>> not really ideal in general; it also makes the storage API and its docs
>> rather hard to understand & work with.
>>
>> One option could be to opt into a newer behavior, e.g., via some
>> property, or getter, that the section config implementation needs to
>> set, or override to return true, which would then make all properties
>> isolated if not explicitly marked as shared (via another new property).
>> That way we could use it now and move over existing ones where it makes
>> sense, without risking wide breakage.
>>
> 
> ok, so if I'm reading this right, having a config per type is not really
> an option for you? (that would be nice and easy, without having
> to extend the section config at all, but still get most of the
> upsides)

for some definition of "nice", as I find the resulting docs and APIs
not really nice.

I do not want to block a per-type config outright; it just feels wrong
to me to have section configs with types and then not be able to use
that for different types of (in spirit) the same stuff. So I'd much
rather see section config adapted to actually handle different types
better through the whole stack.
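
To make that more concrete, a minimal sketch of such an opt-in; note
that 'has_isolated_properties' is a made-up name, nothing like it
exists in PVE::SectionConfig today:

    package PVE::GuestProfile::Plugin; # hypothetical plugin base

    use strict;
    use warnings;

    use base qw(PVE::SectionConfig);

    # made-up opt-in getter: the base class default would return 0 (the
    # current shared-namespace behavior), while plugins overriding it to
    # return 1 get their properties isolated per section type, unless a
    # property is explicitly marked as shared via another new flag.
    sub has_isolated_properties {
        return 1;
    }

With that, a VM and a CT plugin could both declare a plain 'ostype'
property without needing the '$type_' prefix workaround.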

> the hard part of extending the section config is how to handle the API,
> since when we want to have a 'clean' API per type, we then have

For JSON schema this could be mapped with oneOf support, i.e., for
properties that exist for multiple types with different schemas, one
would create a oneOf with two sub-schemas.

https://json-schema.org/understanding-json-schema/reference/combining#oneOf
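
Purely illustrative, assuming our JSONSchema code gains oneOf support
(the property name and enum values are made up for the example):

    # one API parameter whose schema differs per section type
    my $ostype_schema = {
        oneOf => [
            {
                description => "OS type of a VM profile.",
                type => 'string',
                enum => ['l26', 'win11', 'other'],
            },
            {
                description => "OS type of a CT profile.",
                type => 'string',
                enum => ['debian', 'alpine', 'unmanaged'],
            },
        ],
    };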

That would be clean from an API POV and could then be reused for the
storage API, where we already have this 1:1. While not perfect, it
actually uses section configs for what they were made for and works
well from an API and user POV – it would be of much more use to
improve its rough edges than to just go the easy way (for the dev,
not users) again.

> to also split the API per type? (like e.g. we do with the mapping
> of PCI/USB) and then what's the gain of having both types in a single
> config (besides a single file instead of two?)

API split is not required, so this point is moot. Besides, a single
config makes it easier to ensure that IDs are unique (to avoid
confusion), so even if it were, there would still be some advantages.


>>> Using the same config/plugin system also makes using it a bit weird.
>>> We have to register/init them in the API where they're used, but for the
>>> CLI we have to register only the available type and then init. This
>>> makes it necessary to always set 'allow_unknown' while parsing the
>>> config so that 'qm' doesn't trip over the container profiles...
>>>
>>> A fix for both could be to separate the CT/VM configs into two files
>>> (similar to how we did the PCI/USB mapping configs); that would fix the
>>> prefixing, as well as the register/init issue (we'd have to have
>>> separate APIs then, ofc).
>>>
>>> Another fix would be to extend the section config to allow different
>>> properties of the same name for different types. Should be possible,
>>> although we have to be careful to not break existing ones, and also
>>> the API interface has to be separate for each type then (we cannot
>>> really have conflicting API parameter schemas for the same name?)
>>>
>>> We could also go in a completely different direction and create a config
>>> per profile? (like we handle VM configs). The downside of that is that
>>> the current guest config handling is partly in pmxcfs, so we'd have to
>>> either make that more generic or duplicate it for profiles.
>>
>> I don't see how this would have anything to do with pmxcfs and VMIDs,
>> which map to an actual guest instance and thus need special treatment
>> compared to just some profile that can, e.g., live in a
>> /etc/pve/guest-profiles/ as <id>.profile (the extension just an
>> example) and be handled only via Perl – profiles are only used in
>> the profile management API, where it's ok to fully parse one, and
>> in guest creation POST calls, so even cfs_register* would probably
>> be overkill.
> 
> you're right, I did not explain well what I meant. We currently do
> some things in pmxcfs for VM configs (e.g. read/write/list), and
> we'd have to use the filesystem layer for the profiles if done in Perl
> (which is what I meant with 'duplicating')


Yeah, as you've followed up, ser/de still happens in Perl. So if that
route were chosen, it'd avoid touching CFS at first; we can still add
it to the tracked files later on for improved versioning & caching, if
ever needed – removing it again is a bit harder.
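
For reference, a rough sketch of how little would be needed on the
Perl side (module name, directory and extension are placeholders; only
PVE::Tools::file_get_contents is an existing helper):

    package PVE::GuestProfile; # hypothetical module

    use strict;
    use warnings;

    use PVE::Tools;

    my $profile_dir = '/etc/pve/guest-profiles';

    # a plain directory scan suffices for listing, no cfs_register* needed
    sub list_profiles {
        return [map { m!/([^/]+)\.profile$! ? $1 : () }
            glob("$profile_dir/*.profile")];
    }

    # a full parse only happens in the profile API and on guest creation
    sub load_profile {
        my ($id) = @_;
        my $raw = PVE::Tools::file_get_contents("$profile_dir/$id.profile");
        # ... parse $raw like a guest config ...
        return $raw;
    }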

>> Having a file per profile makes some things relatively easy, but
>> doesn't fit the commonly used section config; let's see if others have
>> input (i.e., actively ask one/some of fabian/wolfgang/fiona).
> 
> I agree, just mentioned it for completeness' sake, but yes, let's wait
> for another opinion
> 

From my gut feeling I'd like that a bit more if we were starting out
fresh, and I guess the VMID guest configs can be seen as prior art here,
but those are always a bit special (as they map to instances, not
something that we just access or that just provides some profile), and I
would like to avoid adding too many special cases – i.e., ideally we
have one way with which we can handle almost all of our needs, as
otherwise the mental cost for devs and users alike will just grow.


>>> * Is the priv path ok? /mapping/profiles/<id> feels a bit weird
>>
>> What feels weird? s/profile/guest-profile/ might be a bit more telling
>> though.
> 
> weird that it's put in the /mapping/ dir, but if that's not an
> issue for you then I'll use /mapping/guest-profile(s)/<id>
> (idk if we should prefer singular or plural for that...)

didn't we have the same discussion for mapping itself (i.e., map vs.
maps vs. mapping vs. mappings)?

IIRC singular was chosen then due to being used more often and to avoid
spending too much time painting that bike shed, so I would do the same
here and go for singular.

> 
>>
>>> * We should probably introduce a 'meta' property for CTs like we have
>>>    for VMs? We could record the profile there and also e.g. the template
>>>    used to set the container up.
>>
>> I assume you mean in the CT config; IIRC there is even an older patch
>> floating around for adding that, for recording the ctime.
>>
>> But it shouldn't be needed, as meta stuff should be only informational;
>> profile names can change (delete/re-add) or get modified, so saving the
>> ID has no use. Logging it to the task log is enough for starters, I think.
>>
> 
> makes sense, I just thought maybe recording the template name + ctime
> would make sense anyway.
> 
> should I just log it for VMs too, or is it OK to put it in the meta
> option there?

I'd start out with just logging it. IMO this is mostly relevant for the
creation event, as it's only applied there (i.e., profile updates do
nothing for already instantiated guests), so adding it to the config, even
if just in the meta-data part, feels wrong to me – and here again, adding
that later, e.g., after user demand shows up, is way easier than removing
it again.
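
I.e., something as minimal as the following in the create worker would
do for now (placement and variable name only as an example):

    # informational only, right where the profile gets applied on create
    print "applying guest profile '$profile_id'\n" if defined($profile_id);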

>>> * there is still an issue with the docs generation, I'll have to look
>>>    into it
>>> * I'm not quite sure how the UI for creating/editing profiles should
>>>    look. For creating, I could imagine a 'create profile from guest'
>>>    type of thing (thanks @mira for the idea), but for editing or creating
>>>    from scratch I'm a bit conflicted. We could simply show the profiles
>>>    in the tree and reuse the VM HW/options panels for that, but does
>>>    that seem like overkill? We could also leave it API-only for now and
>>>    make the GUI later? (Using it in the wizard would also not be trivial,
>>>    but there the constraints are rather straightforward)
>>
>> - view: simple grid with profile ID, profile type and the (unavoidable)
>>    profile comment as the grid columns (for starters). The admins can
>>    set a descriptive ID and comment to know what a profile is intended
>>    for. Showing all profile values is IMO bad UX, as it becomes crowded
>>    super fast for more than a few trivial profiles.
> 
> maybe an additional button that simply shows the config, akin to the
> 'show configuration' button for backups?
> 
> might be overkill if we have an edit panel though...

could be done, but yes, it'd be a bit redundant if there's an edit panel.
Unlike with backup jobs, there's no subject that can influence the job
too, like the per-disk opt-out of backups for guests, which was IIRC one
of the main reasons for the dedicated job view there (as that information
is simply not present in the edit panel of backup jobs).




