[pbs-devel] applied: [RFC PATCH proxmox-backup 2/2] file-restore: dynamically increase memory of vm for zpools

Dominik Csapak d.csapak at proxmox.com
Tue Nov 8 08:59:04 CET 2022


On 11/7/22 18:15, Thomas Lamprecht wrote:
> Am 07/11/2022 um 13:35 schrieb Wolfgang Bumiller:
>> applied
> 
> meh, can we please get this opt-in and only enable it for root@pam or for users
> with some powerful priv on /, as discussed as the chosen approach for allowing more
> memory the last time this came up (off-list, IIRC)... I really do *not* want the
> memory-DoS potential increased by a lot just from opening some file-restore tabs;
> this should actually get more restrictions (for common "non-powerful" users), not fewer..

understandable, so I can do that, but maybe it's time we rethink the file-restore
mechanism as a whole, since it's currently rather unergonomic:

* users don't know how many and which file-restore VMs are running; they
may not even know a VM is started at all
* regardless of my patch, the only thing necessary to start a bunch of VMs
is VM.Backup on the VM and Datastore.AllocateSpace on the storage
(which in turn is probably enough to create an arbitrary number of backups)
* with arbitrarily sized disks/filesystems inside the backup, no fixed amount
of memory we give will ever be enough

so here are some proposals on how to improve that (we won't implement all of them
simultaneously, but maybe something from this list is usable):
* make the list of running file-restore VMs visible, and maybe add a manual 'shutdown'
* limit the number of restore VMs per user (or per VM?)
   - this would need the mechanism from above anyway, since otherwise either the user
     cannot start the restore VM or we have to abort an older VM (with a possibly
     running download operation)
* make the VM memory configurable (per user/VM/globally?)
* limit the global memory usage of all file-restore VMs
   - again, this needs some mechanism for seeing/stopping these VMs
* drop the automatic starting of VMs and make it explicit, i.e.
   make the user start/shut down the VM manually
   - we could have some 'configuration panel' before starting (like with restore)
   - the user is aware that a VM is being started
   - still needs some mechanism to see the VMs, but with a manual start API call
     it's easier to have e.g. a running worker task that can be stopped
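To make the per-user-limit and visibility proposals above a bit more concrete, here is a rough sketch in Rust (purely hypothetical code, not actual proxmox-backup internals; `RestoreVmTracker` and all method names are made up for illustration):

```rust
use std::collections::HashMap;

/// Hypothetical tracker for running file-restore VMs, sketching the
/// "limit restore VMs per user" + "make them visible / manually
/// stoppable" proposals. A real implementation would live in the
/// daemon and key this on the authenticated userid.
struct RestoreVmTracker {
    per_user_limit: usize,
    // userid -> VMIDs of that user's currently running file-restore VMs
    running: HashMap<String, Vec<u64>>,
}

impl RestoreVmTracker {
    fn new(per_user_limit: usize) -> Self {
        Self { per_user_limit, running: HashMap::new() }
    }

    /// Register a new restore VM for `user`, refusing once the per-user
    /// limit is reached -- instead of silently aborting an older VM that
    /// may still be serving a download.
    fn try_start(&mut self, user: &str, vmid: u64) -> Result<(), String> {
        let vms = self.running.entry(user.to_string()).or_default();
        if vms.len() >= self.per_user_limit {
            return Err(format!(
                "user '{}' already has {} file-restore VMs running",
                user,
                vms.len()
            ));
        }
        vms.push(vmid);
        Ok(())
    }

    /// The visibility part: list a user's running restore VMs.
    fn list(&self, user: &str) -> &[u64] {
        self.running.get(user).map(Vec::as_slice).unwrap_or(&[])
    }

    /// Manual shutdown: deregister a VM so a new one may be started.
    fn stop(&mut self, user: &str, vmid: u64) {
        if let Some(vms) = self.running.get_mut(user) {
            vms.retain(|&v| v != vmid);
        }
    }
}

fn main() {
    let mut tracker = RestoreVmTracker::new(2);
    tracker.try_start("alice@pve", 100).unwrap();
    tracker.try_start("alice@pve", 101).unwrap();
    // a third VM for the same user is refused instead of eating more memory
    assert!(tracker.try_start("alice@pve", 102).is_err());
    // after a manual shutdown there is room again
    tracker.stop("alice@pve", 100);
    assert!(tracker.try_start("alice@pve", 102).is_ok());
    assert_eq!(tracker.list("alice@pve"), &[101u64, 102][..]);
}
```

Refusing the new start (rather than reaping the oldest VM) matches the concern above about killing a VM with a download still in flight; a global memory cap would be the same check summed over all users.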

> 
>>
>> AFAICT, if the kernel patch is not applied it'll simply have no effect
>> anyway, so we shouldn't need any "hard dependency bumps" where
>> otherwise things would break?
> 
> if you actually want to enforce that the new behavior is there, you need a
> dependency bump.
