[pve-devel] sdn: looking to unify .cfg files, need opinions about config format

alexandre derumier aderumier at odiso.com
Tue Apr 20 09:51:30 CEST 2021


Hi Thomas!

On 18/04/2021 19:01, Thomas Lamprecht wrote:
> On 12.01.21 10:19, aderumier at odiso.com wrote:
>> Hi,
>>
>> I'm looking to unify the sdn .cfg files into a single file,
>> with something different than the section config format.
>>
>> We have relationships like zones->vnets->subnets,
>> so I was thinking about something like this:
>>
>>
>>
>> [zone myzone]
>>     type: vxlan
>>     option1: xxx
>>     option2: xxx
>> [[vnet myvnet]]
>>     option1: xxx
>>     option2: xxx
>> [[[subnet 10.0.0.0/8]]]
>>     option1: xxx
>>     option2: xxx
>>
>>
>> [controller  mycontroller]
>>     type: evpn
>>     option1: xxx
>>     option2: xxx
>>
>> [dns  mydns]
>>     type: powerdns
>>     option1: xxx
>>     option2: xxx
>>
>>
>> What do you think about this?
> That looks like section config, just spelled differently?
>
> But yes, the way section config does schema and types is not ideal when combined
> with quite different things.
>
> Maybe we should really just go the simple way and keep it separated for now.
>
> For zones it works well this way, there exist different types and we can use that as
> the section config type. Subnets and vnets could be combined, as vnets are really not that
> special I guess?

I think that maybe the only thing that could be improved is indeed 
subnets/vnets.

Currently, we can have the same subnet range defined in different zones,
but they are really different objects, as the gateway or other subnet 
options can be different.

That's why I concatenate zone+subnet to get a unique subnetid, 
something like:

subnet: zone1-192.168.0.0-24
         vnet vnet1
         gateway 192.168.0.1

subnet: zone2-192.168.0.0-24
         vnet vnet2
         gateway 192.168.0.254

It's not bad, but maybe it could be better to define the subnet 
directly inside the vnet.

I'm not sure what the config format should look like to handle this?
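
Just to illustrate, one possible shape (a purely hypothetical sketch, 
reusing the current section config style, with the subnet options moved 
into a property string on the vnet):

vnet: vnet1
         zone zone1
         subnet 192.168.0.0/24,gateway=192.168.0.1

vnet: vnet2
         zone zone2
         subnet 192.168.0.0/24,gateway=192.168.0.254

The subnet would then be keyed by its vnet instead of by a separate 
subnetid, and multiple subnet lines could be allowed per vnet.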



>
> We had a mail about what would be OK to merge, but I do not remember/find it anymore...

A small reminder of the other related patches:


pve-network:
[pve-devel] [PATCH pve-network 0/2] evpn && bgp improvements
https://www.mail-archive.com/pve-devel@lists.proxmox.com/msg03265.html

(2 small patches)

pve-manager:

[PATCH V11 pve-manager 1/1] sdn: add subnet/ipam/sdn management

https://www.mail-archive.com/pve-devel@lists.proxmox.com/msg02746.html

(I merged and rebased the different patches from the previous series)

pve-cluster:

[PATCH V5 pve-cluster 0/5] sdn : add subnets management

https://lists.proxmox.com/pipermail/pve-devel/2020-September/045284.html


pve-common:

INotify: add support for dummy interfaces type

(this is a small patch for ebgp loopback/dummy interface support)

https://www.mail-archive.com/pve-devel@lists.proxmox.com/msg01755.html


pve-container: (maybe we should wait a little bit until qemu support is finished too)

[PATCH pve-container] add ipam support
https://lists.proxmox.com/pipermail/pve-devel/2021-January/046609.html


>> Another way could be a simple yaml config file. (but I think it doesn't
>> really match the current proxmox config formats)
>>
> I do not like YAML too much, it looks simple at first but can do way too much (Turing
> complete, IIRC) and we do not really use it anywhere atm., so that would mean lots
> of new tooling/work to handle it sanely and as a first-class citizen in the PVE
> stack...
>
> My goal would be to do a pve-network bump at end of next week, and for that we
> need pve-cluster bump.
>
> Currently there we get three new configs:
>
> 1. ipams, different management plugins (types), so OK to be own section config
> 2. dns, different APIs/DNS servers (types), so OK to be own section config
> 3. subnets, only one type, or?
Subnets have only one type, indeed.
> hmm, rethinking this now, it could be OK to keep as is... While subnets could
> possibly be merged into vnets, there's only a mediocre benefit there, and the API
> could maybe even get more complicated?

Not sure about the API, but if the current config format with subnetid 
is OK for you, it's OK for me ;)


> If we'd bump now the biggest thing missing is applying an IP to a VM and CT.
>
> For a CT we can quite easily do it.
Yes, I have already sent patches; maybe they need more testing.
>
> For a VM we could even need to support different ways?
>
> * DHCP (?)

For DHCP, it'll be more difficult for bridged setups, as we need one 
DHCP server per subnet.

For routed setups, it's easier.
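
For example (just an illustration, the vnet name and ranges are made 
up), a bridged vnet could get its own dnsmasq instance bound to the 
vnet bridge:

dnsmasq --interface=vnet1 --dhcp-range=192.168.0.10,192.168.0.100,255.255.255.0

and we'd need one such instance (or configuration section) per subnet.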

I think we should look at that later. I have an idea about managing 
some kind of edge gateway VM/appliance feature, like the VMware NSX 
Edge Gateway:

https://bugzilla.proxmox.com/show_bug.cgi?id=3382

where you could manage this kind of central service (DHCP, VPN, 1:1 
NAT, load balancing, ...).

(A lot of users use pfSense, for example, or other gateway appliances; 
my idea is to manage this kind of appliance through the API, or maybe 
to provide our own appliance.)

This should work with any kind of network, bridged/routed, and any 
zone type (vlan/vxlan/...).

But it's a big thing, so later ;)

> * cloudinit

Yes, this is my current plan.

Offline, it's easy; online, it's more difficult.

That's why I was also working on cloud-init recently, with the pending 
changes feature, ...

I need to do more tests with IPAM + ipconfig in the pending state 
(what happens if the user tries to roll back, ...).

Also, with cloud-init it's a little bit tricky to apply changes online, 
but it's possible, at least on Linux, with some udev rules.
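
Roughly (a hypothetical sketch; the rule file and script path are just 
placeholders), a udev rule inside the guest could re-apply the network 
config whenever a NIC appears or changes:

# /etc/udev/rules.d/90-reapply-net.rules (in the guest)
SUBSYSTEM=="net", ACTION=="add|change", RUN+="/usr/local/bin/reapply-net-config %k"

where reapply-net-config would re-read the cloud-init/configdrive data 
and reconfigure the interface.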

That's why I'm also playing with the OpenNebula context agents:

https://github.com/OpenNebula/addon-context-linux

https://github.com/OpenNebula/addon-context-windows

They do the same thing as cloud-init (I added the configdrive format 
recently), but with simple bash scripts + udev rules for online 
changes. It works very well.

I wonder if, in the end, we shouldn't have a similar Proxmox agent (or 
maybe fork it, 
https://github.com/aderumier/addon-context-linux/commits/proxmox ;) to 
get true online changes (adding udev rules for memory/CPU hotplug, 
extending disk/LVM partitions, applying network config changes, ...).

Cloud-init is really designed to bootstrap the VM; online changes are 
still tricky.


> * would the qemu-guest-agent work too?
This is a good question. I don't know if we can write config files 
with the qemu-guest-agent.
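
Looking at the QGA protocol, there are guest-file-open / 
guest-file-write / guest-file-close commands, so in principle something 
like this could push a file into the guest (assuming the agent build 
doesn't block the guest-file-* commands; the path is just an example, 
and the handle is whatever guest-file-open returned):

{ "execute": "guest-file-open", "arguments": { "path": "/etc/network/interfaces.d/sdn", "mode": "w" } }
{ "execute": "guest-file-write", "arguments": { "handle": 1000, "buf-b64": "<base64 config>" } }
{ "execute": "guest-file-close", "arguments": { "handle": 1000 } }

But we'd still need something inside the guest to actually apply the 
new config.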
>
> IMO that's the biggest thing left, or what do you think?
>
Yes, qemu is the biggest thing.

I think the SDN part itself (zones/vnets/subnets/...) is working fine.
