[PVE-User] Simultaneous API call
Mohamed Sadok Ben Jazia
benjazia.mohamedsadok at gmail.com
Sat Feb 13 13:29:47 CET 2016
I see you didn't approve of the idea of an API calling another API, but I see
it as a necessity, especially for handling multiple hosts and nodes when we
install Proxmox in different datacenters.
Thank you for your answer; I care about the answer, not about whether it is
direct or not.
In any case, can you explain this to me in more detail:
"Be sure to respect the licence from the (PHP) API binding you're
using, avoiding legal trouble is always good, I guess"
On 13 February 2016 at 12:36, Thomas Lamprecht <t.lamprecht at proxmox.com>
wrote:
> Hi,
>
> On 13.02.2016 at 11:38, Mohamed Sadok Ben Jazia wrote:
> > Hello, to describe my purpose: I'm supposed to create a commercial
> > platform for providing IaaS and PaaS using Proxmox. For this
> > step, I'm planning to develop a full API for third-party users; the
> > development will be done in PHP to start. The API will contain 3
> > main parts (clustering and managing hosts and nodes, actions on
> > CTs and VMs, authorisation handling).
>
> A ReST API to another ReST API, interesting. :)
> Be sure to respect the licence from the (PHP) API binding you're
> using, avoiding legal trouble is always good, I guess.
>
> > The issue I'm facing here happens when I run simultaneous API calls
> > to create CTs on the same host, because I'm using call-to-nextid(),
> > which returns the same id. I think semaphores or mutexes are not a
> > good option because, if not implemented well, I will have
> > deadlocks, which is worse than a simple error.
>
> Can you please describe to me how my proposed implementation can
> result in a deadlock? AFAIK, solutions which use only one (!)
> lock are deadlock-safe by definition; there is no possibility to
> cause a deadlock in that case. If you're using more locks, just be
> sure to acquire and release them all (!) in the same order and you're
> good to go. You normally only get deadlocks when at least two
> resources (i.e. locks) are wanted and multiple consumers (i.e.
> processes/API callers) are running.
>
> See https://en.wikipedia.org/wiki/Deadlock
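To illustrate the ordering rule above: a minimal Python sketch (not Proxmox-specific, all names are mine) in which every worker acquires the two locks in the same global order, so no deadlock is possible:

```python
import threading

# Two locks; deadlock-safe as long as every thread acquires
# them in the same global order (lock_a always before lock_b).
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(results, n):
    for _ in range(n):
        with lock_a:          # always acquired first
            with lock_b:      # always acquired second
                results.append(1)

results = []
threads = [threading.Thread(target=worker, args=(results, 100)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # 400: every iteration completed, no deadlock
```

If one worker instead took lock_b first, two workers could each hold one lock while waiting for the other, which is exactly the two-resource deadlock described above.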
>
> > I'm simply thinking of implementing call-to-nextid() myself, getting
> > a list of used ids and adding a random number for safety, but it's
> > not a perfect choice.
> >
>
> That's not "added security"; furthermore, you will also run into
> collisions with this approach, as random is random.
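The collision risk of the random-offset idea can be quantified with the birthday bound; a small Python sketch (the function name is mine, purely illustrative):

```python
# If k clients each pick the same base ID plus a random offset in [0, r),
# the chance that at least two of them collide is the birthday bound:
# 1 - (r/r) * ((r-1)/r) * ... * ((r-k+1)/r)
def collision_probability(k, r):
    p_unique = 1.0
    for i in range(k):
        p_unique *= (r - i) / r
    return 1 - p_unique

print(round(collision_probability(2, 10), 2))   # -> 0.1
print(round(collision_probability(5, 100), 3))  # -> 0.097
```

So even with a generous random range, a burst of parallel creates still collides with non-negligible probability, which is why locking (or server-side ID assignment) is the robust fix.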
>
>
> I'm hoping you don't take this answer as an insult/attack, as I'm quite
> direct; I only want to set a few things straight and help you
> understand why I proposed such a solution.
>
> I'm also thinking about making a patch which adds another parameter to
> the create Container and Virtual Machine API calls, which results in
> auto-selecting a free VMID when creating one and returning it when
> finished.
> This seems like a usable "feature" for use cases other than yours too,
> but no promises yet; we have to see if it fits in our concept.
>
> cheers,
> Thomas
>
> > On 13 February 2016 at 10:07, Thomas Lamprecht
> > <t.lamprecht at proxmox.com <mailto:t.lamprecht at proxmox.com>> wrote:
> >
> > Hi,
> >
> > can you show a (simple) example of how you do it? Are you making
> > next id calls from all over the cluster (in parallel)?
> >
> > If you're doing it serially, like:
> >
> > new_vmid = call-to-nextid(); create-ct(new_vmid)
> >
> > you're fine; doing parallel calls, you get race conditions and
> > atomicity violations.
> >
> > Solving that could be done in several ways:
> >
> > * Locking from your side (with semaphore, mutex, ...)
> >
> >
> > LOCK(CREATE)
> >
> > new_vmid = call-to-nextid(); create-ct(new_vmid)
> >
> > UNLOCK(CREATE)
> >
> >
> > this is a bit of overkill though, as you only need to lock until
> > the container config is created in:
> >
> > /etc/pve/nodes/<yournode>/lxc/<new_vmid>.conf
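The coarse-lock variant above can be sketched in Python; call_to_nextid and create_ct here are hypothetical in-memory stand-ins for the real API calls, just to show that one lock around the nextid + create pair is enough to prevent duplicate IDs:

```python
import threading

create_lock = threading.Lock()
existing_ids = set()            # stands in for the cluster's VMID state

def call_to_nextid():
    # hypothetical stand-in for GET /cluster/nextid
    return max(existing_ids, default=99) + 1

def create_ct(vmid):
    # hypothetical stand-in for the container-create API call
    if vmid in existing_ids:
        raise RuntimeError(f"VMID {vmid} already in use")
    existing_ids.add(vmid)

def provision():
    # LOCK(CREATE) ... UNLOCK(CREATE): the lock serialises the
    # nextid + create pair, so no two callers see the same free ID
    with create_lock:
        new_vmid = call_to_nextid()
        create_ct(new_vmid)

threads = [threading.Thread(target=provision) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(existing_ids))  # 20 distinct VMIDs, no duplicates
```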
> >
> > There should also be the possibility to force-create a CT, AFAIK;
> > then you could do the following:
> >
> > LOCK(CREATE)
> >
> > new_vmid = call-to-nextid();
> >
> > // create the cfg file before calling pct create
> > touch /etc/pve/nodes/<yournode>/lxc/<new_vmid>.conf
> >
> > UNLOCK(CREATE)
> >
> > create-ct(new_vmid, force=true)
> >
> >
> > The locked context here would be magnitudes shorter and thus
> > everything more responsive, BUT I haven't tested that, so it could
> > be that I'm missing a check we do; I will quickly test it on Monday.
> >
> > A third way would be to hold a local free-list "bitmap" of VMIDs
> > in your program, which you use to determine the next free ID. This
> > would be synced with the real one from time to time; not that nice,
> > but a possibility.
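A sketch of such a local free-list allocator (class and method names are mine, purely illustrative):

```python
class VmidAllocator:
    """Local free-list of VMIDs, resynced with the cluster now and then."""

    def __init__(self, used):
        self.used = set(used)

    def allocate(self):
        vmid = max(self.used, default=99) + 1
        self.used.add(vmid)       # reserve locally before calling the API
        return vmid

    def resync(self, cluster_used):
        # merge in IDs that were created outside this process
        self.used |= set(cluster_used)

alloc = VmidAllocator({100, 101})
print(alloc.allocate())           # -> 102
alloc.resync({102, 103, 150})     # someone else created 103 and 150
print(alloc.allocate())           # -> 151
```

The weakness is exactly the one noted above: between resyncs, another client can take an ID this allocator still considers free, so it trades correctness for speed.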
> >
> > Out of interest, in what language do you make this? And what's the
> > goal, if I may ask?
> >
> > cheers, Thomas
> >
> >
> > On 10.02.2016 10:08, Mohamed Sadok Ben Jazia wrote:
> >> Hello list, I'm creating LXC containers using the API on Proxmox
> >> 4.1. I use get("/cluster/nextid") to get the next free id to use.
> >> The issue I encountered is that when I make a number of
> >> simultaneous API calls, I get Proxmox trying to create containers
> >> with the same ID, which gave me this error:
> >>
> >> trying to aquire lock...TASK ERROR: can't lock file
> >> '/run/lock/lxc/pve-config-147.lock' - got timeout
> >>
> >> is there a standard way to deal with this?
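For what it's worth, the race can be reproduced outside Proxmox; this Python sketch (with stand-in functions, and a barrier to force the unlucky interleaving) shows two callers reading the same "next" ID:

```python
import threading

existing = set()                  # stands in for the cluster's VMIDs
duplicates = []
barrier = threading.Barrier(2)
report_lock = threading.Lock()    # only serialises the bookkeeping below

def call_to_nextid():
    # hypothetical stand-in for GET /cluster/nextid
    return max(existing, default=99) + 1

def create_unsafe():
    vmid = call_to_nextid()
    barrier.wait()                # both callers have now read the same ID
    with report_lock:
        if vmid in existing:
            duplicates.append(vmid)   # second create collides, as in the
        else:                         # lock-timeout error quoted above
            existing.add(vmid)

threads = [threading.Thread(target=create_unsafe) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(duplicates))  # 1: both threads picked the same VMID
```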
> >>
> >>
> >> _______________________________________________ pve-user mailing
> >> list pve-user at pve.proxmox.com <mailto:pve-user at pve.proxmox.com>
> >> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >>
> >