[pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

Alexandre DERUMIER aderumier at odiso.com
Mon Oct 12 19:20:02 CEST 2015


Sorry, 
I spoke too fast; it doesn't seem to resolve the problem.

It worked for one VM (I don't know why), but the others don't start.

(I had increased the counter to 10000 to be sure.)
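
For reference, the change was along these lines (a sketch assuming the standard sysctl interface; 10000 is just the value mentioned above):

    # raise the limit for the running system
    sysctl -w fs.inotify.max_user_instances=10000

    # make it persistent across reboots
    echo 'fs.inotify.max_user_instances = 10000' >> /etc/sysctl.conf
    sysctl -p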

I'll check again tomorrow.
----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "dietmar" <dietmar at proxmox.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, October 12, 2015 19:15:51
Subject: Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

>>Wild guess - maybe it is related to inotify? 
> 
>># cat /proc/sys/fs/inotify/max_user_instances 
>> 
>>We already ran into that limit with LXC containers recently. 
>>Can you please test? 

Yes, it works after increasing /proc/sys/fs/inotify/max_user_instances! 

Thanks. 


This is strange; it is the same value on the Red Hat 3.10 kernel, but I don't remember this behaviour there. 
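
(For what it's worth, the in-kernel default for fs.inotify.max_user_instances is 128, which would line up with the limit showing up around the 130th VM. A quick way to check the current value with the standard sysctl tool:

    sysctl fs.inotify.max_user_instances
)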


----- Original Message ----- 
From: "dietmar" <dietmar at proxmox.com> 
To: "aderumier" <aderumier at odiso.com>, "pve-devel" <pve-devel at pve.proxmox.com> 
Sent: Monday, October 12, 2015 19:11:47 
Subject: Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks) 

> On October 12, 2015 at 6:53 PM Alexandre DERUMIER <aderumier at odiso.com> wrote: 
> 
> 
> Hi, 
> 
> I upgraded a server with a lot of VMs (160 VMs); 
> 
> each VM is a clone of a template, with one disk per VM. 
> 
> When I started the 130th VM, I got this error: 
> 
> kvm: -drive 
> file=/var/lib/vz/images/104/vm-104-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on: 
> Could not set AIO state: File descriptor in bad state 
> 
> 
> Stopping another VM and then starting the 130th works. 
> 
> 
> It seems to be a system limit. 

Wild guess - maybe it is related to inotify? 

# cat /proc/sys/fs/inotify/max_user_instances 

We already ran into that limit with LXC containers recently. 
Can you please test? 
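
For example, something like this (the usual /proc interface; 512 is just an arbitrary value above the 128 default):

    echo 512 > /proc/sys/fs/inotify/max_user_instances

and then try starting the 130th VM again.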



