[pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

Alexandre DERUMIER aderumier at odiso.com
Mon Oct 12 20:43:45 CEST 2015


Maybe it is related:

proxmox4 : node1

root at kvmmind1:~# cat  /proc/sys/fs/aio-nr
116736
root at kvmmind1:~# cat  /proc/sys/fs/aio-max-nr
65536

proxmox4  : node2 (where I have the problem)


root at kvmmind2:~#  cat  /proc/sys/fs/aio-nr
131072
root at kvmmind2:~# cat /proc/sys/fs/aio-max-nr 
65536


proxmox3  : node3

root at kvmmind3:~# cat  /proc/sys/fs/aio-nr
29184
root at kvmmind3:~# cat  /proc/sys/fs/aio-max-nr 
65536


proxmox3 : node 4
root at kvmmind4:~# cat  /proc/sys/fs/aio-nr
30720
root at kvmmind4:~#  cat  /proc/sys/fs/aio-max-nr 
65536


Each node has around 120-130 VMs, with the same config and disks.
aio-nr seems to be quite high on the proxmox4 nodes, and bigger than aio-max-nr.
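Back-of-the-envelope, the node2 numbers suggest each VM pins on the order of a thousand AIO events. Note the VM count of 128 below is my assumption from the "around 120-130" figure, and the per-context reservation depends on the QEMU version:

```shell
#!/bin/sh
# Rough per-VM AIO cost on node2 (aio-nr is from the thread; the VM count
# is an assumption, not something reported directly).
aio_nr=131072   # /proc/sys/fs/aio-nr on kvmmind2
vms=128         # assumed running VM count ("around 120-130")
echo $((aio_nr / vms))   # prints 1024
```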


I'll try to increase fs.aio-max-nr tomorrow
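For reference, checking and raising the limit could look like the following sketch (the target value 1048576 and the drop-in file name are illustrative, not from this thread):

```shell
#!/bin/sh
# Compare current kernel AIO context usage against the system-wide ceiling.
# Both files are world-readable, so the check itself needs no root.
echo "aio-nr:     $(cat /proc/sys/fs/aio-nr)"
echo "aio-max-nr: $(cat /proc/sys/fs/aio-max-nr)"

# Raising the ceiling needs root (the value is illustrative):
#   sysctl -w fs.aio-max-nr=1048576
# To persist across reboots:
#   echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/99-aio.conf
```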



----- Original Message -----
From: "aderumier" <aderumier at odiso.com>
To: "dietmar" <dietmar at proxmox.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, October 12, 2015 19:20:02
Subject: Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks)

Sorry, 
I spoke too fast; it doesn't seem to resolve the problem. 

It worked for 1 VM (I don't know why), but the others don't start. 

(I had increased the counter to 10000 to be sure.) 

I'll check tomorrow.
----- Original Message ----- 
From: "aderumier" <aderumier at odiso.com> 
To: "dietmar" <dietmar at proxmox.com> 
Cc: "pve-devel" <pve-devel at pve.proxmox.com> 
Sent: Monday, October 12, 2015 19:15:51 
Subject: Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks) 

>>Wild guess - maybe it is related to INotify? 
> 
>># cat /proc/sys/fs/inotify/max_user_instances 
>> 
>>We already ran into that limit with LXC containers recently. 
>>Can you please test? 

Yes, it works after increasing /proc/sys/fs/inotify/max_user_instances! 

Thanks. 


This is strange; it is the same value on the Red Hat 3.10 kernel, but I don't remember this behaviour there. 
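The fix above can be sketched like this (the value 8192 and the drop-in file name are illustrative; the stock default of 128 would line up with the failure starting around the 130th VM):

```shell
#!/bin/sh
# Check the per-user inotify instance limit (default 128 on stock kernels).
cat /proc/sys/fs/inotify/max_user_instances

# Raising it needs root (8192 is an illustrative value):
#   sysctl -w fs.inotify.max_user_instances=8192
# To persist across reboots:
#   echo 'fs.inotify.max_user_instances = 8192' > /etc/sysctl.d/99-inotify.conf
```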


----- Original Message ----- 
From: "dietmar" <dietmar at proxmox.com> 
To: "aderumier" <aderumier at odiso.com>, "pve-devel" <pve-devel at pve.proxmox.com> 
Sent: Monday, October 12, 2015 19:11:47 
Subject: Re: [pve-devel] proxmox 4.0 : "Could not set AIO state: File descriptor in bad state" qemu error with more than 129 kvm (129 disks) 

> On October 12, 2015 at 6:53 PM Alexandre DERUMIER <aderumier at odiso.com> wrote: 
> 
> 
> Hi, 
> 
> I have upgraded a server with a lot of VMs (160 VMs); 
> 
> each VM is a clone of a template, and there is 1 disk per VM. 
> 
> When I started the 130th VM, I got this error: 
> 
> kvm: -drive 
> file=/var/lib/vz/images/104/vm-104-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on: 
> Could not set AIO state: File descriptor in bad state 
> 
> 
> Stopping another VM and starting the 130th works. 
> 
> 
> Seems to be a system limit. 

Wild guess - maybe it is related to INotify? 

# cat /proc/sys/fs/inotify/max_user_instances 

We already ran into that limit with LXC containers recently. 
Can you please test? 

_______________________________________________ 
pve-devel mailing list 
pve-devel at pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

