[pve-devel] pve-kernel package : add irqbalance as recommended ? (like debian linux-image package)
Alexandre DERUMIER
aderumier at odiso.com
Mon Feb 22 10:07:18 CET 2016
>>Nearly all current cards and drivers do that. That's the reason why you
>>don't need it with current HW and drivers. Not sure why Mellanox isn't
>>doing that by default - see Alexandre's post.
>>
>>Adaptec, LSI and Intel all have one queue per CPU / interrupt.
>>
>>Stefan
I don't remember the interrupt distribution from the Ceph benchmark.
But there was also a lot of CPU load at the same time, so maybe irqbalance helps to redistribute
IRQs to less busy cores.
Here is a recent slide deck from Red Hat about irqbalance:
http://events.linuxfoundation.org/sites/events/files/slides/interrupts_16x9_final.pdf
It seems that since irqbalance 1.x, it is almost able to do the same automatic fine tuning as manual affinity.
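For reference, here is a rough Python sketch of what checking the current distribution and setting affinity by hand could look like (the "eth0" pattern and the CPU number are only placeholders, IRQ numbers and queue names differ per driver, and writing smp_affinity_list needs root; irqbalance would normally do this redistribution for you):

#!/usr/bin/env python3
# Rough sketch: list per-CPU interrupt counts for a NIC from /proc/interrupts,
# and optionally pin one of its IRQs to a single CPU by hand.
# "eth0" is only an example pattern; adapt it to your driver's queue names.

import sys

def nic_irq_counts(pattern):
    """Return {irq: [per-CPU counts]} for /proc/interrupts lines matching pattern."""
    result = {}
    with open("/proc/interrupts") as f:
        ncpu = len(f.readline().split())            # header line: CPU0 CPU1 ...
        for line in f:
            fields = line.split()
            if not fields or not fields[0].rstrip(":").isdigit():
                continue                            # skip NMI/LOC/... summary lines
            irq = int(fields[0].rstrip(":"))
            desc = " ".join(fields[1 + ncpu:])
            if pattern in desc:
                result[irq] = [int(c) for c in fields[1:1 + ncpu]]
    return result

def pin_irq(irq, cpu):
    """Bind one IRQ to a single CPU (manual affinity instead of irqbalance)."""
    with open("/proc/irq/%d/smp_affinity_list" % irq, "w") as f:
        f.write(str(cpu))

if __name__ == "__main__":
    pattern = sys.argv[1] if len(sys.argv) > 1 else "eth0"
    for irq, counts in sorted(nic_irq_counts(pattern).items()):
        print(irq, counts)    # if only the CPU0 column grows, nothing spreads the load

If all the counts grow only in the CPU0 column, that is exactly the situation I described, and either irqbalance or a manual pin_irq() per queue will spread the load.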
----- Original Message -----
From: "Stefan Priebe" <s.priebe at profihost.ag>
To: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, February 22, 2016 09:26:31
Subject: Re: [pve-devel] pve-kernel package : add irqbalance as recommended ? (like debian linux-image package)
On 20.02.2016 at 12:40, Martin Waschbüsch wrote:
>
>> On 20.02.2016 at 08:25, Alexandre DERUMIER <aderumier at odiso.com> wrote:
>>
>>>> Some articles, for instance https://www.kernel.org/doc/ols/2009/ols2009-pages-169-184.pdf,
>>>> explicitly recommend disabling irqbalance when 10GbE is involved.
>>>>
>>>> Do you know if this is still true today? After all, the paper is from 2009.
>>
>> Well, the article is about disabling irqbalance AND manually binding CPUs to network interface interrupts.
>>
>> Manual binding is better because you can fine-tune.
>>
>> But comparing irqbalance to doing nothing, irqbalance wins.
>>
>> I have seen a lot of systems using only cpu0 for network interrupts, for example.
>
> Ah, I see. In that case it would make sense indeed.
> The cards I use (both NIC and SAS/RAID) employ one interrupt queue per core, so there was never anything for me to tune.
Nearly all current cards and drivers do that. That's the reason why you
don't need it with current HW and drivers. Not sure why Mellanox isn't
doing that by default - see Alexandre's post.
Adaptec, LSI and Intel all have one queue per CPU / interrupt.
Stefan
_______________________________________________
pve-devel mailing list
pve-devel at pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel