[pve-devel] Hyperconverged Cloud / Qemu + Ceph on same node
Dietmar Maurer
dietmar at proxmox.com
Mon Jul 23 20:53:19 CEST 2018
I am not sure CPU pinning helps. What problem do you want to solve
exactly?
> maybe we could use cgroups? (in the ceph systemd units)
>
> we already use them for VMs && CTs (the "shares" CPU option for priority)
>
>
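If cgroups are the way to go, a minimal sketch of what that could look like as a
systemd drop-in for the OSD units is below (the drop-in path follows the usual
systemd convention; the core list 0-7 and the weight 512 are illustrative values,
not tested recommendations):

    # /etc/systemd/system/ceph-osd@.service.d/cpu.conf
    [Service]
    # keep the OSD daemons on the first 8 cores (adjust to the NUMA layout)
    CPUAffinity=0-7
    # give OSDs half the default CPU weight relative to VMs/CTs (default CPUShares is 1024)
    CPUShares=512

    # apply and restart the OSDs
    systemctl daemon-reload
    systemctl restart ceph-osd.target

That only covers the Ceph side; the VM side would still need matching affinity so
the two workloads do not end up competing for the same cores.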
> ----- Original Message -----
> From: "Stefan Priebe, Profihost AG" <s.priebe at profihost.ag>
> To: "pve-devel" <pve-devel at pve.proxmox.com>
> Sent: Monday, 23 July 2018 13:49:17
> Subject: [pve-devel] Hyperconverged Cloud / Qemu + Ceph on same node
>
> Hello,
>
> after watching / reading:
> https://www.openstack.org/videos/vancouver-2018/high-performance-ceph-for-hyper-converged-telco-nfv-infrastructure
>
> and
> https://www.youtube.com/watch?v=0_V-L7_CDTs&feature=youtu.be
> and
> https://arxiv.org/pdf/1802.08102.pdf
>
> I was thinking about building a Proxmox-based cloud with Ceph running on
> the same nodes as Proxmox.
>
> What I'm missing is: HOW do I get automatic CPU pinning for Qemu and
> Ceph? How can they run in parallel without manually adjusting CPU
> pinning lists? Has anybody already tried this?
>
> Greets,
> Stefan
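The manual version of what is being asked about above would be roughly the
following, per node and per VM (a rough sketch; VM ID 100, the core list 8-15
and the pidfile location under /var/run/qemu-server/ are illustrative
assumptions):

    # pin all threads of VM 100's QEMU process to cores 8-15 (example values)
    taskset -a -c -p 8-15 $(cat /var/run/qemu-server/100.pid)

Keeping that consistent with the Ceph side (and across VM starts and
migrations) is exactly the part that would need automation.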
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>