[PVE-User] Proxmox Ceph high memory usage

Gilberto Nunes gilberto.nunes32 at gmail.com
Wed Jan 16 14:04:01 CET 2019


May I use this command:

ceph osd primary-affinity <osd-id> <weight>


in order to reduce slow requests?
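
For example, to make a slow OSD less likely to be picked as primary (and so
serve fewer reads), I imagine something like this, with osd.0 and the weight
of 0.5 as placeholder values:

ceph osd primary-affinity osd.0 0.5

The weight ranges from 0 to 1; 0 means the OSD is only used as primary when
no other replica is available.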
Thanks
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Wed, Jan 16, 2019 at 10:52, Gilberto Nunes <
gilberto.nunes32 at gmail.com> wrote:

> I already did that:
> ceph config set bluestore_cache_size 536870912
> ceph config set bluestore_cache_size_hdd 536870912
> ceph config set bluestore_cache_size_ssd 1073741824
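>
> To check whether the OSDs actually picked the new values up, I can query a
> daemon on its host (osd.0 here is just an example):
>
> ceph daemon osd.0 config get bluestore_cache_size_hdd
>
> and push the values at runtime with injectargs if needed:
>
> ceph tell osd.* injectargs '--bluestore_cache_size_hdd 536870912'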
>
> Any other clues you may have will be welcome.
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> On Wed, Jan 16, 2019 at 10:42, Ronny Aasen <ronny+pve-user at aasen.cx>
> wrote:
>
>> The memory consumption of the machine is an aggregate of multiple
>> consumers.
>>
>> To identify what is using memory, try commands like
>> top -o VIRT and top -o RES
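>>
>> Or sort processes by resident memory, for example:
>>
>> ps aux --sort=-rss | head -n 15
>>
>> That should make it obvious whether the OSD daemons or the VMs (the kvm
>> processes) are the big consumers.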
>>
>>
>> To reduce VM memory usage, you can move or stop virtual machines,
>> reconfigure them with less memory, or try KSM if you have many
>> identical VMs: https://en.wikipedia.org/wiki/Kernel_same-page_merging
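>>
>> On PVE, KSM is normally driven by ksmtuned; whether it is active and how
>> much it is sharing can be checked with something like:
>>
>> systemctl status ksmtuned
>> cat /sys/kernel/mm/ksm/pages_sharing
>>
>> pages_sharing counts 4KiB pages, so multiply by 4 to get KiB saved.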
>>
>> To reduce Ceph OSD memory consumption, you can tweak the BlueStore memory
>> cache:
>>
>> http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#automatic-cache-sizing
>>
>> With only 16GB I think you need to try to reduce the cache (and hence
>> performance) a bit here.
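>>
>> A persistent way to do that is ceph.conf on each node, for example (512MiB
>> for HDD OSDs, only a starting point to experiment with):
>>
>> [osd]
>> bluestore_cache_size_hdd = 536870912
>>
>> followed by restarting the OSDs one at a time, e.g.
>> systemctl restart ceph-osd@0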
>>
>> Also, Ceph memory usage increases quite a bit when recovering and
>> backfilling, so when planning resource requirements, plan for the
>> recovery situation and keep some free overhead.
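>>
>> If memory pressure during recovery is the problem, recovery itself can
>> also be throttled, for example:
>>
>> ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
>>
>> Recovery takes longer that way, but each OSD keeps fewer PGs in flight
>> at once.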
>>
>> kind regards
>> Ronny Aasen
>>
>> On 16.01.2019 13:28, Gilberto Nunes wrote:
>> > pve-ceph01:~# ceph status
>> >    cluster:
>> >      id:     e67534b4-0a66-48db-ad6f-aa0868e962d8
>> >      health: HEALTH_WARN
>> >              nobackfill,norebalance,norecover,nodeep-scrub flag(s) set
>> >              394106/2589186 objects misplaced (15.221%)
>> >              Degraded data redundancy: 124011/2589186 objects degraded
>> > (4.790%), 158 pgs degraded, 76 pgs undersized
>> >
>> >    services:
>> >      mon: 5 daemons, quorum
>> > pve-ceph01,pve-ceph02,pve-ceph03,pve-ceph04,pve-ceph05
>> >      mgr: pve-ceph05(active), standbys: pve-ceph01, pve-ceph03,
>> pve-ceph04,
>> > pve-ceph02
>> >      osd: 21 osds: 21 up, 21 in; 230 remapped pgs
>> >           flags nobackfill,norebalance,norecover,nodeep-scrub
>> >
>> >    data:
>> >      pools:   1 pools, 512 pgs
>> >      objects: 863.06k objects, 3.17TiB
>> >      usage:   9.73TiB used, 53.0TiB / 62.8TiB avail
>> >      pgs:     124011/2589186 objects degraded (4.790%)
>> >               394106/2589186 objects misplaced (15.221%)
>> >               180 active+clean
>> >               76  active+remapped+backfill_wait
>> >               70  active+recovery_wait
>> >               63  active+undersized+degraded+remapped+backfill_wait
>> >               49  active+recovery_wait+degraded+remapped
>> >               32  active+recovery_wait+degraded
>> >               28  active+recovery_wait+remapped
>> >               12  active+recovery_wait+undersized+degraded+remapped
>> >               1   active+recovering+degraded+remapped
>> >               1   active+undersized+degraded+remapped+backfilling
>> >
>> >    io:
>> >      client:   694KiB/s rd, 172KiB/s wr, 118op/s rd, 38op/s wr
>> >      recovery: 257KiB/s, 0objects/s
>> > ---
>> > Gilberto Nunes Ferreira
>> >
>> > (47) 3025-5907
>> > (47) 99676-7530 - Whatsapp / Telegram
>> >
>> > Skype: gilberto.nunes36
>> >
>> >
>> >
>> >
>> >
>> > On Wed, Jan 16, 2019 at 10:23, Gilberto Nunes <
>> > gilberto.nunes32 at gmail.com> wrote:
>> >
>> >> Hi...
>> >> I am using BlueStore.
>> >> pve-manager/5.3-7/e8ed1e22 (running kernel: 4.15.18-9-pve)
>> >>   ceph                                 12.2.10-pve1
>> >>
>> >> Thanks
>> >> ---
>> >> Gilberto Nunes Ferreira
>> >>
>> >> (47) 3025-5907
>> >> (47) 99676-7530 - Whatsapp / Telegram
>> >>
>> >> Skype: gilberto.nunes36
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Wed, Jan 16, 2019 at 10:17, Eneko Lacunza <elacunza at binovo.es>
>> >> wrote:
>> >>
>> >>> Hi Gilberto,
>> >>>
>> >>> Are you using BlueStore? What version of Ceph?
>> >>>
>> >>> On 16/1/19 at 13:11, Gilberto Nunes wrote:
>> >>>> Hi there
>> >>>>
>> >>>> Has anybody else experienced high memory usage on a Proxmox Ceph
>> >>>> storage server?
>> >>>> I have a 6-node PVE Ceph cluster, and after upgrading I have noticed
>> >>>> this high memory usage...
>> >>>> All servers have 16GB of RAM. I know this is not recommended, but it
>> >>>> is what I have at the moment.
>> >>>> In fact, 3 of the servers run at about 90% memory usage.
>> >>>> All servers are IBM x3200 M2 machines with SATA disks...
>> >>>> Here's ceph osd tree
>> >>>> ceph osd tree
>> >>>> ID  CLASS WEIGHT   TYPE NAME           STATUS REWEIGHT PRI-AFF
>> >>>>    -1       38.50000 root default
>> >>>>    -3        4.00000     host pve-ceph01
>> >>>>     0   hdd  1.00000         osd.0           up  1.00000       0
>> >>>>     1   hdd  1.00000         osd.1           up  1.00000       0
>> >>>>     2   hdd  1.00000         osd.2           up  1.00000       0
>> >>>>     3   hdd  1.00000         osd.3           up  1.00000       0
>> >>>>    -5        8.00000     host pve-ceph02
>> >>>>     4   hdd  2.00000         osd.4           up  1.00000 1.00000
>> >>>>     5   hdd  2.00000         osd.5           up  1.00000 1.00000
>> >>>>     6   hdd  2.00000         osd.6           up  1.00000 1.00000
>> >>>>     7   hdd  2.00000         osd.7           up  1.00000 1.00000
>> >>>>    -7        9.00000     host pve-ceph03
>> >>>>     8   hdd  3.00000         osd.8           up  1.00000 1.00000
>> >>>>     9   hdd  3.00000         osd.9           up  1.00000 1.00000
>> >>>>    10   hdd  3.00000         osd.10          up  1.00000 1.00000
>> >>>>    -9       12.00000     host pve-ceph04
>> >>>>    11   hdd  3.00000         osd.11          up  1.00000 1.00000
>> >>>>    12   hdd  3.00000         osd.12          up  1.00000 1.00000
>> >>>>    13   hdd  3.00000         osd.13          up  1.00000 1.00000
>> >>>>    14   hdd  3.00000         osd.14          up  1.00000 1.00000
>> >>>> -11        1.00000     host pve-ceph05
>> >>>>    15   hdd  0.50000         osd.15          up  1.00000       0
>> >>>>    16   hdd  0.50000         osd.16          up  1.00000       0
>> >>>> -13        4.50000     host pve-ceph06
>> >>>>    17   hdd  1.00000         osd.17          up  1.00000       0
>> >>>>    18   hdd  1.00000         osd.18          up  1.00000       0
>> >>>>    20   hdd  1.00000         osd.20          up  1.00000       0
>> >>>>    21   hdd  1.50000         osd.21          up  1.00000 1.00000
>> >>>>
>> >>>> ---
>> >>>> Gilberto Nunes Ferreira
>> >>>>
>> >>>> (47) 3025-5907
>> >>>> (47) 99676-7530 - Whatsapp / Telegram
>> >>>>
>> >>>> Skype: gilberto.nunes36
>> >>>
>> >>>
>> >>> --
>> >>> Zuzendari Teknikoa / Director Técnico
>> >>> Binovo IT Human Project, S.L.
>> >>> Telf. 943569206
>> >>> Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
>> >>> www.binovo.es
>> >>>
>> >>>
>> >>
>> >
>>
>>
>

