[PVE-User] Proxmox Ceph high memory usage
Gilberto Nunes
gilberto.nunes32 at gmail.com
Wed Jan 16 16:25:57 CET 2019
Well, I realized that setting the memory target to 1GB caused me a lot of trouble.
Now it's set to 2GB and all is OK...
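For reference, a minimal sketch of the matching /etc/pve/ceph.conf stanza, assuming
the 2GB figure above (2147483648 bytes) and that it should apply to all OSDs:

[osd]
     osd_memory_target = 2147483648

Note that osd_memory_target is a best-effort target rather than a hard limit, so an
OSD can still exceed it briefly during recovery or heavy load.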
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Wed, Jan 16, 2019 at 11:40, Gilberto Nunes <
gilberto.nunes32 at gmail.com> wrote:
> My other question is whether I could use affinity to prevent performance
> bottlenecks...
> I have 5 HDDs which are 5,900 RPM... So can I apply this affinity to these
> slow disks?
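> For reference, a minimal sketch, assuming "affinity" here means Ceph's
> primary affinity and that osd.0 through osd.3 sit on the slow 5,900 RPM
> disks; a primary affinity of 0 tells Ceph to avoid choosing those OSDs
> as primaries, so client reads land on faster replicas:
>
> ceph osd primary-affinity osd.0 0
> ceph osd primary-affinity osd.1 0
> ceph osd primary-affinity osd.2 0
> ceph osd primary-affinity osd.3 0
>
> The PRI-AFF column of the ceph osd tree output quoted further down
> already shows 0 for several of the slower OSDs.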
> ---
> Gilberto Nunes Ferreira
>
> (47) 3025-5907
> (47) 99676-7530 - Whatsapp / Telegram
>
> Skype: gilberto.nunes36
>
>
>
>
>
> On Wed, Jan 16, 2019 at 11:18, Gilberto Nunes <
> gilberto.nunes32 at gmail.com> wrote:
>
>> So I used this command:
>>
>> ceph config set osd_memory_target 1073741824
>>
>> and also set it in /etc/pve/ceph.conf.
>> It seems to have a positive effect...
>> I am still monitoring.
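>> A quick way to confirm that a running OSD picked up the new value,
>> assuming osd.0 as an example, is the admin socket on that OSD's host:
>>
>> ceph daemon osd.0 config get osd_memory_target
>>
>> Keep in mind that values written to /etc/pve/ceph.conf only take effect
>> for an OSD after it has been restarted.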
>>
>> Thanks a lot
>> ---
>> Gilberto Nunes Ferreira
>>
>> (47) 3025-5907
>> (47) 99676-7530 - Whatsapp / Telegram
>>
>> Skype: gilberto.nunes36
>>
>>
>>
>>
>>
>> On Wed, Jan 16, 2019 at 11:11, Gilberto Nunes <
>> gilberto.nunes32 at gmail.com> wrote:
>>
>>> Oh! I see it now!
>>> So what I need to change is osd_memory_target instead of bluestore_cache_*,
>>> right?
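>>> For context, a minimal sketch of how the two relate, assuming the
>>> BlueStore cache autotuning that ships together with osd_memory_target:
>>> the OSD resizes its caches automatically to stay near the target, so
>>> the bluestore_cache_size* options need no manual tuning anymore. Both
>>> families of settings can be inspected on a running OSD, e.g. osd.0:
>>>
>>> ceph daemon osd.0 config show | grep -E 'osd_memory_target|bluestore_cache'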
>>>
>>> Thanks
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>>> (47) 3025-5907
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Jan 16, 2019 at 11:07, Alwin Antreich <
>>> a.antreich at proxmox.com> wrote:
>>>
>>>> Hello Gilberto,
>>>>
>>>> On Wed, Jan 16, 2019 at 10:11:06AM -0200, Gilberto Nunes wrote:
>>>> > Hi there
>>>> >
>>>> > Has anybody else experienced high memory usage on a Proxmox Ceph
>>>> > storage server?
>>>> > I have a 6-node PVE Ceph cluster, and after the upgrade I have noticed
>>>> > this high memory usage...
>>>> > All servers have 16GB of RAM. I know this is not recommended, but
>>>> > that's what I have at the moment.
>>>> > In fact, only 3 of the servers run at about 90% memory usage.
>>>> > All servers are IBM x3200 M2 with SATA disks...
>>>> > Here's the output of ceph osd tree:
>>>> > ID  CLASS WEIGHT   TYPE NAME           STATUS REWEIGHT PRI-AFF
>>>> > -1        38.50000 root default
>>>> > -3         4.00000     host pve-ceph01
>>>> >  0    hdd  1.00000         osd.0           up  1.00000       0
>>>> >  1    hdd  1.00000         osd.1           up  1.00000       0
>>>> >  2    hdd  1.00000         osd.2           up  1.00000       0
>>>> >  3    hdd  1.00000         osd.3           up  1.00000       0
>>>> > -5         8.00000     host pve-ceph02
>>>> >  4    hdd  2.00000         osd.4           up  1.00000 1.00000
>>>> >  5    hdd  2.00000         osd.5           up  1.00000 1.00000
>>>> >  6    hdd  2.00000         osd.6           up  1.00000 1.00000
>>>> >  7    hdd  2.00000         osd.7           up  1.00000 1.00000
>>>> > -7         9.00000     host pve-ceph03
>>>> >  8    hdd  3.00000         osd.8           up  1.00000 1.00000
>>>> >  9    hdd  3.00000         osd.9           up  1.00000 1.00000
>>>> > 10    hdd  3.00000         osd.10          up  1.00000 1.00000
>>>> > -9        12.00000     host pve-ceph04
>>>> > 11    hdd  3.00000         osd.11          up  1.00000 1.00000
>>>> > 12    hdd  3.00000         osd.12          up  1.00000 1.00000
>>>> > 13    hdd  3.00000         osd.13          up  1.00000 1.00000
>>>> > 14    hdd  3.00000         osd.14          up  1.00000 1.00000
>>>> > -11        1.00000     host pve-ceph05
>>>> > 15    hdd  0.50000         osd.15          up  1.00000       0
>>>> > 16    hdd  0.50000         osd.16          up  1.00000       0
>>>> > -13        4.50000     host pve-ceph06
>>>> > 17    hdd  1.00000         osd.17          up  1.00000       0
>>>> > 18    hdd  1.00000         osd.18          up  1.00000       0
>>>> > 20    hdd  1.00000         osd.20          up  1.00000       0
>>>> > 21    hdd  1.50000         osd.21          up  1.00000 1.00000
>>>> >
>>>> Did you see the changelog of the package upgrade? It explains why Ceph is
>>>> using more memory than before.
>>>>
>>>> http://download.proxmox.com/debian/ceph-luminous/dists/stretch/main/binary-amd64/ceph_12.2.10-pve1.changelog
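>>>>
>>>> As a rough sanity check, assuming the backported default
>>>> osd_memory_target of 4GB (4294967296 bytes): a host like pve-ceph01
>>>> with 4 OSDs will aim for roughly 4 x 4GB = 16GB for the OSD daemons
>>>> alone, i.e. the machine's entire RAM, which matches the ~90% usage
>>>> reported above. Lowering the target per OSD, as was done earlier in
>>>> this thread, is the usual mitigation on RAM-constrained nodes.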
>>>>
>>>> --
>>>> Cheers,
>>>> Alwin
>>>>
>>>> _______________________________________________
>>>> pve-user mailing list
>>>> pve-user at pve.proxmox.com
>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>>
>>>