[PVE-User] High CPU and Memory utilization

Gilberto Nunes gilberto.nunes32 at gmail.com
Wed Jul 13 19:28:53 CEST 2016


Hi list

I notice high disk-write ops in the Disk IO graph...
For now, I work with just one GlusterFS server, exported over NFS.
If I add one more server, will this improve performance somehow?
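
For example, attaching a second server as a replica brick would look roughly
like this (storage101 is a hypothetical new host; exact syntax depends on the
GlusterFS version):

gluster peer probe storage101
gluster volume add-brick storage replica 2 storage101:/data/volume

On a plain Distribute volume each file lives on a single brick, so a new
brick alone would not speed up a single large qcow2 image; a replica adds
redundancy, but writes then go to both bricks.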

2016-07-13 9:36 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> Hello PVE guys...
>
> I have one host, which is a Dell PowerEdge R430, with 12 cores and 48 GB
> of memory.
> The disks are all 10K RPM SAS.
> I have just one KVM VM with Ubuntu 14.04, which is our Zimbra Mail Server.
> The qcow2 image resides on another server, also a Dell PowerEdge R430,
> with 15 GB of memory. This server acts as storage: a GlusterFS server
> exported over NFS.
> There are 3 SAS 10K RPM disks. Between these two servers, I have three
> 1 Gb NICs set up as bond0, with Bonding Mode: load balancing.
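>
> (A sketch of how such a bond is typically defined in /etc/network/interfaces
> on PVE; the NIC names eth0-eth2 are placeholders, and balance-rr is an
> assumption, consistent with a single iperf stream exceeding 1 Gbit/s:)
>
> auto bond0
> iface bond0 inet static
>         address 10.1.1.140
>         netmask 255.255.255.0
>         slaves eth0 eth1 eth2
>         bond_miimon 100
>         bond_mode balance-rr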
> This is the iperf3 output:
>
> iperf3 -c storage100
> Connecting to host storage100, port 5201
> [  4] local 10.1.1.140 port 36098 connected to 10.1.1.180 port 5201
> [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
> [  4]   0.00-1.00   sec   309 MBytes  2.60 Gbits/sec  101    386 KBytes
> [  4]   1.00-2.00   sec   250 MBytes  2.10 Gbits/sec   20    387 KBytes
> [  4]   2.00-3.00   sec   247 MBytes  2.07 Gbits/sec   20    390 KBytes
> [  4]   3.00-4.00   sec   306 MBytes  2.56 Gbits/sec    0    390 KBytes
> [  4]   4.00-5.00   sec   314 MBytes  2.63 Gbits/sec   13    390 KBytes
> [  4]   5.00-6.00   sec   314 MBytes  2.63 Gbits/sec    0    390 KBytes
> [  4]   6.00-7.00   sec   295 MBytes  2.48 Gbits/sec    0    390 KBytes
> [  4]   7.00-8.00   sec   280 MBytes  2.35 Gbits/sec   90    468 KBytes
> [  4]   8.00-9.00   sec   297 MBytes  2.49 Gbits/sec    0    468 KBytes
> [  4]   9.00-10.00  sec   304 MBytes  2.55 Gbits/sec  142    485 KBytes
> - - - - - - - - - - - - - - - - - - - - - - - - -
> [ ID] Interval           Transfer     Bandwidth       Retr
> [  4]   0.00-10.00  sec  2.85 GBytes  2.45 Gbits/sec  386             sender
> [  4]   0.00-10.00  sec  2.85 GBytes  2.44 Gbits/sec                  receiver
>
> iperf Done.
>
> Sometimes, and this is more often than I wish, I run into high CPU and
> memory utilization: %CPU reaches 250-300 and %MEM reaches 77-80.
> I don't know exactly whether this is the cause of the slow disk access,
> but in fact I see more than 50 MB/s of disk writes, as shown by the
> Disk IO rrdtool graphs in the PVE web console.
> During these periods, access to the Zimbra Mail Server is slow and my
> users complain a lot...
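>
> (To watch this while it happens, standard tools can be used on both
> machines; a minimal example:)
>
> iostat -x 1   # on the storage server: per-disk utilization and await
> top           # on the PVE host: %CPU / %MEM of the kvm process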
>
> I did some research on Google and tried to improve GlusterFS
> performance.
> Here is what I am working with:
>
> gluster vol info
>
> Volume Name: storage
> Type: Distribute
> Volume ID: 183b40bc-9e1d-4f2c-a772-ec8e15367485
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: storage100:/data/volume
> Options Reconfigured:
> performance.write-behind-window-size: 1024MB
> performance.cache-size: 1GB
> performance.io-thread-count: 16
> performance.flush-behind: on
> performance.cache-refresh-timeout: 10
> performance.quick-read: off
> performance.read-ahead: enable
> network.ping-timeout: 2
> performance.cache-max-file-size: 4MB
> performance.md-cache-timeout: 1
> nfs.addr-namelookup: off
> performance.client-io-threads: on
> performance.nfs.io-cache: off
> performance.cache-priority: *.qcow2:1,*.raw:1
>
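> (These options were applied one at a time with "gluster volume set",
> for example:)
>
> gluster volume set storage performance.cache-size 1GB
> gluster volume set storage performance.io-thread-count 16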
>
> Is there anything else I can do to improve performance?
>
> I'd appreciate any advice!...
>
>
> Thanks a lot.
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36
