[PVE-User] I try hard but...

Gilberto Nunes gilberto.nunes32 at gmail.com
Thu Oct 29 19:17:27 CET 2015


Well, friends,
I am happy again... I don't know if this happiness will last forever;
only time will tell...
But now, with GlusterFS, the VM holds steady.
Previously, with NFS, iowait was always high, near 15-20%.
Now, with GlusterFS, I am transferring more than 40 GB from one
server to the VM, and iowait inside the VM holds at 4.50%...
We will see what happens next...
Thanks for all the help I received from you guys!
All my best!
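The iowait figures above come from the kernel's CPU accounting; a minimal sketch of how that percentage is derived, using two hypothetical /proc/stat "cpu" samples taken a moment apart (fields after "cpu": user nice system idle iowait irq softirq steal) — the same number top and vmstat report as "wa":

```shell
# Compute the iowait share of the interval between two CPU samples.
# The sample lines are hypothetical, for illustration only.
printf '%s\n%s\n' \
  "cpu 1000 0 500 8000 500 0 0 0" \
  "cpu 1100 0 550 8500 850 0 0 0" |
awk 'NR==1 { for (i = 2; i <= NF; i++) t1 += $i; w1 = $6 }
     NR==2 { for (i = 2; i <= NF; i++) t2 += $i; w2 = $6
             printf "iowait: %.1f%%\n", 100 * (w2 - w1) / (t2 - t1) }'
```

In practice one would feed real consecutive reads of /proc/stat (or just watch `vmstat 1`) instead of the hard-coded samples.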

2015-10-29 14:39 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> Oh God!
>
> I re-ran the last test with dd and the VM just died!!! So sad!!!!!!!! :(
>
> 2015-10-29 14:32 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>
>> Inside the guest, with GlusterFS I get this:
>>
>> dd if=/dev/zero of=writetest bs=8k count=131072
>>
>> 131072+0 records in
>> 131072+0 records out
>> 1073741824 bytes (1.1 GB) copied, 0.858862 s, 1.3 GB/s
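A 1.3 GB/s write on 7.2K RPM disks almost certainly reflects the guest page cache rather than the storage backend. A hedged sketch of a variant that forces the data to disk before dd reports (file name reused from the thread; size shrunk to 8 MiB so it runs quickly — scale count up for a real benchmark):

```shell
# conv=fsync makes dd flush before reporting, so the timing includes
# the actual write-out; oflag=direct (not used here, since it needs
# backend support for O_DIRECT) would bypass the cache entirely.
dd if=/dev/zero of=writetest bs=1M count=8 conv=fsync 2>/dev/null
wc -c < writetest   # bytes actually written
```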
>>
>>
>>
>>
>> 2015-10-29 13:57 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>>
>>> This could easily be a new thread, but let me say it anyway.
>>>
>>> I ran a simple dd test like this:
>>>
>>> dd if=/dev/zero of=writetest bs=8k count=131072
>>>
>>>
>>> With NFS:
>>>
>>> 131072+0 records in
>>> 131072+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 17.1806 s, 62.5 MB/s
>>>
>>> With GlusterFS:
>>>
>>> 131072+0 records in
>>> 131072+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 9.13338 s, 118 MB/s
>>>
>>> And with conv=fsync
>>>
>>> GlusterFS
>>>
>>> dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync
>>> 131072+0 records in
>>> 131072+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 11.0702 s, 97.0 MB/s
>>>
>>> NFS
>>>
>>> dd if=/dev/zero of=writetest bs=8k count=131072 conv=fsync
>>> 131072+0 records in
>>> 131072+0 records out
>>> 1073741824 bytes (1.1 GB) copied, 14.2901 s, 75.1 MB/s
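One variable worth isolating here is the block size: small blocks like bs=8k tend to penalize a network filesystem far more than a local disk. A sketch of rerunning the thread's test across block sizes (total kept at 8 MiB so it finishes quickly; scale count up and read the MB/s line dd prints on stderr for real numbers):

```shell
# Same total bytes at each block size, flushed to disk via conv=fsync.
for bs in 8192 65536 1048576; do
  count=$((8388608 / bs))
  dd if=/dev/zero of=writetest bs="$bs" count="$count" conv=fsync 2>/dev/null
  echo "bs=$bs bytes=$(wc -c < writetest)"
done
```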
>>>
>>> Both servers are PowerEdge R430s with 7.2K RPM SAS HDDs and 16 GB of
>>> memory, linked with a direct 1 Gb cable...
>>>
>>> GlusterFS won?!?!
>>>
>>> I would like to hear some opinions...
>>>
>>> Thanks a lot
>>>
>>>
>>> 2015-10-29 13:02 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>>>
>>>> It could be, but it could also be that with IDE or SATA the performance
>>>> is not the same as with VirtIO, or am I wrong???
>>>>
>>>> 2015-10-29 12:14 GMT-02:00 Lindsay Mathieson <
>>>> lindsay.mathieson at gmail.com>:
>>>>
>>>>>
>>>>> On 29 October 2015 at 23:09, Gilberto Nunes <
>>>>> gilberto.nunes32 at gmail.com> wrote:
>>>>>
>>>>>> The disk is VirtIO, alright,
>>>>>> because I need live migration and HA.
>>>>>> With IDE/SATA there is no way to do that, AFAIK!
>>>>>
>>>>>
>>>>> You can live migrate IDE/SATA
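For context, in Proxmox VE the disk bus is a per-disk setting in the VM's configuration file; a hypothetical excerpt (the VM ID, storage name, and size below are assumptions for illustration, not taken from the thread):

```
# /etc/pve/qemu-server/100.conf (hypothetical)
virtio0: gluster-store:vm-100-disk-1,size=32G
# the same disk attached via IDE instead would read:
# ide0: gluster-store:vm-100-disk-1,size=32G
```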
>>>>>
>>>>>
>>>>> --
>>>>> Lindsay
>>>>>
>>>>> _______________________________________________
>>>>> pve-user mailing list
>>>>> pve-user at pve.proxmox.com
>>>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Gilberto Ferreira
>>>> +55 (47) 9676-7530
>>>> Skype: gilberto.nunes36
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36