[PVE-User] I/O issues
Shain Miley
smiley at npr.org
Fri Jun 26 18:01:58 CEST 2009
Just an FYI...
I installed the new PERC RAID cards and I'm getting better numbers for the
'FSYNCS/SECOND' test now.
proxmox:/home/smiley# pveperf
CPU BOGOMIPS: 42564.48
REGEX/SECOND: 671808
HD SIZE: 68.41 GB (/dev/pve/root)
BUFFERED READS: 141.97 MB/sec
AVERAGE SEEK TIME: 4.98 ms
FSYNCS/SECOND: 2545.27
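For anyone curious what the FSYNCS/SECOND figure represents: it is essentially how many fsync() calls the storage stack can complete per second, which is why a battery-backed write cache on a controller like the PERC makes such a dramatic difference. Below is a minimal sketch of that kind of test (an assumption about the general technique, not pveperf's actual code):

```python
# Rough sketch of an FSYNCS/SECOND-style benchmark: repeatedly write a
# small block and fsync() it, then report completed fsyncs per second.
# This is illustrative only; pveperf's real implementation may differ.
import os
import tempfile
import time

def fsyncs_per_second(duration=1.0):
    """Return the approximate number of fsync() calls completed per second."""
    fd, path = tempfile.mkstemp()
    try:
        data = b"x" * 4096  # one small block per iteration
        count = 0
        start = time.monotonic()
        while time.monotonic() - start < duration:
            os.pwrite(fd, data, 0)
            os.fsync(fd)  # force the block to stable storage
            count += 1
        return count / (time.monotonic() - start)
    finally:
        os.close(fd)
        os.unlink(path)

if __name__ == "__main__":
    print(f"FSYNCS/SECOND: {fsyncs_per_second():.2f}")
```

Without a write-back cache, each fsync() waits on a physical disk commit, which is why plain 10K drives behind a simple HBA can land in the low hundreds while a caching controller reports thousands.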
Shain
Shain Miley wrote:
> Well the version does not seem to make a difference much at all:
>
> proxmox_1.1:/# pveperf
> CPU BOGOMIPS: 42564.52
> REGEX/SECOND: 739582
> HD SIZE: 68.41 GB (/dev/pve/root)
> BUFFERED READS: 122.05 MB/sec
> AVERAGE SEEK TIME: 5.05 ms
> FSYNCS/SECOND: 150.74
>
>
> proxmox_1.3:/# pveperf
> CPU BOGOMIPS: 42564.51
> REGEX/SECOND: 663579
> HD SIZE: 68.41 GB (/dev/pve/root)
> BUFFERED READS: 112.45 MB/sec
> AVERAGE SEEK TIME: 5.08 ms
> FSYNCS/SECOND: 153.96
>
> I ran pveperf on another machine and I got these numbers:
>
> CPU BOGOMIPS: 39904.42
> REGEX/SECOND: 797648
> HD SIZE: 130.32 GB (/dev/sda2)
> BUFFERED READS: 120.10 MB/sec
> AVERAGE SEEK TIME: 5.33 ms
> FSYNCS/SECOND: 3069.82
>
> It looks like the slower machines are using:
>
> 300G, Serial Attached SCSI, 10K, 2.5, Seagate Firefly
>
> and the faster ones (FSYNCS/SECOND) are using these:
>
> 146G, Serial Attached SCSI, 15K, 3.5, Seagate, 15K
>
> The faster systems are using a different RAID controller and the
> megaraid_sas driver (vs. mptsas in the slower systems) as well...so I can
> only really wait on the PERC 6I RAID card to get here and test it.
>
> Now I am wondering if it is not the RAID card but the drives that are
> the issue...or a combination of both. Should 10K SAS drives give
> better numbers than 'FSYNCS/SECOND: 153.96'?
>
>
> Thanks,
>
> Shain
>
>
>
>
> Martin Maurer wrote:
>>> No, these are not SSDs. They are SAS drives. I am going to try an
>>> install of 1.1 on one of these DELLs, then upgrade to 1.2 and then to
>>> 1.3, and run pveperf each time. I found another machine running v1.1 with
>>> the same RAID card and the numbers are not as bad as the ones from 1.3. I
>>> am wondering if some change in the kernel is responsible for the
>>> poor performance.
>>>
>>> I will post the numbers in a few hours so I can get some feedback.
>>>
>>> Thanks,
>>>
>>> Shain
>>>
>>
>>
>> I just wanted to show you the results you can get with an SSD, just FYI -
>> our results with the Samsung SSD.
>> Sorry for the unclear comment,
>>
>> Br, martin
>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
>
>