<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hi Michale,<br>
<br>
That would put the data at risk in case of a power failure, wouldn't it?
(yes, the server is UPS-backed... :) )<br>
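<br>
If it's only for a quick comparison, I suppose I could remount without
barriers, re-run pveperf and then switch them back on; a rough sketch
(using the mount point from my test quoted below, not a recommendation):<br>
<pre>
# remount the SSD filesystem without write barriers (unsafe on power loss!)
mount -o remount,barrier=0 /srv/storage-local-ssd
pveperf /srv/storage-local-ssd/

# restore the safe default afterwards
mount -o remount,barrier=1 /srv/storage-local-ssd
</pre>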
<br>
<br>
On 17/09/13 12:38, Michael Rasmussen wrote:<br>
</div>
<blockquote
cite="mid:e18eea0a-413c-4367-af46-bdd60508e8a4@email.android.com"
type="cite">Also barrier=0<br>
<br>
<div class="gmail_quote">Marco Gabriel - inett GmbH
<a class="moz-txt-link-rfc2396E" href="mailto:mgabriel@inett.de"><mgabriel@inett.de></a> wrote:
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt
0.8ex; border-left: 1px solid rgb(204, 204, 204);
padding-left: 1ex;">
<pre class="k9mail">For SSDs you should set the I/O scheduler to "NOOP". Proxmox uses Deadline by default, which is good for spinning disks, but not for SSDs.
Example (here the SSD is sdb):
Get currently used scheduler:
cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq
Set scheduler:
echo noop > /sys/block/sdb/queue/scheduler
best regards,
Marco
-----Original Message-----
From: <a class="moz-txt-link-abbreviated" href="mailto:pve-user-bounces@pve.proxmox.com">pve-user-bounces@pve.proxmox.com</a> [<a class="moz-txt-link-freetext" href="mailto:pve-user-bounces@pve.proxmox.com">mailto:pve-user-bounces@pve.proxmox.com</a>] On behalf of Eneko Lacunza
Sent: Tuesday, 17 September 2013 12:12
To: <a class="moz-txt-link-abbreviated" href="mailto:pve-user@pve.proxmox.com">pve-user@pve.proxmox.com</a>
Subject: [PVE-User] SSD Performance test
Hi all,
I'm doing some tests with an Intel SSD 320 300GB disk. This is a 3 Gbps disk with maximum ratings of 270/205 MB/s R/W and 39500/23000-400 IOPS.
The disk is attached to a Dell PERC H200 (LSI SAS2008) RAID controller with no RAID, no logical
volume and no cache, and it is mounted as "ext4 (rw,relatime,barrier=1,data=ordered)".
root@butroe:~# pveperf /srv/storage-local-ssd/
CPU BOGOMIPS: 36176.88
REGEX/SECOND: 761809
HD SIZE: 275.08 GB (/dev/sdc)
BUFFERED READS: 211.87 MB/sec
AVERAGE SEEK TIME: 0.23 ms
FSYNCS/SECOND: 1373.42
DNS EXT: 157.35 ms
I was expecting a better fsyncs/second value, having seen much better values on this list for spinning-disk RAIDs, and taking this drive's IOPS rating into account.
What do you think? Maybe it's the lack of a controller write cache that is hurting the fsyncs, and this value is as good as it gets with an H200 controller?
Thanks
Eneko
--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa) <a moz-do-not-send="true" href="http://www.binovo.es">www.binovo.es</a>
<hr>
pve-user mailing list
<a class="moz-txt-link-abbreviated" href="mailto:pve-user@pve.proxmox.com">pve-user@pve.proxmox.com</a>
<a moz-do-not-send="true" href="http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user">http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user</a>
</pre>
</blockquote>
</div>
<br>
-- <br>
Sent from my Android phone with K-9 Mail. Please excuse my
brevity.
</blockquote>
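<br>
Thanks for the scheduler tip, Marco. As far as I understand, the echo into
/sys/block/.../queue/scheduler only lasts until the next reboot, so if NOOP
helps I'd probably make it persistent with something like the udev rule
below (just a sketch on my side; the rule file name is arbitrary):<br>
<pre>
# /etc/udev/rules.d/60-ssd-scheduler.rules (file name chosen arbitrarily)
# switch non-rotational (SSD) disks to the noop elevator when they appear
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
</pre>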
<br>
<br>
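For what it's worth, to cross-check the pveperf FSYNCS/SECOND figure I will
probably also run a small synchronous 4k write test with fio, along these
lines (the test file name and size are just placeholders I picked):<br>
<pre>
# 4k sequential writes with an fsync after every write; roughly the kind of
# load the pveperf fsync test generates (file name/size are placeholders)
fio --name=fsync-test --ioengine=sync --rw=write --bs=4k --size=1G \
    --fsync=1 --filename=/srv/storage-local-ssd/fio-fsync-test
rm /srv/storage-local-ssd/fio-fsync-test
</pre>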
<pre class="moz-signature" cols="72">--
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943575997
943493611
Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
<a class="moz-txt-link-abbreviated" href="http://www.binovo.es">www.binovo.es</a></pre>
</body>
</html>