[PVE-User] Shared storage on NAS speed - LVM(over iSCSI) vs NFS
Mikhail
m at plus-plus.su
Wed Jul 19 14:10:22 CEST 2017
On 07/19/2017 02:43 PM, Yannis Milios wrote:
> Have you checked if these drives are properly aligned, sometimes that can
> cause low r/w performance.
> Is there any particular reason you use mdadm instead of h/w raid controller?
Hello Yannis
There's no h/w raid controller because we originally wanted to run ZFS on
that storage server. I wanted to use OmniOS as the base OS, but at the
time (about ~15 months ago) OmniOS had no kernel drivers for the Intel
X550 10GbE NICs in that server, so we had to fall back to Linux. As you
know, ZFS works best when it has direct access to the drives, without a
hardware RAID layer in between.
The mdadm RAID10 array was created without specifying any special
alignment options. What's the best way to check whether the drives are
properly aligned on the existing array?
Here's what I can see now:
1) fdisk output for one of the disks in the array:
# fdisk -l /dev/sda
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 482FED1A-9CD0-4AEF-ACFC-D981C9916FE2
Device       Start         End     Sectors  Size Type
/dev/sda1     2048     1953791     1951744  953M Linux filesystem
/dev/sda2  1953792  7814035455  7812081664  3.7T Linux RAID
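Doing the math on that table: /dev/sda2 starts at sector 1953792 = 954 * 2048,
i.e. exactly on a 1 MiB boundary, so the partition layer itself should be fine
for the 4K physical sectors. If parted is available on that box, it can
double-check this for both partitions (correct me if this is not the right
way to verify it):

# parted /dev/sda align-check optimal 1
# parted /dev/sda align-check optimal 2

Each command should report whether the given partition is optimally aligned.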
2) MDADM array details:
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Mar 18 18:27:06 2016
Raid Level : raid10
Array Size : 7811819520 (7449.93 GiB 7999.30 GB)
Used Dev Size : 3905909760 (3724.97 GiB 3999.65 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Jul 19 14:58:57 2017
State : active, checking
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Check Status : 43% complete
Name : storage:0 (local to host storage)
UUID : 7346ef36:0a6b33f6:37eb29cd:58d04b7c
Events : 1010431
Number   Major   Minor   RaidDevice State
   0       8        2        0      active sync set-A   /dev/sda2
   1       8       18        1      active sync set-B   /dev/sdb2
   2       8       34        2      active sync set-A   /dev/sdc2
   3       8       50        3      active sync set-B   /dev/sdd2
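If I understand the alignment question correctly, the partition start is only
half of it; with 1.2 superblocks the array data begins at the member's Data
Offset. Something like this should show it (the grep pattern is just my guess
at filtering the relevant lines):

# mdadm --examine /dev/sda2 | grep -i offset

If the reported Data Offset (in 512-byte sectors) is a multiple of 8, the md
data should also start on a 4K boundary.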
3) LVM information for the PV that resides on the md0 array:
# pvdisplay
--- Physical volume ---
PV Name /dev/md0
VG Name vg0
PV Size 7.28 TiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 1907182
Free PE 182430
Allocated PE 1724752
PV UUID CefFFF-Q6yz-eX2p-Ziev-jdFW-3G6h-vHaesD
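For the LVM layer, the first physical extent offset can be checked as well;
pvs has a pe_start field for that:

# pvs -o pv_name,pe_start --units s /dev/md0

The default pe_start of 1 MiB (2048 sectors) would line up with both the 4K
sectors and the 512K chunk size, if I'm not mistaken.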
The mdadm array is running a check right now, with the speed limits at
their defaults:
# cat /proc/sys/dev/raid/speed_limit_max
200000
# cat /proc/sys/dev/raid/speed_limit_min
1000
# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
7811819520 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[========>............] check = 43.6% (3412377088/7811819520) finish=17599.2min speed=4165K/sec
bitmap: 16/59 pages [64KB], 65536KB chunk
unused devices: <none>
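At ~4 MB/s the check needs roughly 12 more days. As far as I know, the two
sysctls above trade check duration against foreground I/O: raising
speed_limit_min lets the check run faster even under load, while lowering
speed_limit_max throttles it further to protect VM traffic. For example
(50000 KB/s is just an illustrative value):

# sysctl -w dev.raid.speed_limit_min=50000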
Thanks for your help!