[PVE-User] Very slow install of applications in Windows 2008R2 VM on Proxmox - What is the cause?
Bruce B
bruceb444 at gmail.com
Tue Sep 24 01:08:02 CEST 2013
And does this strike you as odd? I have never seen this on another Proxmox host:
root@hp:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 4.8G 304K 4.8G 1% /run
/dev/mapper/pve-root 95G 1.4G 89G 2% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 9.5G 22M 9.5G 1% /run/shm
/dev/mapper/pve-data 302G 59G 243G 20% /var/lib/vz
/dev/sda1 495M 56M 415M 12% /boot
/dev/fuse 30M 40K 30M 1% /etc/pve
/var/lib/vz/private/40020 150G 1.2G 149G 1% /var/lib/vz/root/40020
none 4.0G 8.0K 4.0G 1% /var/lib/vz/root/40020/dev
/var/lib/vz/private/40100 80G 932M 80G 2% /var/lib/vz/root/40100
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40100/dev
/var/lib/vz/private/40150 80G 929M 80G 2% /var/lib/vz/root/40150
none 1.0G 4.0K 1.0G 1% /var/lib/vz/root/40150/dev
/var/lib/vz/private/40101 80G 932M 80G 2% /var/lib/vz/root/40101
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40101/dev
/var/lib/vz/private/40103 80G 930M 80G 2% /var/lib/vz/root/40103
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40103/dev
/var/lib/vz/private/40104 80G 930M 80G 2% /var/lib/vz/root/40104
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40104/dev
/var/lib/vz/private/40105 80G 930M 80G 2% /var/lib/vz/root/40105
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40105/dev
/var/lib/vz/private/40106 80G 930M 80G 2% /var/lib/vz/root/40106
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40106/dev
/var/lib/vz/private/40107 80G 930M 80G 2% /var/lib/vz/root/40107
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40107/dev
/var/lib/vz/private/40102 80G 1005M 80G 2% /var/lib/vz/root/40102
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40102/dev
/var/lib/vz/private/40011 80G 1.5G 79G 2% /var/lib/vz/root/40011
none 2.0G 4.0K 2.0G 1% /var/lib/vz/root/40011/dev
none 2.0G 0 2.0G 0% /var/lib/vz/root/40011/dev/shm
/var/lib/vz/private/40111 80G 930M 80G 2% /var/lib/vz/root/40111
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40111/dev
/var/lib/vz/private/40112 80G 929M 80G 2% /var/lib/vz/root/40112
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40112/dev
/var/lib/vz/private/40113 80G 929M 80G 2% /var/lib/vz/root/40113
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40113/dev
/var/lib/vz/private/40115 80G 930M 80G 2% /var/lib/vz/root/40115
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40115/dev
/var/lib/vz/private/40116 80G 929M 80G 2% /var/lib/vz/root/40116
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40116/dev
/var/lib/vz/private/40117 80G 929M 80G 2% /var/lib/vz/root/40117
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40117/dev
/var/lib/vz/private/40118 80G 930M 80G 2% /var/lib/vz/root/40118
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40118/dev
/var/lib/vz/private/40010 20G 759M 20G 4% /var/lib/vz/root/40010
none 512M 4.0K 512M 1% /var/lib/vz/root/40010/dev
/var/lib/vz/private/40109 80G 931M 80G 2% /var/lib/vz/root/40109
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40109/dev
/var/lib/vz/private/40110 80G 930M 80G 2% /var/lib/vz/root/40110
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40110/dev
/var/lib/vz/private/40114 80G 930M 80G 2% /var/lib/vz/root/40114
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40114/dev
/var/lib/vz/private/40119 80G 929M 80G 2% /var/lib/vz/root/40119
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40119/dev
/var/lib/vz/private/40108 80G 930M 80G 2% /var/lib/vz/root/40108
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40108/dev
/var/lib/vz/private/40120 80G 929M 80G 2% /var/lib/vz/root/40120
none 1.0G 8.0K 1.0G 1% /var/lib/vz/root/40120/dev
Thanks,
On Mon, Sep 23, 2013 at 11:55 AM, Bruce B <bruceb444 at gmail.com> wrote:
> Eneko,
>
> The VMs are off and the results are below. I think they were off before
> too. (I have some CentOS containers that are running which I can't turn
> off - production!)
>
> CPU BOGOMIPS: 72530.72
> REGEX/SECOND: 589160
> HD SIZE: 94.49 GB (/dev/mapper/pve-root)
> BUFFERED READS: 100.20 MB/sec
> AVERAGE SEEK TIME: 11.14 ms
> FSYNCS/SECOND: 19.79
> DNS EXT: 74.88 ms
>
> I am feeling the pain on Windows big time, but nothing bad on the
> containers. *So far we don't have a conclusion on whether it's a kernel
> issue, an HDD issue, or a controller issue, right?*
>
> The info you asked for is below:
>
> root@hp:~# lspci
> 00:00.0 Host bridge: Intel Corporation 5520 I/O Hub to ESI Port (rev 13)
> 00:01.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express
> Root Port 1 (rev 13)
> 00:03.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express
> Root Port 3 (rev 13)
> 00:07.0 PCI bridge: Intel Corporation 5520/5500/X58 I/O Hub PCI Express
> Root Port 7 (rev 13)
> 00:09.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI
> Express Root Port 9 (rev 13)
> 00:0a.0 PCI bridge: Intel Corporation 7500/5520/5500/X58 I/O Hub PCI
> Express Root Port 10 (rev 13)
> 00:14.0 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub System
> Management Registers (rev 13)
> 00:14.1 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub GPIO and Scratch
> Pad Registers (rev 13)
> 00:14.2 PIC: Intel Corporation 7500/5520/5500/X58 I/O Hub Control Status
> and RAS Registers (rev 13)
> 00:1a.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI
> Controller #4
> 00:1a.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI
> Controller #2
> 00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express
> Root Port 1
> 00:1c.4 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express
> Root Port 5
> 00:1d.0 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI
> Controller #1
> 00:1d.1 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI
> Controller #2
> 00:1d.2 USB controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI
> Controller #3
> 00:1d.7 USB controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI
> Controller #1
> 00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
> 00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface
> Controller
> 00:1f.2 SATA controller: Intel Corporation 82801JI (ICH10 Family) SATA
> AHCI Controller
> 02:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA
> G200e [Pilot] ServerEngines (SEP1) (rev 02)
> 05:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network
> Connection (rev 01)
> 05:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network
> Connection (rev 01)
>
> Regards,
>
>
>
> On Mon, Sep 23, 2013 at 11:09 AM, Eneko Lacunza <elacunza at binovo.es> wrote:
>
>> Hi Bruce,
>>
>> pveperf is quite bad. From my limited experience, a typical 7200 rpm
>> SATA drive gives ~60 fsyncs/s and >100 MB/sec buffered reads. Your average
>> seek time is very bad too (~13 ms is normal for a 7200 rpm drive). If you
>> had VMs running, please stop them all and rerun the command.
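A rough way to sanity-check that FSYNCS/SECOND figure outside pveperf is a short dd run with oflag=dsync, which forces every block to be synced to disk. This is only a sketch (it is not pveperf's exact method, and the /tmp probe path is just an example):

```shell
# Rough stand-in for pveperf's FSYNCS/SECOND (not pveperf's exact method):
# write small blocks with oflag=dsync so each one is synced to disk,
# then derive a syncs-per-second figure from the elapsed time.
count=50
probe=/tmp/fsync_probe.$$
start=$(date +%s.%N)
dd if=/dev/zero of="$probe" bs=4k count=$count oflag=dsync 2>/dev/null
end=$(date +%s.%N)
rm -f "$probe"
awk -v c="$count" -v s="$start" -v e="$end" \
    'BEGIN { printf "%.1f syncs/sec\n", c/(e-s) }'
```

On a healthy 7200 rpm disk this should land somewhere near the ~60/s figure above; numbers in the teens, as seen here, point at the disk or controller.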
>>
>> You shouldn't use this server for virtualization until this problem is
>> fixed (you're already feeling the pain, eh?).
>>
>> What hard disk controller do you have? ('lspci')
>>
>>
>> On 23/09/13 16:30, Bruce B wrote:
>>
>> Eneko,
>>
>> Thanks for the feedback. It seems that the whole Windows system is
>> slow. It happens when loading applications, and when loading the Start menu,
>> for example. So if I am understanding this right, virtio drivers which are
>> installed after Windows is installed may not help me a lot? Please correct
>> me if I am wrong. Also, how can I set up a virtio disk to test it?
>>
>> *Below are the results of pveperf. Is this very bad?*
>>
>> root@hp:~# pveperf
>> CPU BOGOMIPS: 72530.72
>> REGEX/SECOND: 583443
>> HD SIZE: 94.49 GB (/dev/mapper/pve-root)
>> BUFFERED READS: 61.13 MB/sec
>> AVERAGE SEEK TIME: 29.30 ms
>> FSYNCS/SECOND: 9.63
>> DNS EXT: 70.07 ms
>>
>> Regards,
>>
>>
>> On Mon, Sep 23, 2013 at 3:00 AM, Eneko Lacunza <elacunza at binovo.es> wrote:
>>
>>> Hi Bruce,
>>>
>>> pveperf on the disk (/)?
>>>
>>> If you haven't already, I think it will help you a lot to install the
>>> virtio drivers on the Windows guest and then change the VM disks from IDE
>>> to virtio.
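A possible CLI sketch of that switch, once the virtio drivers are installed in the guest. The VM ID (101) and the disk volume name are placeholders, and the same change can also be made from the web GUI, so treat this as an illustration rather than exact commands for this host:

```shell
# Hypothetical example: reattach an existing qcow2 image as virtio
# instead of IDE. VM ID 101 and the volume name are placeholders.
qm stop 101
qm set 101 --delete ide0                              # detach the IDE disk
qm set 101 --virtio0 local:101/vm-101-disk-1.qcow2    # reattach as virtio
qm set 101 --boot c --bootdisk virtio0                # boot from the virtio disk
qm start 101
```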
>>>
>>>
>>> On 22/09/13 22:20, Bruce B wrote:
>>>
>>> Thanks for the feedback, Krzysztof and Alexandre. Below is the info:
>>>
>>> I am using 1x 500GB WD HDD. I can add another one if that helps -
>>> something like a WD5001AALS. Would that help? Where do you read the IOPS,
>>> and what is a good number of IOPS today?
>>>
>>> For the VM I am using local qcow2 storage - I'm not sure how virtio disks work.
>>>
>>> Hoping the following info helps you tell me whether I am hitting a
>>> controller bottleneck (meaning I can't help it) or whether it is an HDD problem:
>>>
>>> *-storage
>>> description: SATA controller
>>> product: 82801JI (ICH10 Family) SATA AHCI Controller
>>> vendor: Intel Corporation
>>> physical id: 1f.2
>>> bus info: pci at 0000:00:1f.2
>>> logical name: scsi0
>>> version: 00
>>> width: 32 bits
>>> clock: 66MHz
>>> capabilities: storage msi pm ahci_1.0 bus_master cap_list
>>> emulated
>>> configuration: driver=ahci latency=0
>>> resources: irq:50 ioport:d880(size=8) ioport:d800(size=4)
>>> ioport:d480(size=8) ioport:d400(size=4) ioport:d080(size=32)
>>> memory:faffc000-faffc7ff
>>> *-disk
>>> description: ATA Disk
>>> product: WDC WD5001AALS-0
>>> vendor: Western Digital
>>> physical id: 0.0.0
>>> bus info: scsi at 0:0.0.0
>>> logical name: /dev/sda
>>> version: 05.0
>>> serial: WD-WCATR2413417
>>> size: 465GiB (500GB)
>>> capabilities: partitioned partitioned:dos
>>> configuration: ansiversion=5 sectorsize=512
>>> signature=00064f12
>>> *-volume:0
>>> description: EXT3 volume
>>> vendor: Linux
>>> physical id: 1
>>> bus info: scsi at 0:0.0.0,1
>>> logical name: /dev/sda1
>>> logical name: /boot
>>> version: 1.0
>>> serial: 8fe2447e-4258-4d39-b7c7-450b66460abf
>>> size: 511MiB
>>> capacity: 511MiB
>>> capabilities: primary bootable journaled
>>> extended_attributes recover ext3 ext2 initialized
>>> configuration: created=2013-08-06 16:38:36
>>> filesystem=ext3 modified=2013-08-09 17:14:18 mount.fstype=ext3
>>> mount.options=rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered
>>> mounted=2013-08-09 17:14:18 state=mounted
>>> *-volume:1
>>> description: Linux LVM Physical Volume partition
>>> physical id: 2
>>> bus info: scsi at 0:0.0.0,2
>>> logical name: /dev/sda2
>>> serial: pqnaJf-WL5Q-kz2z-a3CJ-f6rS-Jxz4-LICwof
>>> size: 465GiB
>>> capacity: 465GiB
>>> capabilities: primary multi lvm2
>>>
>>>
>>>
>>> Thanks,
>>>
>>> On Sun, Sep 22, 2013 at 6:07 AM, Krzysztof Bloniarz <kb0spam at gmail.com> wrote:
>>>
>>>> Hi Bruce,
>>>>
>>>> Could you confirm that you are using one 500GB SATA drive as your
>>>> storage? How many VMs are running on this drive?
>>>>
>>>> This SATA drive is capable of 60, maybe 70 IOPS; you can easily
>>>> saturate it just installing Windows apps, particularly if you run several
>>>> VMs simultaneously on that drive.
>>>>
>>>> To solve your performance problems you will have to build a RAID
>>>> array and add more spindles.
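The 60-70 IOPS figure can be checked with back-of-envelope arithmetic: random IOPS on a spinning disk is roughly 1 / (average seek time + half a rotation). The ~8.9 ms seek time below is an assumed, illustrative figure for a 7200 rpm desktop drive, not a measured value for this disk:

```shell
# Back-of-envelope random-IOPS estimate for a 7200 rpm SATA disk.
# seek_ms is an assumed average seek time (illustrative, not measured).
seek_ms=8.9
# Half a rotation at 7200 rpm: 60000 ms / 7200 rev / 2 = ~4.17 ms
half_rot_ms=$(awk 'BEGIN { printf "%.2f", 60000/7200/2 }')
iops=$(awk -v s="$seek_ms" -v r="$half_rot_ms" 'BEGIN { printf "%d", 1000/(s+r) }')
echo "~${iops} IOPS"   # prints ~76 IOPS
```

That lands right in the 60-80 IOPS range quoted above, which one VM's install workload can saturate on its own.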
>>>>
>>>> Regards,
>>>> KB
>>>>
>>>>
>>>>
>>>> On Fri, Sep 20, 2013 at 8:33 PM, Bruce B <bruceb444 at gmail.com> wrote:
>>>>
>>>>> Hi Everyone,
>>>>>
>>>>> I am seeing very slow installs of applications within a Windows
>>>>> 2008 R2 VM that I built with 24 GB of RAM (no users on it yet), and the
>>>>> Proxmox server is a DL160 G6 with dual L5520 quad-core Xeon CPUs. I don't
>>>>> see why this is acting so slow. I am looking for suggestions on how to make
>>>>> this work faster.
>>>>>
>>>>> Below are my findings of I/O stats and HDD specifications. I would
>>>>> like to know if there is any hope for this server. I am running Windows
>>>>> 2008 R2 on IDE0 with a qcow2 disk image.
>>>>>
>>>>> *root@hp:~# iostat -xkd 2 (%util shows over 97% below while a
>>>>> program is being installed)*
>>>>> Linux 2.6.32-22-pve (hp)     09/20/2013     _x86_64_    (16 CPU)
>>>>>
>>>>> Device:  rrqm/s  wrqm/s   r/s    w/s   rkB/s    wkB/s  avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
>>>>> sda        0.00   13.50  0.00  94.50    0.00  9023.25    190.97     2.41  25.43    0.00   25.43  10.32  97.50
>>>>> dm-0       0.00    0.00  0.00  13.00    0.00    52.00      8.00     0.70  54.15    0.00   54.15   6.77   8.80
>>>>> dm-1       0.00    0.00  0.00   0.00    0.00     0.00      0.00     0.00   0.00    0.00    0.00   0.00   0.00
>>>>> dm-2       0.00    0.00  0.00  95.00    0.00  8913.25    187.65     2.29  24.12    0.00   24.12  10.24  97.30
>>>>>
>>>>>
>>>>> *hdparm output:*
>>>>>
>>>>>
>>>>> ATA device, with non-removable media
>>>>> Model Number: WDC WD5001AALS-00E3A0
>>>>> Serial Number: WD-WCATR2413417
>>>>> Firmware Revision: 05.01D05
>>>>> Transport: Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6
>>>>> Standards:
>>>>> Supported: 8 7 6 5
>>>>> Likely used: 8
>>>>> Configuration:
>>>>> Logical max current
>>>>> cylinders 16383 16383
>>>>> heads 16 16
>>>>> sectors/track 63 63
>>>>> --
>>>>> CHS current addressable sectors: 16514064
>>>>> LBA user addressable sectors: 268435455
>>>>> LBA48 user addressable sectors: 976773168
>>>>> Logical/Physical Sector size: 512 bytes
>>>>> device size with M = 1024*1024: 476940 MBytes
>>>>> device size with M = 1000*1000: 500107 MBytes (500 GB)
>>>>> cache/buffer size = unknown
>>>>> Capabilities:
>>>>> LBA, IORDY(can be disabled)
>>>>> Queue depth: 32
>>>>> Standby timer values: spec'd by Standard, with device specific minimum
>>>>> R/W multiple sector transfer: Max = 16 Current = 0
>>>>> Recommended acoustic management value: 128, current value: 254
>>>>> DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
>>>>> Cycle time: min=120ns recommended=120ns
>>>>> PIO: pio0 pio1 pio2 pio3 pio4
>>>>> Cycle time: no flow control=120ns IORDY flow control=120ns
>>>>> Commands/features:
>>>>> Enabled Supported:
>>>>> * SMART feature set
>>>>> Security Mode feature set
>>>>> * Power Management feature set
>>>>> Write cache
>>>>> * Look-ahead
>>>>> * Host Protected Area feature set
>>>>> * WRITE_BUFFER command
>>>>> * READ_BUFFER command
>>>>> * NOP cmd
>>>>> * DOWNLOAD_MICROCODE
>>>>> Power-Up In Standby feature set
>>>>> * SET_FEATURES required to spinup after power up
>>>>> SET_MAX security extension
>>>>> Automatic Acoustic Management feature set
>>>>> * 48-bit Address feature set
>>>>> * Device Configuration Overlay feature set
>>>>> * Mandatory FLUSH_CACHE
>>>>> * FLUSH_CACHE_EXT
>>>>> * SMART error logging
>>>>> * SMART self-test
>>>>> * General Purpose Logging feature set
>>>>> * 64-bit World wide name
>>>>> * {READ,WRITE}_DMA_EXT_GPL commands
>>>>> * Segmented DOWNLOAD_MICROCODE
>>>>> * Gen1 signaling speed (1.5Gb/s)
>>>>> * Gen2 signaling speed (3.0Gb/s)
>>>>> * Native Command Queueing (NCQ)
>>>>> * Host-initiated interface power management
>>>>> * Phy event counters
>>>>> * NCQ priority information
>>>>> * DMA Setup Auto-Activate optimization
>>>>> * Software settings preservation
>>>>> * SMART Command Transport (SCT) feature set
>>>>> * SCT Long Sector Access (AC1)
>>>>> * SCT LBA Segment Access (AC2)
>>>>> * SCT Features Control (AC4)
>>>>> * SCT Data Tables (AC5)
>>>>> unknown 206[12] (vendor specific)
>>>>> unknown 206[13] (vendor specific)
>>>>> Security:
>>>>> Master password revision code = 65534
>>>>> supported
>>>>> not enabled
>>>>> not locked
>>>>> not frozen
>>>>> not expired: security count
>>>>> supported: enhanced erase
>>>>> 102min for SECURITY ERASE UNIT. 102min for ENHANCED SECURITY ERASE UNIT.
>>>>> Logical Unit WWN Device Identifier: 50014ee2af8fec40
>>>>> NAA : 5
>>>>> IEEE OUI : 0014ee
>>>>> Unique ID : 2af8fec40
>>>>>
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> pve-user mailing list
>>>>> pve-user at pve.proxmox.com
>>>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Zuzendari Teknikoa / Director Técnico
>>> Binovo IT Human Project, S.L.
>>> Telf. 943575997
>>> 943493611
>>> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
>>> www.binovo.es
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Zuzendari Teknikoa / Director Técnico
>> Binovo IT Human Project, S.L.
>> Telf. 943575997
>> 943493611
>> Astigarraga bidea 2, planta 6 dcha., ofi. 3-2; 20180 Oiartzun (Gipuzkoa)
>> www.binovo.es
>>
>>
>