<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html>
<head>
<meta name="Generator" content="Zarafa WebAccess v6.40.3-23410">
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
<title>RE: pve-user Digest, Vol 60, Issue 23</title>
<style type="text/css">
body
{
font-family: Arial, Verdana, Sans-Serif;
font-size: 12px;
padding: 5px 5px 5px 5px;
margin: 0px;
border-style: none;
background-color: #ffffff;
}
p, ul, li
{
margin-top: 0px;
margin-bottom: 0px;
}
</style>
</head>
<body>
<p>When I try to migrate a container from one node to another, I get this error:<br /><br />Mar 22 17:03:05 starting migration of CT 110 to node 'delta' (192.168.104.27)<br />Mar 22 17:03:05 container data is on shared storage 'local'<br />Mar 22 17:03:05 dump 2nd level quota<br />Mar 22 17:03:05 initialize container on remote node 'delta'<br />Mar 22 17:03:05 initializing remote quota<br />Mar 22 17:03:05 # /usr/bin/ssh -o 'BatchMode=yes' root@192.168.104.27 vzctl quotainit 110<br />Mar 22 17:03:05 vzquota : (error) quota check : stat /var/lib/vz/private/110: No such file or directory<br />Mar 22 17:03:05 ERROR: Failed to initialize quota: vzquota init failed [1]<br />Mar 22 17:03:05 start final cleanup<br />Mar 22 17:03:05 ERROR: migration finished with problems (duration 00:00:00)<br />TASK ERROR: migration problems<br /><br />What should I do to migrate this container?<br /> </p> Ernesto Suárez Ojeda<br /> Especialista B en Ciencias Informáticas<br /> Contraloría Provincial Matanzas<br /><br /><p> <br /> </p><blockquote style="border-left: 2px solid #325FBA; padding-left: 5px;margin-left:5px;">-----Original message-----<br /><strong>To:</strong> pve-user@pve.proxmox.com; <br /><strong>From:</strong> pve-user-request@pve.proxmox.com<br /><strong>Sent:</strong> Fri 22-03-2013 08:27<br /><strong>Subject:</strong> pve-user Digest, Vol 60, Issue 23<br />Send pve-user mailing list submissions to<br /> pve-user@pve.proxmox.com<br /><br />To subscribe or unsubscribe via the World Wide Web, visit<br /> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user<br />or, via email, send a message with subject or body 'help' to<br /> pve-user-request@pve.proxmox.com<br /><br />You can reach the person managing the list at<br /> pve-user-owner@pve.proxmox.com<br /><br />When replying, please edit your Subject line so it is more specific<br />than "Re: Contents of pve-user digest..."<br /><br /><br />Today's Topics:<br /><br /> 1. 
Crash after Upgrade PVE2.3 (Martin Schuchmann)<br /> 2. Re: Error openvz with gitlab "running (failure count 20)"<br /> (Knaupp, Thomas)<br /> 3. Re: Error openvz with gitlab "running (failure count 20)"<br /> (maykel@maykel.sytes.net)<br /> 4. Re: Crash after Upgrade PVE2.3 / Cron backup crashes with<br /> 2.6.32-19 but not with 2.6.32-17 (Martin Schuchmann)<br /><br /><br />----------------------------------------------------------------------<br /><br />Message: 1<br />Date: Thu, 21 Mar 2013 12:46:56 +0100<br />From: Martin Schuchmann <ms@city-pc.de><br />To: pve-user@pve.proxmox.com<br />Subject: [PVE-User] Crash after Upgrade PVE2.3<br />Message-ID: <514AF330.7090700@city-pc.de><br />Content-Type: text/plain; charset=ISO-8859-15; format=flowed<br /><br />Hi there,<br /><br />Yesterday I upgraded from 2.2 to 2.3 (pveversion output below) on all three nodes of our cluster (no HA).<br />At 23:00 the usual backup of a KVM machine (801) started via vzdump.cron on Node 3 and ended with errors (see syslog below).<br /><br />After this crash the VMs on Node 3 and the web interface were no longer reachable.<br /><br />We restarted pvedaemon and pvestatd and were able to reach the web interface again.<br /><br />We tried to stop the VMs, but the "vzctl stop xxx" processes remained in the process list; even kill -9 could not remove them.<br />"reboot" via SSH failed as well - we had to execute "echo b > /proc/sysrq-trigger" to restart the host.<br /><br />After the reboot everything was fine and the VMs started again.<br /><br />The two other nodes (not rebooted) still show an issue in syslog:<br /><br />Mar 21 12:09:18 promo2 pvestatd[101835]: WARNING: command 'df -P -B 1 /mnt/pve/p3_storage' failed: got timeout<br /><br />On the shell, however, "df -P -B 1 /mnt/pve/p3_storage" works fine on each of the three hosts.<br /><br /><br />Had this heavy backup issue been reported earlier?<br />Any hints to 
prevent from that?<br /><br />Regards, Martin<br /><br /><br />pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)<br />running kernel: 2.6.32-19-pve<br />proxmox-ve-2.6.32: 2.3-93<br />pve-kernel-2.6.32-10-pve: 2.6.32-63<br />pve-kernel-2.6.32-19-pve: 2.6.32-93<br />pve-kernel-2.6.32-17-pve: 2.6.32-83<br />lvm2: 2.02.95-1pve2<br />clvm: 2.02.95-1pve2<br />corosync-pve: 1.4.4-4<br />openais-pve: 1.1.4-2<br />libqb: 0.10.1-2<br />redhat-cluster-pve: 3.1.93-2<br />resource-agents-pve: 3.9.2-3<br />fence-agents-pve: 3.1.9-1<br />pve-cluster: 1.0-36<br />qemu-server: 2.3-18<br />pve-firmware: 1.0-21<br />libpve-common-perl: 1.0-49<br />libpve-access-control: 1.0-26<br />libpve-storage-perl: 2.3-6<br />vncterm: 1.0-3<br />vzctl: 4.0-1pve2<br />vzprocps: 2.0.11-2<br />vzquota: 3.1-1<br />pve-qemu-kvm: 1.4-8<br />ksm-control-daemon: 1.1-1<br /><br /><br />Mar 20 23:00:01 promo3 /USR/SBIN/CRON[150583]: (root) CMD (vzdump 801 <br />306 --quiet 1 --mode snapshot --compress lzo --storage p2_storage)<br />Mar 20 23:00:02 promo3 vzdump[150584]: <root@pam> starting task <br />UPID:promo3:00024C3A:00785E0A:514A3162:vzdump::root@pam:<br />Mar 20 23:00:02 promo3 vzdump[150586]: INFO: starting new backup job: <br />vzdump 801 306 --quiet 1 --mode snapshot --compress lzo --storage p2_storage<br />Mar 20 23:00:02 promo3 vzdump[150586]: INFO: Starting Backup of VM 306 <br />(openvz)<br />Mar 20 23:00:31 promo3 pvestatd[2328]: WARNING: unable to connect to VM <br />801 socket - timeout after 31 retries<br />...<br />Mar 20 23:03:11 promo3 pvestatd[2328]: WARNING: unable to connect to VM <br />801 socket - timeout after 31 retries<br />Mar 20 23:03:18 promo3 kernel: INFO: task kvm:2585 blocked for more than <br />120 seconds.<br />Mar 20 23:03:18 promo3 kernel: "echo 0 > <br />/proc/sys/kernel/hung_task_timeout_secs" disables this message.<br />Mar 20 23:03:18 promo3 kernel: kvm D ffff88107a480da0 0 <br />2585 1 0 0x00000000<br />Mar 20 23:03:18 promo3 kernel: ffff88107a92fd08 
0000000000000082 <br />0000000000000000 ffff880879df35c8<br />Mar 20 23:03:18 promo3 kernel: ffff880878cc08c0 00000000000000db <br />ffff88107c415810 ffff88107a92fab8<br />Mar 20 23:03:18 promo3 kernel: ffff88107c415800 0000000104af1976 <br />ffff88107a481368 000000000001e9c0<br />Mar 20 23:03:18 promo3 kernel: Call Trace:<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8119ad69>] <br />__sb_start_write+0x169/0x1a0<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff81097200>] ? <br />autoremove_wake_function+0x0/0x40<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff81127489>] <br />generic_file_aio_write+0x69/0x100<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811e325b>] <br />aio_rw_vect_retry+0xbb/0x220<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811e4bc4>] aio_run_iocb+0x64/0x170<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811e614c>] do_io_submit+0x2bc/0x670<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811e6510>] sys_io_submit+0x10/0x20<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8100b102>] <br />system_call_fastpath+0x16/0x1b<br />Mar 20 23:03:18 promo3 kernel: INFO: task lvcreate:150596 blocked for <br />more than 120 seconds.<br />Mar 20 23:03:18 promo3 kernel: "echo 0 > <br />/proc/sys/kernel/hung_task_timeout_secs" disables this message.<br />Mar 20 23:03:18 promo3 kernel: lvcreate D ffff88087aae6d20 0 150596 <br />150595 0 0x00000000<br />Mar 20 23:03:18 promo3 kernel: ffff8802fc5bbc48 0000000000000082 <br />0000000000000000 00000000000000d2<br />Mar 20 23:03:18 promo3 kernel: ffffe8ffffffffff ffff88087bec5760 <br />ffffffff81ac37d0 ffffffff8141c110<br />Mar 20 23:03:18 promo3 kernel: 0000000000000000 0000000104af1b10 <br />ffff88087aae72e8 000000000001e9c0<br />Mar 20 23:03:18 promo3 kernel: Call Trace:<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8141c110>] ? copy_params+0x90/0x110<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8119ab6d>] sb_wait_write+0x9d/0xb0<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff81097200>] ? 
<br />autoremove_wake_function+0x0/0x40<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8119c2d0>] freeze_super+0x60/0x140<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811d5ad8>] freeze_bdev+0x98/0xe0<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff81415697>] dm_suspend+0x97/0x270<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8141a1dc>] ? <br />__find_device_hash_cell+0xac/0x170<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8141b4a6>] dev_suspend+0x76/0x250<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8141c344>] ctl_ioctl+0x1b4/0x270<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8141b430>] ? dev_suspend+0x0/0x250<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8141c413>] dm_ctl_ioctl+0x13/0x20<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811ac622>] vfs_ioctl+0x22/0xa0<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff81061bcf>] ? <br />pick_next_task_fair+0x16f/0x1f0<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8109e52d>] ? <br />sched_clock_cpu+0xcd/0x110<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811ac7ca>] do_vfs_ioctl+0x8a/0x590<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8151dc50>] ? <br />thread_return+0xbe/0x88e<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8108e675>] ? 
set_one_prio+0x75/0xd0<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff811acd1f>] sys_ioctl+0x4f/0x80<br />Mar 20 23:03:18 promo3 kernel: [<ffffffff8100b102>] <br />system_call_fastpath+0x16/0x1b<br />Mar 20 23:03:21 promo3 pvestatd[2328]: WARNING: unable to connect to VM <br />801 socket - timeout after 31 retries<br />...<br /><br /><br /><br /><br /><br />------------------------------<br /><br />Message: 2<br />Date: Thu, 21 Mar 2013 13:39:45 +0000<br />From: "Knaupp, Thomas" <Thomas.Knaupp@schwarz.de><br />To: "'maykel@maykel.sytes.net'" <maykel@maykel.sytes.net>,<br /> "pve-user@pve.proxmox.com" <pve-user@pve.proxmox.com><br />Subject: Re: [PVE-User] Error openvz with gitlab "running (failure<br /> count 20)"<br />Message-ID:<br /> <C3289A7ABB6A5342945D393E64AADEA60AB3E57A@SCSMSX5.ADSCS.LAN><br />Content-Type: text/plain; charset="utf-8"<br /><br />Hello,<br /><br />> I have the problem, I backup the container openvz en other machine<br />> and I restore the backup in other machine best performance.<br />> All ok, but I start the openvz the received this error:<br />> running (failure count 20)<br /><br />If you run cat /proc/user_beancounters inside the openvz machine<br />-> do you see any failcounts? And if yes, which ones?<br /><br /><br />Regards<br />Tom<br /><br /><br /><br /><br />________________________________<br /><br />--<br />Schwarz Computer Systeme GmbH<br />Altenhofweg 2a<br />92318 Neumarkt<br />http://www.schwarz.de<br />___________________________________________<br /><br />Geschaeftsfuehrer: Manfred Schwarz<br />Sitz der Gesellschaft: Neumarkt i.d.Opf.<br />Registergericht: AG Nuernberg, HRB 11908<br />___________________________________________<br /><br />Diese eMail enthaelt moeglicherweise vertrauliche und/oder rechtlich geschuetzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese eMail irrtuemlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese eMail. 
Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet.<br /><br />This email may contain confidential and/or privileged information. If you are not the intended recipient (or have received this email in error) please notify the sender immediately and destroy this email. Any unauthorized copying, disclosure or distribution of the material in this email is strictly forbidden.<br />-------------- next part --------------<br />A non-text attachment was scrubbed...<br />Name: smime.p7s<br />Type: application/x-pkcs7-signature<br />Size: 5832 bytes<br />Desc: not available<br />URL: <http://pve.proxmox.com/pipermail/pve-user/attachments/20130321/1edd108c/attachment-0001.bin><br /><br />------------------------------<br /><br />Message: 3<br />Date: Thu, 21 Mar 2013 16:47:51 +0100<br />From: maykel@maykel.sytes.net<br />To: <pve-user@pve.proxmox.com><br />Subject: Re: [PVE-User] Error openvz with gitlab "running (failure<br /> count 20)"<br />Message-ID: <20dcfe945f39550b649a56b439d82711@maykel.sytes.net><br />Content-Type: text/plain; charset=UTF-8; format=flowed<br /><br />On 2013-03-21 14:39, Knaupp, Thomas wrote:<br />> Hello,<br />><br />>> I have the problem, I backup the container openvz en other machine<br />>> and I restore the backup in other machine best performance.<br />>> All ok, but I start the openvz the received this error:<br />>> running (failure count 20)<br />><br />> If you run cat /proc/user_beancounters inside the openvz machine<br />> -> do you see any failcounts? 
And if yes, which ones?<br />><br />><br />> Regards<br />> Tom<br /><br />Thanks for your response. 
The output of the command shows a failcount of 7 on physpages:<br /><br />physpages 171989 262144 0 262144 7<br /><br /><br />Should I give it more resources?<br /><br />Thanks in advance.<br /><br /><br />------------------------------<br /><br />Message: 4<br />Date: Fri, 22 Mar 2013 02:31:01 +0100<br />From: Martin Schuchmann <ms@city-pc.de><br />To: pve-user@pve.proxmox.com<br />Subject: Re: [PVE-User] Crash after Upgrade PVE2.3 / Cron backup<br /> crashes with 2.6.32-19 but not with 2.6.32-17<br />Message-ID: <514BB455.6050301@city-pc.de><br />Content-Type: text/plain; charset=ISO-8859-15; format=flowed<br /><br />Hi,<br /><br />Today I had the same behaviour as yesterday - at 23:00 cron started the backup job and immediately the whole node was out of order: via the web interface I could see the running VMs, but they were no longer reachable via RDP/SSH. "vzctl enter" did not work either.<br /><br />Again there were kernel errors in syslog.<br /><br />After a hard reset we noticed a logical volume that had been created at the same time as the crash:<br /><br />lvdisplay<br /><br />--- Logical volume ---<br /> LV Path /dev/promo3/vzsnap-promo3-0<br /> LV Name vzsnap-promo3-0<br /> VG Name promo3<br /> LV UUID DswPod-t1lR-yKen-vwDH-sG5D-Djpl-wo9iSX<br /> LV Write Access read/write<br /> LV Creation host, time promo3, 2013-03-21 23:00:02 +0100<br /> LV Status available<br /> # open 0<br /> LV Size 4,00 GiB<br /> Current LE 1024<br /> Segments 1<br /> Allocation inherit<br /> Read ahead sectors auto<br /> - currently set to 256<br /> Block device 253:3<br /><br /><br />After deleting it via lvremove I started a manual backup (snapshot) for CTs and VMs - no problem occurred.<br /><br />Now I again created a new cron backup via the web interface for the same machine that had just been backed up successfully.<br /><br />A few seconds after the start the following errors occurred:<br /><br />Mar 22 01:00:01 promo3 vzdump[12622]: <root@pam> starting task <br
/>UPID:promo3:00003150:0004D259:514B9F01:vzdump::root@pam:<br />Mar 22 01:00:01 promo3 vzdump[12624]: INFO: starting new backup job: <br />vzdump 306 --quiet 1 --mailto ms@city-pc.de --mode snapshot --compress <br />lzo --storage p2_storage<br />Mar 22 01:00:01 promo3 vzdump[12624]: INFO: Starting Backup of VM 306 <br />(openvz)<br />Mar 22 01:00:02 promo3 pmxcfs[4048]: [status] notice: received log<br />Mar 22 01:00:02 promo3 kernel: EXT3-fs: barriers disabled<br />Mar 22 01:00:02 promo3 kernel: kjournald starting. Commit interval 5 <br />seconds<br />Mar 22 01:00:02 promo3 kernel: EXT3-fs (dm-3): using internal journal<br />Mar 22 01:00:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148485<br />Mar 22 01:00:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148484<br />Mar 22 01:00:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148481<br />Mar 22 01:00:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148429<br />Mar 22 01:00:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148427<br />Mar 22 01:00:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70156303<br />Mar 22 01:00:02 promo3 kernel: EXT3-fs (dm-3): 6 orphan inodes deleted<br />Mar 22 01:00:02 promo3 kernel: EXT3-fs (dm-3): recovery complete<br />Mar 22 01:00:02 promo3 kernel: EXT3-fs (dm-3): mounted filesystem with <br />ordered data mode<br />Mar 22 01:00:02 promo3 pmxcfs[4048]: [status] notice: received log<br />Mar 22 01:01:06 promo3 pvestatd[4511]: WARNING: command 'df -P -B 1 <br />/mnt/pve/p3_storage' failed: got timeout<br />Mar 22 01:01:36 promo3 pvestatd[4511]: WARNING: command 'df -P -B 1 <br />/mnt/pve/p3_storage' failed: got timeout<br />Mar 22 01:02:58 promo3 kernel: device-mapper: snapshots: Invalidating <br />snapshot: Unable to allocate exception.<br />Mar 22 01:03:05 promo3 kernel: Aborting journal on device dm-3.<br />Mar 22 01:03:05 
promo3 kernel: Buffer I/O error on device dm-3, logical <br />block 342819330<br />Mar 22 01:03:05 promo3 kernel: lost page write due to I/O error on dm-3<br />Mar 22 01:03:05 promo3 kernel: JBD: I/O error detected when updating <br />journal superblock for dm-3.<br />Mar 22 01:03:05 promo3 kernel: EXT3-fs (dm-3): error: <br />ext3_journal_start_sb: Detected aborted journal<br />Mar 22 01:03:05 promo3 kernel: EXT3-fs (dm-3): error: remounting <br />filesystem read-only<br />Mar 22 01:03:09 promo3 kernel: EXT3-fs (dm-3): error: ext3_put_super: <br />Couldn't clean up the journal<br />Mar 22 01:03:10 promo3 vzdump[12624]: ERROR: Backup of VM 306 failed - <br />command '(cd /mnt/vzsnap0/private/306;find . '(' -regex '^\.$' ')' -o <br />'(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals <br />--sparse --numeric-owner --no-recursion --one-file-system --null -T <br />-|lzop) <br /> >/mnt/pve/p2_storage/dump/vzdump-openvz-306-2013_03_22-01_00_01.tar.dat' failed: exit code 2<br />Mar 22 01:03:10 promo3 vzdump[12624]: INFO: Backup job finished with errors<br />Mar 22 01:03:12 promo3 citadel: 1 unique messages to be merged<br />Mar 22 01:03:12 promo3 citadel: 1 unique messages to be merged<br />Mar 22 01:03:12 promo3 vzdump[12624]: job errors<br />Mar 22 01:03:12 promo3 vzdump[12622]: <root@pam> end task <br />UPID:promo3:00003150:0004D259:514B9F01:vzdump::root@pam: job errors<br />Mar 22 01:03:12 promo3 /USR/SBIN/CRON[12617]: (CRON) error (grandchild <br />#12620 failed with exit status 255)<br /><br />This time the backup did not crash the whole node, but it failed.<br />Also the lvdisplay did show the lv again during the failed backup:<br /><br />--- Logical volume ---<br /> LV Path /dev/promo3/vzsnap-promo3-0<br /> LV Name vzsnap-promo3-0<br /> VG Name promo3<br /> LV UUID UkxdQW-GGM7-raEO-MSxS-k9jZ-s0D1-g2ZO9M<br /> LV Write Access read/write<br /> LV Creation host, time promo3, 2013-03-22 01:00:01 +0100<br /> LV snapshot status active destination 
for data<br /> LV Status available<br /> # open 1<br /> LV Size 2,55 TiB<br /> Current LE 669651<br /> COW-table size 4,00 GiB<br /> COW-table LE 1024<br /> Allocated to snapshot 60,92%<br /> Snapshot chunk size 4,00 KiB<br /> Segments 1<br /> Allocation inherit<br /> Read ahead sectors auto<br /> - currently set to 256<br /> Block device 253:3<br /><br /><br />I added a new backup cron a few minutes later. It started, and I tried to have a look at the LVs - but lvdisplay did not answer.<br />I started a new SSH console and tried lvscan - it also did not answer during the backup, and CTRL-C ended with:<br /><br />promo3:~# lvscan<br />^C CTRL-c detected: giving up waiting for lock<br /> /run/lock/lvm/V_promo3: flock failed: Unterbrechung während des <br />Betriebssystemaufrufs<br /> Can't get lock for promo3<br /> Skipping volume group promo3<br /><br />The syslog this time:<br /><br />Mar 22 01:08:01 promo3 /USR/SBIN/CRON[13592]: (root) CMD (vzdump 306 <br />--quiet 1 --mode snapshot --mailto ms@city-pc.de --compress lzo <br />--storage p2_storage)<br />Mar 22 01:08:02 promo3 vzdump[13593]: <root@pam> starting task <br />UPID:promo3:0000351B:00058E23:514BA0E2:vzdump::root@pam:<br />Mar 22 01:08:02 promo3 vzdump[13595]: INFO: starting new backup job: <br />vzdump 306 --quiet 1 --mailto ms@city-pc.de --mode snapshot --compress <br />lzo --storage p2_storage<br />Mar 22 01:08:02 promo3 vzdump[13595]: INFO: Starting Backup of VM 306 <br />(openvz)<br />Mar 22 01:09:54 promo3 rrdcached[4027]: flushing old values<br />Mar 22 01:09:54 promo3 rrdcached[4027]: rotating journals<br />Mar 22 01:09:54 promo3 rrdcached[4027]: started new journal <br />/var/lib/rrdcached/journal//rrd.journal.1363910994.615643<br />Mar 22 01:11:07 promo3 kernel: ct0 nfs: server 10.1.0.3 not responding, <br />still trying<br />Mar 22 01:11:22 promo3 kernel: INFO: task nfsd:3957 blocked for more <br />than 120 seconds.<br />Mar 22 01:11:22 promo3 kernel: "echo 0 > <br
/>/proc/sys/kernel/hung_task_timeout_secs" disables this message.<br />Mar 22 01:11:22 promo3 kernel: nfsd D ffff880879f2d1e0 0 <br />3957 2 0 0x00000000<br />Mar 22 01:11:22 promo3 kernel: ffff880879f2f900 0000000000000046 <br />ffff8808619a4fc0 0000000000000001<br />Mar 22 01:11:22 promo3 kernel: 00000000000005a8 ffff88087bb77e00 <br />0000000000000080 0000000000000004<br />Mar 22 01:11:22 promo3 kernel: ffff880879f2f8d0 ffffffff81182e1b <br />ffff880879f2d7a8 000000000001e9c0<br />Mar 22 01:11:22 promo3 kernel: Call Trace:<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81182e1b>] ? <br />cache_flusharray+0xab/0x100<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff810974ee>] ? <br />prepare_to_wait+0x4e/0x80<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff8119ad69>] <br />__sb_start_write+0x169/0x1a0<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81097200>] ? <br />autoremove_wake_function+0x0/0x40<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81127489>] <br />generic_file_aio_write+0x69/0x100<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81127420>] ? <br />generic_file_aio_write+0x0/0x100<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff8119872b>] <br />do_sync_readv_writev+0xfb/0x140<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff811b3e40>] ? iput+0x30/0x70<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81097200>] ? <br />autoremove_wake_function+0x0/0x40<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa0361e70>] ? <br />nfsd_acceptable+0x0/0x120 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81198557>] ? <br />rw_copy_check_uvector+0x97/0x120<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81199696>] <br />do_readv_writev+0xd6/0x1f0<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa0361ff2>] ? <br />nfsd_setuser_and_check_port+0x62/0xb0 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa0571e99>] ? 
<br />vzquota_qlnk_destroy+0x29/0x110 [vzdquota]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff811997f8>] vfs_writev+0x48/0x60<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa0363a25>] <br />nfsd_vfs_write+0x115/0x480 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa0365cbb>] ? <br />nfsd_open+0x23b/0x2c0 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa0366107>] <br />nfsd_write+0xe7/0x100 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa036e1df>] <br />nfsd3_proc_write+0xaf/0x140 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa035e52e>] <br />nfsd_dispatch+0xfe/0x240 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa027e174>] <br />svc_process_common+0x344/0x650 [sunrpc]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff8105a620>] ? <br />default_wake_function+0x0/0x20<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa027e7b2>] <br />svc_process+0x102/0x150 [sunrpc]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa035ee5d>] nfsd+0xcd/0x180 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffffa035ed90>] ? nfsd+0x0/0x180 [nfsd]<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81096c26>] kthread+0x96/0xa0<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff8100c1aa>] child_rip+0xa/0x20<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff81096b90>] ? kthread+0x0/0xa0<br />Mar 22 01:11:22 promo3 kernel: [<ffffffff8100c1a0>] ? 
child_rip+0x0/0x20<br />Mar 22 01:11:22 promo3 kernel: INFO: task nfsd:3958 blocked for more <br />than 120 seconds.<br />Mar 22 01:11:22 promo3 kernel: "echo 0 > <br />/proc/sys/kernel/hung_task_timeout_secs" disables this message.<br />Mar 22 01:11:22 promo3 kernel: nfsd D ffff880879f2c700 0 <br />3958 2 0 0x00000000<br />Mar 22 01:11:22 promo3 kernel: ffff88087aa97900 0000000000000046 <br />ffff88087aa978a0 ffff881064c4b000<br />Mar 22 01:11:22 promo3 kernel: ffff8808619a4fc0 ffff88087bb77e00 <br />ffff88086de62bf8 0000000000000020<br />Mar 22 01:11:22 promo3 kernel: ffff88087aa978d0 ffffffff81182e1b <br />ffff880879f2ccc8 000000000001e9c0<br /><br /><br /><br />During the job, listing the mounted PVE storage also did not work:<br /><br />ls /mnt/pve/<br /><br />It simply hung.<br /><br />All machines on the node were inaccessible again.<br /><br />I rebooted with kernel 2.6.32-17.<br /><br />I entered a new cronjob, and the backup worked as it had for 12 months before; here is the syslog:<br /><br />Mar 22 02:22:01 promo3 /USR/SBIN/CRON[3738]: (root) CMD (vzdump 306 <br />--quiet 1 --mode snapshot --compress lzo --storage p2_storage)<br />Mar 22 02:22:01 promo3 vzdump[3739]: <root@pam> starting task <br />UPID:promo3:00000E9D:00007A5A:514BB239:vzdump::root@pam:<br />Mar 22 02:22:01 promo3 vzdump[3741]: INFO: starting new backup job: <br />vzdump 306 --quiet 1 --mailto ms@city-pc.de --mode snapshot --compress <br />lzo --storage p2_storage<br />Mar 22 02:22:01 promo3 vzdump[3741]: INFO: Starting Backup of VM 306 <br />(openvz)<br />Mar 22 02:22:02 promo3 kernel: EXT3-fs: barriers disabled<br />Mar 22 02:22:02 promo3 kernel: kjournald starting. 
Commit interval 5 <br />seconds<br />Mar 22 02:22:02 promo3 kernel: EXT3-fs (dm-3): using internal journal<br />Mar 22 02:22:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148484<br />Mar 22 02:22:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148481<br />Mar 22 02:22:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148465<br />Mar 22 02:22:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148429<br />Mar 22 02:22:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70148427<br />Mar 22 02:22:02 promo3 kernel: ext3_orphan_cleanup: deleting <br />unreferenced inode 70156303<br />Mar 22 02:22:02 promo3 kernel: EXT3-fs (dm-3): 6 orphan inodes deleted<br />Mar 22 02:22:02 promo3 kernel: EXT3-fs (dm-3): recovery complete<br />Mar 22 02:22:02 promo3 kernel: EXT3-fs (dm-3): mounted filesystem with <br />ordered data mode<br />Mar 22 02:22:30 promo3 ntpd[1884]: Listen normally on 38 veth306.0 <br />fe80::c04e:44ff:fe61:ecfe UDP 123<br />Mar 22 02:25:49 promo3 vzdump[3741]: INFO: Finished Backup of VM 306 <br />(00:03:48)<br />Mar 22 02:25:49 promo3 vzdump[3741]: INFO: Backup job finished successfully<br />Mar 22 02:25:49 promo3 citadel: 1 unique messages to be merged<br />Mar 22 02:25:49 promo3 citadel: 1 unique messages to be merged<br />Mar 22 02:25:49 promo3 vzdump[3739]: <root@pam> end task <br />UPID:promo3:00000E9D:00007A5A:514BB239:vzdump::root@pam: OK<br /><br /><br />Any hints to prevent kernel 2.6.32-19 from hitting that issue?<br /><br />Regards, Martin<br /><br /><br /><br /><br />------------------------------<br /><br />_______________________________________________<br />pve-user mailing list<br />pve-user@pve.proxmox.com<br />http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user<br /><br /><br />End of pve-user Digest, Vol 60, Issue 23<br />****************************************<br /></blockquote>
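<p>Regarding the migration error at the top: the task log claims "container data is on shared storage 'local'", yet /var/lib/vz/private/110 does not exist on node 'delta' - so PVE skipped copying the data because it believes the storage is shared. A plausible cause (a guess from the log, not confirmed) is a stray 'shared' flag on the 'local' directory storage in /etc/pve/storage.cfg. A minimal sketch of such a check, run here against an illustrative sample config rather than the real file:</p>

```shell
#!/bin/sh
# Illustrative sample of a PVE storage.cfg where the node-local 'dir'
# storage wrongly carries the 'shared' flag (sample content, not the
# real cluster file).
cat > /tmp/storage.cfg.sample <<'EOF'
dir: local
        path /var/lib/vz
        content images,rootdir,vztmpl
        shared
EOF

# Warn if the 'local' storage section contains a 'shared' flag.
awk '
    /^[a-z]+: / { section = $2 }       # section header, e.g. "dir: local"
    section == "local" && $1 == "shared" {
        print "WARNING: storage local is flagged shared but is a per-node directory"
    }
' /tmp/storage.cfg.sample
```

<p>If the real config carries that flag, removing it (so migration actually copies the container data), putting /var/lib/vz on genuinely shared storage, or rsync'ing /var/lib/vz/private/110 to the target node before retrying would each be ways forward.</p>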
</body>
</html>