[PVE-User] lvremove -> failed: got lock timeout - aborting command and hangs
Dietmar Maurer
dietmar at proxmox.com
Wed Feb 22 06:43:29 CET 2012
You are using software RAID (/dev/md0)?
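If you are not sure, something like the following should show it (standard
mdadm/LVM commands; /dev/md0 is just an example name):

    cat /proc/mdstat                 # lists active md arrays, if any
    pvs -o pv_name,vg_name           # shows which block devices back each VG
    mdadm --detail /dev/md0          # array layout, if that array exists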
> -----Original Message-----
> From: pve-user-bounces at pve.proxmox.com [mailto:pve-user-
> bounces at pve.proxmox.com] On Behalf Of Laurent CARON
> Sent: Dienstag, 21. Februar 2012 22:40
> To: pve-user at pve.proxmox.com
> Subject: [PVE-User] lvremove -> failed: got lock timeout - aborting command
> and hangs
>
> Hi,
>
> I'm experiencing trouble with my Proxmox VE 1.9 setup.
>
> The backup itself completes fine, but afterwards the LVM snapshot is not
> released (see below):
>
> Feb 21 17:35:34 INFO: Starting Backup of VM 505 (qemu)
> Feb 21 17:35:34 INFO: status = running
> Feb 21 17:35:35 INFO: backup mode: snapshot
> Feb 21 17:35:35 INFO: bandwidth limit: 32768 KB/s
> Feb 21 17:35:35 INFO: ionice priority: 7
> Feb 21 17:35:35 INFO: Logical volume "vzsnap-proxmox-siege-001-0" created
> Feb 21 17:35:35 INFO: creating archive '/mnt/pve/backups_tmm/dump/vzdump-qemu-505-2012_02_21-17_35_34.tar'
> Feb 21 17:35:35 INFO: adding '/mnt/pve/backups_tmm/dump/vzdump-qemu-505-2012_02_21-17_35_34.tmp/qemu-server.conf' to archive ('qemu-server.conf')
> Feb 21 17:35:35 INFO: adding '/dev/VG_SSD_proxmox-siege-001/vzsnap-proxmox-siege-001-0' to archive ('vm-disk-ide0.raw')
> Feb 21 17:40:58 INFO: Total bytes written: 10609586176 (31.33 MiB/s)
> Feb 21 17:44:55 INFO: archive file size: 9.88GB
> Feb 21 17:44:55 INFO: delete old backup '/mnt/pve/backups_tmm/dump/vzdump-qemu-505-2012_02_19-05_07_50.tgz'
> Feb 21 17:48:03 INFO: lvremove failed - trying again in 8 seconds
> Feb 21 17:49:11 INFO: lvremove failed - trying again in 16 seconds
> Feb 21 17:50:27 INFO: lvremove failed - trying again in 32 seconds
> Feb 21 17:51:59 ERROR: command 'lvremove -f /dev/VG_SSD_proxmox-siege-001/vzsnap-proxmox-siege-001-0' failed: got lock timeout - aborting command
> Feb 21 17:51:59 INFO: Finished Backup of VM 505 (00:16:25)
>
> syslog gets filled with messages like:
>
> Feb 21 17:47:10 proxmox-siege-001 kernel: INFO: task lvremove:41689 blocked for more than 120 seconds.
> Feb 21 17:47:10 proxmox-siege-001 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Feb 21 17:47:10 proxmox-siege-001 kernel: lvremove      D ffff883dd83b91e0     0 41689  39249    0 0x00000000
> Feb 21 17:47:10 proxmox-siege-001 kernel: ffff883b22dc1b88 0000000000000082 ffff883b22dc1b48 ffffffff813f4cac
> Feb 21 17:47:10 proxmox-siege-001 kernel: 0000000000000008 0000000000001000 0000000000000000 000000000000000c
> Feb 21 17:47:10 proxmox-siege-001 kernel: ffff883dd83b97a8 ffff883b22dc1fd8 000000000000f788 ffff883dd83b97a8
> Feb 21 17:47:10 proxmox-siege-001 kernel: Call Trace:
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff813f4cac>] ? dm_table_unplug_all+0x5c/0xd0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8109d3c9>] ? ktime_get_ts+0xa9/0xe0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff814e80d3>] io_schedule+0x73/0xc0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff811c3b2e>] __blockdev_direct_IO+0x6fe/0xc20
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8124332d>] ? get_disk+0x7d/0xf0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff811c1737>] blkdev_direct_IO+0x57/0x60
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff811c0900>] ? blkdev_get_blocks+0x0/0xc0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8111fbab>] generic_file_aio_read+0x70b/0x780
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff811c2211>] ? blkdev_open+0x71/0xc0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff81184fe3>] ? __dentry_open+0x113/0x330
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8121f248>] ? devcgroup_inode_permission+0x48/0x50
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8118796a>] do_sync_read+0xfa/0x140
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff81198ae2>] ? user_path_at+0x62/0xa0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff81092a10>] ? autoremove_wake_function+0x0/0x40
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff811c0ccc>] ? block_ioctl+0x3c/0x40
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8119b0f2>] ? vfs_ioctl+0x22/0xa0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8119b29a>] ? do_vfs_ioctl+0x8a/0x5d0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff81188375>] vfs_read+0xb5/0x1a0
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff811884b1>] sys_read+0x51/0x90
> Feb 21 17:47:10 proxmox-siege-001 kernel: [<ffffffff8100b242>] system_call_fastpath+0x16/0x1b
>
> The only way out was a reboot (this issue happened twice with two different
> VMs).
>
> Do any of you have a clue about it?
>
> --
> lcaron at unix-scripts.info
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
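Also, for what it's worth: before rebooting it is sometimes possible to see
what keeps the snapshot busy. A rough sketch using the names from your log
above (whether this helps depends on how stuck the kernel is - here lvremove
was already in D state):

    lvs -a VG_SSD_proxmox-siege-001       # is the snapshot volume still listed?
    dmsetup info -c | grep vzsnap         # open count of the snapshot dm devices
    fuser -v /dev/VG_SSD_proxmox-siege-001/vzsnap-proxmox-siege-001-0
                                          # processes holding the device open
    lvremove -f /dev/VG_SSD_proxmox-siege-001/vzsnap-proxmox-siege-001-0
                                          # retry removal once nothing holds it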