[PVE-User] Backups kill one guest, semi-consistently.
Myke G
mykel at mWare.ca
Thu Nov 18 08:04:00 CET 2010
30GB... which isn't the biggest one there. We will go iSCSI and update
Proxmox, but not likely until after the xmas lockdown.

I've moved the VM to another node in the cluster and it survived backups
this evening, so that's a workaround that works... for now.
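
For the record, the move was a plain offline migration; on 1.5 that's
done through the web UI. On newer Proxmox releases the CLI equivalent is
roughly the sketch below - VMID 101 and target "node2" are made-up
placeholders, not the real names here:

# migrate guest 101 to node2; add --online for a live migration
qm migrate 101 node2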
Myke
On 2010-11-18 02:03, guy at britewhite.net wrote:
> How big is the virtual? We found that one of our virtual servers, which was using over 1TB of storage, was causing some backup issues. Moved the storage to a NAS and the backup was fine again.
>
> ---Guy
>
> Sent from my iPad
>
> On 18 Nov 2010, at 06:02, "Justa Guy" <pve_user at proinbox.com> wrote:
>
>> You could try an updated Proxmox, though I don't have much hope for that
>> solving anything - but hey, who knows what bugs were addressed in the
>> interim.
>>
>> http://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_to_1.6
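>>
>> (The page above boils down to a normal apt upgrade of the node; roughly,
>> as root - check the wiki for the exact steps, this is just the gist:
>>
>> apt-get update
>> apt-get dist-upgrade
>>
>> followed by a reboot if a newer pve-kernel package comes in.)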
>>
>> -C
>>
>> On Wed, 17 Nov 2010 01:28 -0500, "Myke G" <mykel at mWare.ca> wrote:
>>> Approximately 4 nights a week, when the backups run, the first VM in the
>>> list seems to hang and eventually revert to the shutdown state according
>>> to the Proxmox webui, but inside the VM, the filesystems are not cleanly
>>> shut down. The guest is a fully virtualized instance of FreeBSD 8
>>> (-STABLE as of a couple of weeks ago, IIRC).
>>>
>>> What's strange is that ONLY this VM gets hit; there's one other
>>> FreeBSD VM and a pair of OpenVZ VMs running there too, but just this
>>> one seems to fail. It does happen to have the lowest VMID; I don't know
>>> if that's a factor.
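>>>
>>> For context, the nightly job is driven by vzdump from cron; the entry
>>> below is illustrative of a typical 1.x setup, not copied from this box:
>>>
>>> # /etc/cron.d/vzdump (illustrative)
>>> 0 1 * * * root vzdump --quiet --snapshot --compress --all
>>>
>>> With --all, vzdump walks the configured VMIDs in ascending order, which
>>> is why the lowest VMID is always the first one attempted.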
>>>
>>> I was actually logged into the instance when the backup kicked off this
>>> evening, and while SSH stayed alive, it seemed as if the filesystem was
>>> completely dead:
>>>
>>> Nov 17 01:02:55 VPS1 kernel: calcru: runtime went backwards from
>>> 79964129 usec to 71205911 usec for pid 1097 (squid)
>>> [root@VPS1 /var/log]#
>>> [root@VPS1 /var/log]# uptime
>>> (nothing happens after several seconds; hit ^T a bunch...)
>>> load: 0.00 cmd: bash 15734 [vnread] 1.35r 0.00u 0.00s 0% 2420k
>>> load: 0.00 cmd: bash 15734 [vnread] 3.78r 0.00u 0.00s 0% 2420k
>>> load: 0.00 cmd: bash 15734 [vnread] 4.16r 0.00u 0.00s 0% 2420k
>>> ^C^C
>>> ^C (no effect)
>>> load: 0.00 cmd: bash 15734 [vnread] 81.93r 0.00u 0.00s 0% 2420k
>>> load: 0.00 cmd: bash 15734 [vnread] 82.24r 0.00u 0.00s 0% 2420k
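>>>
>>> (A snapshot-mode backup creates a temporary LVM snapshot on the host,
>>> typically named vzsnap-*; while the backup runs, its fill level can be
>>> watched with plain lvs, e.g.:
>>>
>>> lvs | grep vzsnap
>>>
>>> If such a snapshot fills to 100%, LVM invalidates it. Whether that is
>>> related to the hang here is unknown, but it's an easy thing to rule out.)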
>>>
>>> We're running a slightly dated Proxmox:
>>>
>>> Chisel:~# pveversion -v
>>> pve-manager: 1.5-10 (pve-manager/1.5/4822)
>>> running kernel: 2.6.24-11-pve
>>> proxmox-ve-2.6.24: 1.5-23
>>> pve-kernel-2.6.24-11-pve: 2.6.24-23
>>> pve-kernel-2.6.24-9-pve: 2.6.24-18
>>> pve-kernel-2.6.24-8-pve: 2.6.24-16
>>> qemu-server: 1.1-16
>>> pve-firmware: 1.0-5
>>> libpve-storage-perl: 1.0-13
>>> vncterm: 0.9-2
>>> vzctl: 3.0.23-1pve11
>>> vzdump: 1.2-5
>>> vzprocps: 2.0.11-1dso2
>>> vzquota: 3.0.11-1
>>> pve-qemu-kvm: 0.12.4-1
>>> Chisel:~#
>>>
>>> Any ideas or suggestions? It's getting quite frustrating to have to
>>> restart this VM almost every night.
>>> (The backups do complete successfully BTW)
>>>
>>> Myke