<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Hi Dietmar,<br>
<br>
I had already increased the size to 4096 in the past because of
problems with cancelled backups, as also discussed in the forum,
but those never crashed the whole system. <br>
<br>
# vzdump default settings<br>
<br>
#tmpdir: DIR<br>
#dumpdir: DIR<br>
#storage: STORAGE_ID<br>
#mode: snapshot|suspend|stop<br>
#bwlimit: KBPS<br>
#ionice: PRI<br>
#lockwait: MINUTES<br>
#stopwait: MINUTES<br>
size: 4096<br>
maxfiles: 3<br>
#script: FILENAME<br>
#exclude-path: PATHLIST<br>
<br>
This setting works perfectly with 2.6.32-17, and also with
2.6.32-19 for manual backups from the web interface, but not for a
scheduled backup via cron with 2.6.32-19 on <b>this</b> node. The
two other nodes had no problems at the same time with scheduled
backups (with much bigger VMs and CTs). The backup crashed even on
a small, non-changing CT (an unused owncloud file server) with
only a 6 GB HDD.<br>
<br>
Please take a look at the lvdisplay output during a small crash
that did not take down the whole system:<br>
<br>
Allocated to snapshot 60,92%<br>
<br>
So there should be enough space left?<br>
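<br>
To see how full the snapshot really gets right before a crash, I
will log the allocation during the next scheduled backup (LV name
taken from the lvdisplay output below; only a rough sketch):<br>
<br>
<pre wrap=""># append the snapshot fill level to a log every 10 seconds
# while the backup runs
while true; do
    echo -n "$(date '+%H:%M:%S') "
    lvdisplay /dev/promo3/vzsnap-promo3-0 | grep 'Allocated to snapshot'
    sleep 10
done >> /tmp/vzsnap-usage.log</pre>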
<br>
<br>
I noticed the following difference between non-failing and failing backups:<br>
<br>
<br>
After a crash the LV showed:<br>
<br>
LV Size 4,00 GiB<br>
Current LE 1024<br>
Segments 1<br>
Allocation inherit<br>
<br>
During a non-failing backup it was <br>
<br>
LV Size 2,55 TiB<br>
Current LE 669651<br>
COW-table size 4,00 GiB<br>
COW-table LE 1024<br>
Allocated to snapshot 60,92%<br>
Snapshot chunk size 4,00 KiB<br>
Segments 1<br>
<br>
The 2,55 TiB in the working one corresponds to the size of the
data LV:<br>
<br>
LV Path /dev/promo3/data<br>
LV Name data<br>
LV Status available<br>
# open 1<br>
LV Size 2,55 TiB<br>
Current LE 669651<br>
Segments 1<br>
Allocation inherit<br>
<br>
<br>
I also noticed that during the crashing backups I cannot do an
'ls /mnt/pve/' - it just hangs and produces no output while the
backup runs. While a non-failing backup is running, there is no
problem with that. 'lvscan' shows the same behaviour.<br>
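<br>
When this hang happens again, I will try to dump the blocked tasks
to the kernel log, to see where 'ls' and 'lvscan' get stuck (just
an idea, assuming the magic sysrq interface works on this
kernel):<br>
<br>
<pre wrap=""># enable the magic sysrq interface
echo 1 > /proc/sys/kernel/sysrq
# dump all tasks in uninterruptible (D) state to the kernel log
echo w > /proc/sysrq-trigger
# read the resulting stack traces
dmesg | tail -n 50</pre>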
<br>
This difference between lvdisplay and lvscan during a non-failing
backup also seems odd:<br>
<br>
lvdisplay:<br>
<br>
--- Logical volume ---<br>
LV Path /dev/promo3/vzsnap-promo3-0<br>
...<br>
LV Status available<br>
# open 1<br>
LV Size <b> 2,55 TiB</b><br>
Current LE 669651<br>
...<br>
<br>
lvscan:<br>
...<br>
ACTIVE Original '/dev/promo3/data' [2,55 TiB] inherit<br>
ACTIVE Snapshot '/dev/promo3/vzsnap-promo3-0' [<b>4,00 GiB</b>]
inherit<br>
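<br>
For a combined view, 'lvs' should be able to show origin, size and
fill level of the snapshot in one line (field names as I read them
in the lvs man page, so treat this as a sketch):<br>
<br>
<pre wrap=""># one-line overview of the data LV and its snapshot,
# including the snapshot fill level
lvs -o lv_name,origin,lv_size,snap_percent promo3</pre>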
<br>
<br>
I could try the new kernel again with a higher size in vzdump.conf
- but that does not seem to me to be the cause of a whole-system
crash. Even if the size parameter is too small - in my opinion
there should be no way for that to crash the whole node?<br>
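<br>
If you still suspect the size parameter, I can first run a one-off
manual test with a bigger snapshot, something like this (100 is
only a placeholder CTID):<br>
<br>
<pre wrap=""># snapshot backup with an 8 GiB snapshot instead of 4 GiB
# (size is given in MB, see 'man vzdump')
vzdump 100 --mode snapshot --size 8192</pre>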
<br>
Many regards,<br>
<br>
Martin<br>
<br>
<br>
<br>
Dietmar Maurer <a class="moz-txt-link-rfc2396E" href="mailto:dietmar@proxmox.com"><dietmar@proxmox.com></a> wrote on 22.03.2013
06:36:<br>
</div>
<blockquote
cite="mid:24E144B8C0207547AD09C467A8259F7557BBCEF9@lisa.maurer-it.com"
type="cite">
<blockquote type="cite">
<pre wrap="">snapshot: Unable to allocate exception.
</pre>
</blockquote>
<pre wrap="">
You run out of snapshot space! You should increase that (see 'man vzdump' - parameter 'size').
</pre>
</blockquote>
</body>
</html>