[PVE-User] VZdump: No such disk, but the disk is there!

Gilberto Nunes gilberto.nunes32 at gmail.com
Wed Feb 19 15:01:55 CET 2020


Hi there,

I changed the bwlimit to 100000 in /etc/vzdump and vzdump worked normally
for a couple of days, which made me happy.
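
For reference, the change amounts to a single line like the following (a
sketch only; depending on the PVE version the file may be /etc/vzdump.conf
rather than /etc/vzdump, and the value is interpreted in KiB/s):

```shell
# vzdump global defaults (sketch) - bwlimit caps backup I/O bandwidth
# in KiB/s, so 100000 is roughly 100 MB/s.
bwlimit: 100000
```
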
Now I have the error again! No logs, no explanation! Just the error, pure
and simple:

110: 2020-02-18 22:18:06 INFO: Starting Backup of VM 110 (qemu)
110: 2020-02-18 22:18:06 INFO: status = running
110: 2020-02-18 22:18:07 INFO: update VM 110: -lock backup
110: 2020-02-18 22:18:07 INFO: VM Name: cliente-V-110-IP-163
110: 2020-02-18 22:18:07 INFO: include disk 'scsi0'
'local-lvm:vm-110-disk-0' 100G
110: 2020-02-18 22:18:57 ERROR: Backup of VM 110 failed - no such
volume 'local-lvm:vm-110-disk-0'

112: 2020-02-18 22:19:00 INFO: Starting Backup of VM 112 (qemu)
112: 2020-02-18 22:19:00 INFO: status = running
112: 2020-02-18 22:19:01 INFO: update VM 112: -lock backup
112: 2020-02-18 22:19:01 INFO: VM Name: cliente-V-112-IP-165
112: 2020-02-18 22:19:01 INFO: include disk 'scsi0'
'local-lvm:vm-112-disk-0' 120G
112: 2020-02-18 22:19:31 ERROR: Backup of VM 112 failed - no such
volume 'local-lvm:vm-112-disk-0'

116: 2020-02-18 22:19:31 INFO: Starting Backup of VM 116 (qemu)
116: 2020-02-18 22:19:31 INFO: status = running
116: 2020-02-18 22:19:32 INFO: update VM 116: -lock backup
116: 2020-02-18 22:19:32 INFO: VM Name: cliente-V-IP-162
116: 2020-02-18 22:19:32 INFO: include disk 'scsi0'
'local-lvm:vm-116-disk-0' 100G
116: 2020-02-18 22:20:05 ERROR: Backup of VM 116 failed - no such
volume 'local-lvm:vm-116-disk-0'
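
Since a manual re-run later always succeeds, one workaround until the root
cause is found could be to wrap the nightly job in a small retry loop. This
is only a sketch: the `retry` helper below is hypothetical, and the vzdump
invocation in the comment (VM IDs and storage name taken from the logs
above) is untested on a real PVE host:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, sleeping between attempts;
# returns the last exit status if every attempt fails. (Hypothetical helper.)
retry() {
    tries=$1; shift
    n=1
    until "$@"; do
        status=$?
        [ "$n" -ge "$tries" ] && return "$status"
        n=$((n + 1))
        sleep "${RETRY_DELAY:-60}"   # wait a bit; transient I/O load may have dropped
    done
}

# Example (assumption: run on the PVE node, names as in the logs above):
# retry 3 vzdump 110 112 116 --storage backup --mode snapshot
```
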



---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Fri, 14 Feb 2020 at 14:22, Gianni Milo <gianni.milo22 at gmail.com> wrote:

> If it's happening randomly, my best guess would be that it might be related
> to high i/o during the time frame that the backup takes place.
> Have you tried creating multiple backup schedules which will take place at
> different times ? Setting backup bandwidth limits might also help.
> Check the PVE administration guide for more details on this. You could
> check for any clues in syslog during the time that the failed backup takes
> place as well.
>
> G.
>
> On Fri, 14 Feb 2020 at 14:35, Gilberto Nunes <gilberto.nunes32 at gmail.com>
> wrote:
>
> > Hi guys
> >
> > Same problem, but with two different VMs...
> > I also updated Proxmox (still on the 5.x series), but no change... Now this
> > problem has occurred twice, one night after the other...
> > I am very concerned about it!
> > Please, Proxmox staff, is there something I can do to solve this issue?
> > Has anybody already filed a Bugzilla report?
> >
> > Thanks
> > ---
> > Gilberto Nunes Ferreira
> >
> > (47) 3025-5907
> > (47) 99676-7530 - Whatsapp / Telegram
> >
> > Skype: gilberto.nunes36
> >
> >
> >
> >
> >
> > On Thu, 13 Feb 2020 at 19:53, Atila Vasconcelos <atilav at lightspeed.ca> wrote:
> >
> > > Hi,
> > >
> > > I had the same problem in the past, and it repeats once in a while... it's
> > > very random; I could not find any way to reproduce it.
> > >
> > > But just as it comes... it will go away.
> > >
> > > When you have almost forgotten about it, it will come again ;)
> > >
> > > I just learned to ignore it (and to do the backup manually when it fails).
> > >
> > > I see that in Proxmox 6.x it is less frequent (but it still happens once in
> > > a while).
> > >
> > >
> > > ABV
> > >
> > >
> > > On 2020-02-13 4:42 a.m., Gilberto Nunes wrote:
> > > > Yeah! Me too... This problem is pretty random... Let's see next week!
> > > > ---
> > > > Gilberto Nunes Ferreira
> > > >
> > > > (47) 3025-5907
> > > > (47) 99676-7530 - Whatsapp / Telegram
> > > >
> > > > Skype: gilberto.nunes36
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Thu, 13 Feb 2020 at 09:29, Eneko Lacunza <elacunza at binovo.es> wrote:
> > > >
> > > >> Maybe check the dm-15 permissions (ls -l /dev/dm-15), but I'm really out
> > > >> of ideas now, sorry!!! ;)
> > > >>
> > > >>> On 13/2/20 at 13:24, Gilberto Nunes wrote:
> > > >>> I can assure you... the disk is there!
> > > >>>
> > > >>> pvesm list local-lvm
> > > >>> local-lvm:vm-101-disk-0                raw 53687091200 101
> > > >>> local-lvm:vm-102-disk-0                raw 536870912000 102
> > > >>> local-lvm:vm-103-disk-0                raw 322122547200 103
> > > >>> local-lvm:vm-104-disk-0                raw 214748364800 104
> > > >>> local-lvm:vm-104-state-LUKPLAS         raw 17704157184 104
> > > >>> local-lvm:vm-105-disk-0                raw 751619276800 105
> > > >>> local-lvm:vm-106-disk-0                raw 161061273600 106
> > > >>> local-lvm:vm-107-disk-0                raw 536870912000 107
> > > >>> local-lvm:vm-108-disk-0                raw 214748364800 108
> > > >>> local-lvm:vm-109-disk-0                raw 107374182400 109
> > > >>> local-lvm:vm-110-disk-0                raw 107374182400 110
> > > >>> local-lvm:vm-111-disk-0                raw 107374182400 111
> > > >>> local-lvm:vm-112-disk-0                raw 128849018880 112
> > > >>> local-lvm:vm-113-disk-0                raw 53687091200 113
> > > >>> local-lvm:vm-113-state-antes_balloon   raw 17704157184 113
> > > >>> local-lvm:vm-114-disk-0                raw 128849018880 114
> > > >>> local-lvm:vm-115-disk-0                raw 107374182400 115
> > > >>> local-lvm:vm-115-disk-1                raw 53687091200 115
> > > >>> local-lvm:vm-116-disk-0                raw 107374182400 116
> > > >>> local-lvm:vm-117-disk-0                raw 107374182400 117
> > > >>> local-lvm:vm-118-disk-0                raw 107374182400 118
> > > >>> local-lvm:vm-119-disk-0                raw 26843545600 119
> > > >>> local-lvm:vm-121-disk-0                raw 107374182400 121
> > > >>> local-lvm:vm-122-disk-0                raw 107374182400 122
> > > >>> local-lvm:vm-123-disk-0                raw 161061273600 123
> > > >>> local-lvm:vm-124-disk-0                raw 107374182400 124
> > > >>> local-lvm:vm-125-disk-0                raw 53687091200 125
> > > >>> local-lvm:vm-126-disk-0                raw 32212254720 126
> > > >>> local-lvm:vm-127-disk-0                raw 53687091200 127
> > > >>> local-lvm:vm-129-disk-0                raw 21474836480 129
> > > >>>
> > > >>> ls -l /dev/pve/vm-110-disk-0
> > > >>> lrwxrwxrwx 1 root root 8 Nov 11 22:05 /dev/pve/vm-110-disk-0 -> ../dm-15
> > > >>>
> > > >>>
> > > >>> ---
> > > >>> Gilberto Nunes Ferreira
> > > >>>
> > > >>> (47) 3025-5907
> > > >>> (47) 99676-7530 - Whatsapp / Telegram
> > > >>>
> > > >>> Skype: gilberto.nunes36
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>> On Thu, 13 Feb 2020 at 09:19, Eneko Lacunza <elacunza at binovo.es> wrote:
> > > >>>
> > > >>>> What about:
> > > >>>>
> > > >>>> pvesm list local-lvm
> > > >>>> ls -l /dev/pve/vm-110-disk-0
> > > >>>>
> > > >>>> On 13/2/20 at 12:40, Gilberto Nunes wrote:
> > > >>>>> Quite strange to say the least
> > > >>>>>
> > > >>>>>
> > > >>>>> ls /dev/pve/*
> > > >>>>> /dev/pve/root                  /dev/pve/vm-109-disk-0
> > > >>>>> /dev/pve/vm-118-disk-0
> > > >>>>> /dev/pve/swap                  /dev/pve/vm-110-disk-0
> > > >>>>> /dev/pve/vm-119-disk-0
> > > >>>>> /dev/pve/vm-101-disk-0         /dev/pve/vm-111-disk-0
> > > >>>>> /dev/pve/vm-121-disk-0
> > > >>>>> /dev/pve/vm-102-disk-0         /dev/pve/vm-112-disk-0
> > > >>>>> /dev/pve/vm-122-disk-0
> > > >>>>> /dev/pve/vm-103-disk-0         /dev/pve/vm-113-disk-0
> > > >>>>> /dev/pve/vm-123-disk-0
> > > >>>>> /dev/pve/vm-104-disk-0         /dev/pve/vm-113-state-antes_balloon
> > > >>>>>     /dev/pve/vm-124-disk-0
> > > >>>>> /dev/pve/vm-104-state-LUKPLAS  /dev/pve/vm-114-disk-0
> > > >>>>> /dev/pve/vm-125-disk-0
> > > >>>>> /dev/pve/vm-105-disk-0         /dev/pve/vm-115-disk-0
> > > >>>>> /dev/pve/vm-126-disk-0
> > > >>>>> /dev/pve/vm-106-disk-0         /dev/pve/vm-115-disk-1
> > > >>>>> /dev/pve/vm-127-disk-0
> > > >>>>> /dev/pve/vm-107-disk-0         /dev/pve/vm-116-disk-0
> > > >>>>> /dev/pve/vm-129-disk-0
> > > >>>>> /dev/pve/vm-108-disk-0         /dev/pve/vm-117-disk-0
> > > >>>>>
> > > >>>>> ls /dev/mapper/
> > > >>>>> control               pve-vm--104--state--LUKPLAS
> > > >>>>>     pve-vm--115--disk--1
> > > >>>>> iscsi-backup          pve-vm--105--disk--0
> > > >>>>> pve-vm--116--disk--0
> > > >>>>> mpatha                pve-vm--106--disk--0
> > > >>>>> pve-vm--117--disk--0
> > > >>>>> pve-data              pve-vm--107--disk--0
> > > >>>>> pve-vm--118--disk--0
> > > >>>>> pve-data_tdata        pve-vm--108--disk--0
> > > >>>>> pve-vm--119--disk--0
> > > >>>>> pve-data_tmeta        pve-vm--109--disk--0
> > > >>>>> pve-vm--121--disk--0
> > > >>>>> pve-data-tpool        pve-vm--110--disk--0
> > > >>>>> pve-vm--122--disk--0
> > > >>>>> pve-root              pve-vm--111--disk--0
> > > >>>>> pve-vm--123--disk--0
> > > >>>>> pve-swap              pve-vm--112--disk--0
> > > >>>>> pve-vm--124--disk--0
> > > >>>>> pve-vm--101--disk--0  pve-vm--113--disk--0
> > > >>>>> pve-vm--125--disk--0
> > > >>>>> pve-vm--102--disk--0  pve-vm--113--state--antes_balloon
> > > >>>>>     pve-vm--126--disk--0
> > > >>>>> pve-vm--103--disk--0  pve-vm--114--disk--0
> > > >>>>> pve-vm--127--disk--0
> > > >>>>> pve-vm--104--disk--0  pve-vm--115--disk--0
> > > >>>>> pve-vm--129--disk--0
> > > >>>>>
> > > >>>>>
> > > >>>>> ---
> > > >>>>> Gilberto Nunes Ferreira
> > > >>>>>
> > > >>>>> (47) 3025-5907
> > > >>>>> (47) 99676-7530 - Whatsapp / Telegram
> > > >>>>>
> > > >>>>> Skype: gilberto.nunes36
> > > >>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>>
> > > >>>>> On Thu, 13 Feb 2020 at 08:38, Eneko Lacunza <elacunza at binovo.es> wrote:
> > > >>>>>
> > > >>>>>> It's quite strange, what about "ls /dev/pve/*"?
> > > >>>>>>
> > > >>>>>> On 13/2/20 at 12:18, Gilberto Nunes wrote:
> > > >>>>>>> n: Thu Feb 13 07:06:19 2020
> > > >>>>>>> a2web:~# lvs
> > > >>>>>>>   LV                               VG    Attr       LSize   Pool Origin        Data%  Meta%  Move Log Cpy%Sync Convert
> > > >>>>>>>   backup                           iscsi -wi-ao----   1.61t
> > > >>>>>>>   data                             pve   twi-aotz--   3.34t                    88.21  9.53
> > > >>>>>>>   root                             pve   -wi-ao----  96.00g
> > > >>>>>>>   snap_vm-104-disk-0_LUKPLAS       pve   Vri---tz-k 200.00g data vm-104-disk-0
> > > >>>>>>>   snap_vm-113-disk-0_antes_balloon pve   Vri---tz-k  50.00g data vm-113-disk-0
> > > >>>>>>>   swap                             pve   -wi-ao----   8.00g
> > > >>>>>>>   vm-101-disk-0                    pve   Vwi-aotz--  50.00g data               24.17
> > > >>>>>>>   vm-102-disk-0                    pve   Vwi-aotz-- 500.00g data               65.65
> > > >>>>>>>   vm-103-disk-0                    pve   Vwi-aotz-- 300.00g data               37.28
> > > >>>>>>>   vm-104-disk-0                    pve   Vwi-aotz-- 200.00g data               17.87
> > > >>>>>>>   vm-104-state-LUKPLAS             pve   Vwi-a-tz--  16.49g data               35.53
> > > >>>>>>>   vm-105-disk-0                    pve   Vwi-aotz-- 700.00g data               90.18
> > > >>>>>>>   vm-106-disk-0                    pve   Vwi-aotz-- 150.00g data               93.55
> > > >>>>>>>   vm-107-disk-0                    pve   Vwi-aotz-- 500.00g data               98.20
> > > >>>>>>>   vm-108-disk-0                    pve   Vwi-aotz-- 200.00g data               98.02
> > > >>>>>>>   vm-109-disk-0                    pve   Vwi-aotz-- 100.00g data               93.68
> > > >>>>>>>   vm-110-disk-0                    pve   Vwi-aotz-- 100.00g data               34.55
> > > >>>>>>>   vm-111-disk-0                    pve   Vwi-aotz-- 100.00g data               79.03
> > > >>>>>>>   vm-112-disk-0                    pve   Vwi-aotz-- 120.00g data               93.78
> > > >>>>>>>   vm-113-disk-0                    pve   Vwi-aotz--  50.00g data               65.42
> > > >>>>>>>   vm-113-state-antes_balloon       pve   Vwi-a-tz--  16.49g data               43.64
> > > >>>>>>>   vm-114-disk-0                    pve   Vwi-aotz-- 120.00g data               100.00
> > > >>>>>>>   vm-115-disk-0                    pve   Vwi-a-tz-- 100.00g data               70.28
> > > >>>>>>>   vm-115-disk-1                    pve   Vwi-a-tz--  50.00g data               0.00
> > > >>>>>>>   vm-116-disk-0                    pve   Vwi-aotz-- 100.00g data               26.34
> > > >>>>>>>   vm-117-disk-0                    pve   Vwi-aotz-- 100.00g data               100.00
> > > >>>>>>>   vm-118-disk-0                    pve   Vwi-aotz-- 100.00g data               100.00
> > > >>>>>>>   vm-119-disk-0                    pve   Vwi-aotz--  25.00g data               18.42
> > > >>>>>>>   vm-121-disk-0                    pve   Vwi-aotz-- 100.00g data               23.76
> > > >>>>>>>   vm-122-disk-0                    pve   Vwi-aotz-- 100.00g data               100.00
> > > >>>>>>>   vm-123-disk-0                    pve   Vwi-aotz-- 150.00g data               37.89
> > > >>>>>>>   vm-124-disk-0                    pve   Vwi-aotz-- 100.00g data               30.73
> > > >>>>>>>   vm-125-disk-0                    pve   Vwi-aotz--  50.00g data               9.02
> > > >>>>>>>   vm-126-disk-0                    pve   Vwi-aotz--  30.00g data               99.72
> > > >>>>>>>   vm-127-disk-0                    pve   Vwi-aotz--  50.00g data               10.79
> > > >>>>>>>   vm-129-disk-0                    pve   Vwi-aotz--  20.00g data               45.04
> > > >>>>>>>
> > > >>>>>>> cat /etc/pve/storage.cfg
> > > >>>>>>> dir: local
> > > >>>>>>>             path /var/lib/vz
> > > >>>>>>>             content backup,iso,vztmpl
> > > >>>>>>>
> > > >>>>>>> lvmthin: local-lvm
> > > >>>>>>>             thinpool data
> > > >>>>>>>             vgname pve
> > > >>>>>>>             content rootdir,images
> > > >>>>>>>
> > > >>>>>>> iscsi: iscsi
> > > >>>>>>>             portal some-portal
> > > >>>>>>>             target some-target
> > > >>>>>>>             content images
> > > >>>>>>>
> > > >>>>>>> lvm: iscsi-lvm
> > > >>>>>>>             vgname iscsi
> > > >>>>>>>             base iscsi:0.0.0.scsi-mpatha
> > > >>>>>>>             content rootdir,images
> > > >>>>>>>             shared 1
> > > >>>>>>>
> > > >>>>>>> dir: backup
> > > >>>>>>>             path /backup
> > > >>>>>>>             content images,rootdir,iso,backup
> > > >>>>>>>             maxfiles 3
> > > >>>>>>>             shared 0
> > > >>>>>>> ---
> > > >>>>>>> Gilberto Nunes Ferreira
> > > >>>>>>>
> > > >>>>>>> (47) 3025-5907
> > > >>>>>>> (47) 99676-7530 - Whatsapp / Telegram
> > > >>>>>>>
> > > >>>>>>> Skype: gilberto.nunes36
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>>
> > > >>>>>>> On Thu, 13 Feb 2020 at 08:11, Eneko Lacunza <elacunza at binovo.es> wrote:
> > > >>>>>>>
> > > >>>>>>>> Can you send the output for "lvs" and "cat
> > /etc/pve/storage.cfg"?
> > > >>>>>>>>
> > > >>>>>>>> On 13/2/20 at 11:13, Gilberto Nunes wrote:
> > > >>>>>>>>> HI all
> > > >>>>>>>>>
> > > >>>>>>>>> Still in trouble with this issue
> > > >>>>>>>>>
> > > >>>>>>>>> cat daemon.log | grep "Feb 12 22:10"
> > > >>>>>>>>> Feb 12 22:10:00 a2web systemd[1]: Starting Proxmox VE replication runner...
> > > >>>>>>>>> Feb 12 22:10:01 a2web systemd[1]: Started Proxmox VE replication runner.
> > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu)
> > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0'
> > > >>>>>>>>>
> > > >>>>>>>>> syslog
> > > >>>>>>>>> Feb 12 22:10:05 a2web vzdump[17390]: INFO: Starting Backup of VM 110 (qemu)
> > > >>>>>>>>> Feb 12 22:10:06 a2web qm[18860]: <root at pam> update VM 110: -lock backup
> > > >>>>>>>>> Feb 12 22:10:28 a2web vzdump[17390]: ERROR: Backup of VM 110 failed - no such volume 'local-lvm:vm-110-disk-0'
> > > >>>>>>>>>
> > > >>>>>>>>> pveversion
> > > >>>>>>>>> pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-12-pve)
> > > >>>>>>>>>
> > > >>>>>>>>> proxmox-ve: 5.4-2 (running kernel: 4.15.18-12-pve)
> > > >>>>>>>>> pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
> > > >>>>>>>>> pve-kernel-4.15: 5.4-12
> > > >>>>>>>>> pve-kernel-4.15.18-24-pve: 4.15.18-52
> > > >>>>>>>>> pve-kernel-4.15.18-12-pve: 4.15.18-36
> > > >>>>>>>>> corosync: 2.4.4-pve1
> > > >>>>>>>>> criu: 2.11.1-1~bpo90
> > > >>>>>>>>> glusterfs-client: 3.8.8-1
> > > >>>>>>>>> ksm-control-daemon: 1.2-2
> > > >>>>>>>>> libjs-extjs: 6.0.1-2
> > > >>>>>>>>> libpve-access-control: 5.1-12
> > > >>>>>>>>> libpve-apiclient-perl: 2.0-5
> > > >>>>>>>>> libpve-common-perl: 5.0-56
> > > >>>>>>>>> libpve-guest-common-perl: 2.0-20
> > > >>>>>>>>> libpve-http-server-perl: 2.0-14
> > > >>>>>>>>> libpve-storage-perl: 5.0-44
> > > >>>>>>>>> libqb0: 1.0.3-1~bpo9
> > > >>>>>>>>> lvm2: 2.02.168-pve6
> > > >>>>>>>>> lxc-pve: 3.1.0-7
> > > >>>>>>>>> lxcfs: 3.0.3-pve1
> > > >>>>>>>>> novnc-pve: 1.0.0-3
> > > >>>>>>>>> proxmox-widget-toolkit: 1.0-28
> > > >>>>>>>>> pve-cluster: 5.0-38
> > > >>>>>>>>> pve-container: 2.0-41
> > > >>>>>>>>> pve-docs: 5.4-2
> > > >>>>>>>>> pve-edk2-firmware: 1.20190312-1
> > > >>>>>>>>> pve-firewall: 3.0-22
> > > >>>>>>>>> pve-firmware: 2.0-7
> > > >>>>>>>>> pve-ha-manager: 2.0-9
> > > >>>>>>>>> pve-i18n: 1.1-4
> > > >>>>>>>>> pve-libspice-server1: 0.14.1-2
> > > >>>>>>>>> pve-qemu-kvm: 3.0.1-4
> > > >>>>>>>>> pve-xtermjs: 3.12.0-1
> > > >>>>>>>>> qemu-server: 5.0-55
> > > >>>>>>>>> smartmontools: 6.5+svn4324-1
> > > >>>>>>>>> spiceterm: 3.0-5
> > > >>>>>>>>> vncterm: 1.5-3
> > > >>>>>>>>> zfsutils-linux: 0.7.13-pve1~bpo2
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> Can anyone help??? Should I upgrade the server to 6.x??
> > > >>>>>>>>>
> > > >>>>>>>>> Thanks
> > > >>>>>>>>>
> > > >>>>>>>>> ---
> > > >>>>>>>>> Gilberto Nunes Ferreira
> > > >>>>>>>>>
> > > >>>>>>>>> (47) 3025-5907
> > > >>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram
> > > >>>>>>>>>
> > > >>>>>>>>> Skype: gilberto.nunes36
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>>
> > > >>>>>>>>> On Thu, 30 Jan 2020 at 10:10, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
> > > >>>>>>>>>
> > > >>>>>>>>>> Hi there
> > > >>>>>>>>>>
> > > >>>>>>>>>> I got a strange error last night. vzdump complained that the
> > > >>>>>>>>>> disk (an LVM volume, in this case) does not exist, but the
> > > >>>>>>>>>> volume does exist, indeed!
> > > >>>>>>>>>> In the morning I did the backup manually and it worked fine...
> > > >>>>>>>>>> Any advice?
> > > >>>>>>>>>>
> > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: Starting Backup of VM 112 (qemu)
> > > >>>>>>>>>> 112: 2020-01-29 22:20:02 INFO: status = running
> > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: update VM 112: -lock backup
> > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: VM Name: cliente-V-112-IP-165
> > > >>>>>>>>>> 112: 2020-01-29 22:20:03 INFO: include disk 'scsi0' 'local-lvm:vm-112-disk-0' 120G
> > > >>>>>>>>>> 112: 2020-01-29 22:20:23 ERROR: Backup of VM 112 failed - no such volume 'local-lvm:vm-112-disk-0'
> > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: Starting Backup of VM 116 (qemu)
> > > >>>>>>>>>> 116: 2020-01-29 22:20:23 INFO: status = running
> > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: update VM 116: -lock backup
> > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: VM Name: cliente-V-IP-162
> > > >>>>>>>>>> 116: 2020-01-29 22:20:24 INFO: include disk 'scsi0' 'local-lvm:vm-116-disk-0' 100G
> > > >>>>>>>>>> 116: 2020-01-29 22:20:49 ERROR: Backup of VM 116 failed - no such volume 'local-lvm:vm-116-disk-0'
> > > >>>>>>>>>> ---
> > > >>>>>>>>>> Gilberto Nunes Ferreira
> > > >>>>>>>>>>
> > > >>>>>>>>>> (47) 3025-5907
> > > >>>>>>>>>> (47) 99676-7530 - Whatsapp / Telegram
> > > >>>>>>>>>>
> > > >>>>>>>>>> Skype: gilberto.nunes36
> > > >>>>>>>>>>
> > > >>>>>>>>>>
> > > >>>>>>>>>>
> > > >>>>>>>>>>
> > > >>>>>>>>> _______________________________________________
> > > >>>>>>>>> pve-user mailing list
> > > >>>>>>>>> pve-user at pve.proxmox.com
> > > >>>>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> > > >>>>>>>> --
> > > >>>>>>>> Zuzendari Teknikoa / Director Técnico
> > > >>>>>>>> Binovo IT Human Project, S.L.
> > > >>>>>>>> Telf. 943569206
> > > >>>>>>>> Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun
> > > (Gipuzkoa)
> > > >>>>>>>> www.binovo.es
> > > >>>>>>>>
> > > >>>>>> --
> > > >>>>>> Zuzendari Teknikoa / Director Técnico
> > > >>>>>> Binovo IT Human Project, S.L.
> > > >>>>>> Telf. 943569206
> > > >>>>>> Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun
> > (Gipuzkoa)
> > > >>>>>> www.binovo.es
> > > >>>>>>
> > > >>>> --
> > > >>>> Zuzendari Teknikoa / Director Técnico
> > > >>>> Binovo IT Human Project, S.L.
> > > >>>> Telf. 943569206
> > > >>>> Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun
> (Gipuzkoa)
> > > >>>> www.binovo.es
> > > >>>>
> > > >>
> > > >> --
> > > >> Zuzendari Teknikoa / Director Técnico
> > > >> Binovo IT Human Project, S.L.
> > > >> Telf. 943569206
> > > >> Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
> > > >> www.binovo.es
> > > >>
> > >
> >
>

