[pve-devel] PVE 4 and problem with drbdmanage

Gilberto Nunes gilberto.nunes32 at gmail.com
Fri Sep 11 16:34:20 CEST 2015


I performed a shutdown and a startup on both servers, and on PVE02 I get this:

Sep 11 11:32:12 pve02 kernel: [ 3411.819799] drbd .drbdctrl/0 drbd0 pve01:
uuid_compare()=-100 by rule 100
Sep 11 11:32:12 pve02 kernel: [ 3411.819810] drbd .drbdctrl/0 drbd0 pve01:
helper command: /sbin/drbdadm initial-split-brain
Sep 11 11:32:12 pve02 kernel: [ 3411.820627] drbd .drbdctrl/0 drbd0 pve01:
helper command: /sbin/drbdadm initial-split-brain exit code 0 (0x0)
Sep 11 11:32:12 pve02 kernel: [ 3411.820647] drbd .drbdctrl/0 drbd0:
Split-Brain detected but unresolved, dropping connection!
Sep 11 11:32:12 pve02 kernel: [ 3411.820691] drbd .drbdctrl/0 drbd0 pve01:
helper command: /sbin/drbdadm split-brain
Sep 11 11:32:12 pve02 kernel: [ 3411.821431] drbd .drbdctrl/0 drbd0 pve01:
helper command: /sbin/drbdadm split-brain exit code 0 (0x0)
Sep 11 11:32:12 pve02 kernel: [ 3411.821471] drbd .drbdctrl pve01: conn(
Connected -> Disconnecting ) peer( Secondary -> Unknown )
Sep 11 11:32:12 pve02 kernel: [ 3411.821495] drbd .drbdctrl pve01: error
receiving P_STATE, e: -5 l: 0!
Sep 11 11:32:12 pve02 kernel: [ 3411.821527] drbd .drbdctrl pve01:
ack_receiver terminated
Sep 11 11:32:12 pve02 kernel: [ 3411.821528] drbd .drbdctrl pve01:
Terminating ack_recv thread
Sep 11 11:32:12 pve02 kernel: [ 3411.839794] drbd .drbdctrl pve01:
Connection closed
Sep 11 11:32:12 pve02 kernel: [ 3411.839841] drbd .drbdctrl pve01: conn(
Disconnecting -> StandAlone )
Sep 11 11:32:12 pve02 kernel: [ 3411.839854] drbd .drbdctrl pve01:
Terminating receiver thread
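
The key line above is "Split-Brain detected but unresolved, dropping connection!" on the .drbdctrl control volume. For reference, a typical manual split-brain recovery looks roughly like the sketch below. It is only a sketch: it assumes pve02 is the node whose .drbdctrl data should be discarded (verify that first with "drbdadm status" and the UUIDs!), and the DRY_RUN guard is an addition here so the commands print instead of executing.

```shell
# Sketch of manual DRBD split-brain recovery for the .drbdctrl resource.
# ASSUMPTION: pve02 holds the data to discard; confirm before running for
# real. DRY_RUN=1 (the default) only prints each command.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# On the victim node (pve02): drop its copy of the control volume.
run drbdadm disconnect .drbdctrl
run drbdadm secondary .drbdctrl
run drbdadm connect --discard-my-data .drbdctrl

# On the survivor (pve01): reconnect if it went StandAlone.
run drbdadm connect .drbdctrl
```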


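The quoted log below also shows `drbdmeta: unrecognized option '--max-peers=7'`. Since --max-peers is a DRBD 9 option, that error usually means the installed drbd-utils predate the DRBD 9 kernel module. A quick way to compare the two versions (a sketch: the drbd_major helper is added here for illustration, and the exact drbdadm --version output format can differ between releases):

```shell
# Sketch: check for a DRBD kernel/userland version mismatch, the usual cause
# of "drbdmeta: unrecognized option '--max-peers=7'" (a DRBD 9-only option).

# Helper (illustrative): pull the major version out of a
# "version: X.Y.Z ..." style line.
drbd_major() { printf '%s\n' "$1" | sed -n 's/.*version: *\([0-9][0-9]*\)\..*/\1/p'; }

if [ -r /proc/drbd ]; then
    # Kernel module version, first line is e.g. "version: 9.0.0 (api:2/proto:86-110)"
    echo "kernel DRBD major: $(drbd_major "$(head -1 /proc/drbd)")"
fi

# Userland version; drbd-utils built against DRBD 8 will not know --max-peers.
command -v drbdadm >/dev/null && drbdadm --version || echo "drbdadm not installed"
```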

2015-09-11 11:24 GMT-03:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> Yes... I waited for that... It worked fine until about 2 or 3 days ago...
> Since yesterday, I have been getting this odd behavior...
>
> Right now, I tried to create another VM, and got this on PVE01:
>
> pve01:/var/log# tail -f /var/log/syslog | grep drbd
> Sep 11 11:21:44 pve01 kernel: [ 2838.206262] drbd .drbdctrl: Preparing
> cluster-wide state change 3187852087 (0->-1 3/1)
> Sep 11 11:21:44 pve01 kernel: [ 2838.206265] drbd .drbdctrl: Committing
> cluster-wide state change 3187852087 (0ms)
> Sep 11 11:21:44 pve01 kernel: [ 2838.206274] drbd .drbdctrl: role(
> Secondary -> Primary )
> Sep 11 11:21:44 pve01 kernel: [ 2838.268037] drbd .drbdctrl: role( Primary
> -> Secondary )
> Sep 11 11:21:44 pve01 kernel: [ 2838.276870] drbd .drbdctrl: Preparing
> cluster-wide state change 3555025472 (0->-1 3/1)
> Sep 11 11:21:44 pve01 kernel: [ 2838.276872] drbd .drbdctrl: Committing
> cluster-wide state change 3555025472 (0ms)
> Sep 11 11:21:44 pve01 kernel: [ 2838.276880] drbd .drbdctrl: role(
> Secondary -> Primary )
> Sep 11 11:21:44 pve01 kernel: [ 2838.302317] drbd .drbdctrl: role( Primary
> -> Secondary )
> Sep 11 11:21:44 pve01 kernel: [ 2838.310209] drbd .drbdctrl: Preparing
> cluster-wide state change 2077508299 (0->-1 3/1)
> Sep 11 11:21:44 pve01 kernel: [ 2838.310211] drbd .drbdctrl: Committing
> cluster-wide state change 2077508299 (0ms)
> Sep 11 11:21:44 pve01 kernel: [ 2838.310219] drbd .drbdctrl: role(
> Secondary -> Primary )
> Sep 11 11:21:44 pve01 kernel: [ 2838.335623] drbd .drbdctrl: role( Primary
> -> Secondary )
> Sep 11 11:21:44 pve01 kernel: [ 2838.376006] drbd .drbdctrl: Preparing
> cluster-wide state change 4247485156 (0->-1 3/1)
> Sep 11 11:21:44 pve01 kernel: [ 2838.376008] drbd .drbdctrl: Committing
> cluster-wide state change 4247485156 (0ms)
> Sep 11 11:21:44 pve01 kernel: [ 2838.376017] drbd .drbdctrl: role(
> Secondary -> Primary )
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: Failed to find logical
> volume "drbdpool/vm-103-disk-1_00"
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: Rounding up size to full
> physical extent 32.01 GiB
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: Logical volume
> "vm-103-disk-1_00" created.
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: drbdmeta: unrecognized
> option '--max-peers=7'
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: NOT initializing bitmap
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: initializing activity
> log
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: Writing meta data...
> Sep 11 11:21:45 pve01 org.drbd.drbdmanaged[1309]: New drbd meta data block
> successfully created.
> Sep 11 11:21:45 pve01 kernel: [ 2839.476032] drbd vm-103-disk-1: Starting
> worker thread (from drbdsetup [3813])
> Sep 11 11:21:45 pve01 kernel: [ 2839.516041] drbd vm-103-disk-1/0 drbd10:
> disk( Diskless -> Attaching )
> Sep 11 11:21:45 pve01 kernel: [ 2839.516052] drbd vm-103-disk-1/0 drbd10:
> Maximum number of peer devices = 7
> Sep 11 11:21:45 pve01 kernel: [ 2839.516184] drbd vm-103-disk-1: Method to
> ensure write ordering: flush
> Sep 11 11:21:45 pve01 kernel: [ 2839.516187] drbd vm-103-disk-1/0 drbd10:
> drbd_bm_resize called with capacity == 67108864
> Sep 11 11:21:45 pve01 kernel: [ 2839.522395] drbd vm-103-disk-1/0 drbd10:
> resync bitmap: bits=8388608 words=917504 pages=1792
> Sep 11 11:21:45 pve01 kernel: [ 2839.530295] drbd vm-103-disk-1/0 drbd10:
> recounting of set bits took additional 8ms
> Sep 11 11:21:45 pve01 kernel: [ 2839.530305] drbd vm-103-disk-1/0 drbd10:
> disk( Attaching -> Inconsistent )
> Sep 11 11:21:45 pve01 kernel: [ 2839.530309] drbd vm-103-disk-1/0 drbd10:
> attached to current UUID: 0000000000000004
> Sep 11 11:21:45 pve01 kernel: [ 2839.537821] drbd vm-103-disk-1 pve02:
> Starting sender thread (from drbdsetup [3826])
> Sep 11 11:21:45 pve01 kernel: [ 2839.539000] drbd vm-103-disk-1 pve02:
> conn( StandAlone -> Unconnected )
> Sep 11 11:21:45 pve01 kernel: [ 2839.541112] drbd vm-103-disk-1: Preparing
> cluster-wide state change 839975239 (1->-1 7683/4609)
> Sep 11 11:21:45 pve01 kernel: [ 2839.541114] drbd vm-103-disk-1:
> Committing cluster-wide state change 839975239 (0ms)
> Sep 11 11:21:45 pve01 kernel: [ 2839.541123] drbd vm-103-disk-1: role(
> Secondary -> Primary )
> Sep 11 11:21:45 pve01 kernel: [ 2839.541124] drbd vm-103-disk-1/0 drbd10:
> disk( Inconsistent -> UpToDate )
> Sep 11 11:21:45 pve01 kernel: [ 2839.545440] drbd vm-103-disk-1 pve02:
> Starting receiver thread (from drbd_w_vm-103-d [3815])
> Sep 11 11:21:45 pve01 kernel: [ 2839.545509] drbd vm-103-disk-1/0 drbd10:
> size = 32 GB (33554432 KB)
> Sep 11 11:21:45 pve01 kernel: [ 2839.553772] drbd vm-103-disk-1: Forced to
> consider local data as UpToDate!
> Sep 11 11:21:45 pve01 kernel: [ 2839.553778] drbd vm-103-disk-1/0 drbd10:
> new current UUID: F485EDDF7BDE1589 weak: FFFFFFFFFFFFFFFD
> Sep 11 11:21:45 pve01 kernel: [ 2839.562119] drbd vm-103-disk-1 pve02:
> conn( Unconnected -> Connecting )
> Sep 11 11:21:45 pve01 kernel: [ 2839.563982] drbd vm-103-disk-1: role(
> Primary -> Secondary )
> Sep 11 11:21:46 pve01 kernel: [ 2839.726144] drbd .drbdctrl: role( Primary
> -> Secondary )
>
> But there is nothing in the syslog on PVE02...
>
> 2015-09-11 11:17 GMT-03:00 Dietmar Maurer <dietmar at proxmox.com>:
>
>> > As you can see, disk 101 exists on PVE01 but not on PVE02.
>> > How can I force drbdmanage to sync, or whatever is needed, so that disk
>> > 101 shows up on both servers?
>>
>> Is that reproducible? Did you wait until initial sync was finished?
>> Any hints in /var/log/syslog?
>>
>>
>
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>

