[PVE-User] Creating Ceph Cluster
Markus Dellermann
li-mli at gmx.net
Sun Feb 28 13:46:35 CET 2016
On Friday, 26 February 2016, 23:28:17 CET, Vadim Bulst wrote:
> Dear List,
>
> after two years of running a four-node PVE cluster with Ceph RBD, it
> was time for PVE major release 4. We added a fifth node with the same
> PVE version (3.4) and a local ZFS volume, migrated all important VMs
> to this node and its ZFS storage, and then removed all old nodes from
> the existing cluster.
>
> Then we dist-upgraded this fifth (new) node to Debian Jessie and PVE
> 4.1. We reinstalled the old nodes from scratch with PVE 4.1 and joined
> them to the updated fifth node.
>
> Everything was fine so far; installation and upgrade were pretty
> straightforward.
>
> PVE-Cluster is up and running again.
>
> Next we went on to install the Ceph components: we added the Hammer
> repo from ceph.com and installed all the packages mentioned in the
> wiki.
>
> pveceph init --network 172.18.117.0/24 was successfully executed.
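If I remember correctly, pveceph init mainly writes that network into
/etc/pve/ceph.conf, roughly like this excerpt (exact keys may differ
between PVE versions):

  [global]
      cluster network = 172.18.117.0/24
      public network = 172.18.117.0/24

So a successful init only proves the config was written, not that the
node actually owns an address in that network.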
>
> Then we tried to add the first Ceph monitor on one of the newly
> installed nodes, with no luck. We executed the following command:
>
> pveceph createmon
>
> The error message is:
>
> unable to find local address within network '172.18.117.0/24'
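The message literally means that pveceph found no local IP inside
172.18.117.0/24 on that node at the moment it ran. A quick way to
verify this on the failing node:

  # is any interface up with an address in the Ceph network?
  ip -4 addr show | grep '172\.18\.117\.'

  # and specifically the OVS internal port that should carry it:
  ip addr show vlan888

If neither shows an inet line, createmon has nothing to bind to and
fails exactly like this.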
>
> The only node on which this worked is the one upgraded from PVE 3.4.
>
> This is our network configuration:
>
> # Loopback interface
> auto lo
> iface lo inet loopback
>
> # Bond eth0 and eth1 together
> allow-vmbr1 bond0
> iface bond0 inet manual
> ovs_bridge vmbr1
> ovs_type OVSBond
> ovs_bonds eth1 eth0
> pre-up ( ifconfig eth0 mtu 1500 && ifconfig eth1 mtu 1500 )
> ovs_options bond_mode=balance-slb
> mtu 1500
>
> # Bridge for our bond and vlan virtual interfaces (our VMs will
> # also attach to this bridge)
> auto vmbr1
> allow-ovs vmbr1
> iface vmbr1 inet manual
> ovs_type OVSBridge
> ovs_ports bond0 vlan888
> mtu 1500
>
> allow-vmbr1 vlan888
> iface vlan888 inet static
> ovs_type OVSIntPort
> ovs_bridge vmbr1
> ovs_options tag=888
> ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
> address 172.18.117.93
> netmask 255.255.255.0
> gateway 172.18.117.254
> mtu 1500
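One thing about this config: vlan888 is an OVSIntPort, so the address
172.18.117.93 only exists once Open vSwitch has created the port and
ifupdown has configured it. Worth checking (ovs-vsctl comes with
openvswitch-switch):

  # vmbr1, bond0 and vlan888 should all be listed here
  ovs-vsctl show

  # if vlan888 is missing, try bringing it up by hand; the allow-vmbr1
  # stanza means it belongs to the vmbr1 allow group
  ifup --allow=vmbr1 vlan888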
>
>
> What went wrong? Does anybody have an idea?
>
> Cheers,
>
> Vadim
>
Hi Vadim,
I think you should look at the output of
"systemctl status networking.service -l".
Maybe there is a problem in your /etc/network/interfaces...
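To dig a bit deeper (standard commands on a Jessie/PVE 4.1 install):

  # the full boot-time log of the networking unit
  journalctl -u networking.service

  # try bringing the OVS bridge, and with it vlan888, up by hand
  ifup vmbr1

If ifup throws an error here, that is most likely the real problem;
once vlan888 carries its address, pveceph createmon should find it.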
Markus
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user