[PVE-User] Different IPs for iSCSI (+zfs) within cluster

Mikhail m at plus-plus.su
Tue Mar 8 09:06:11 CET 2016


Hello,

After a long sleep, I re-configured the storage server's network cards
as a bridge, and voila, it now works as expected. I must have missed
something last night.
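
For reference, the bridge stanza in /etc/network/interfaces on the
storage server now looks roughly like this (a sketch, assuming Debian's
ifupdown with bridge-utils; the stp/fd options are just my usual
defaults, nothing specific to this setup):

  auto br0
  iface br0 inet static
          address 192.168.4.1
          netmask 255.255.255.0
          # the three cross-over links to the PVE nodes; the member
          # ports themselves carry no IP configuration
          bridge_ports eth1 eth2 eth3
          bridge_stp off
          bridge_fd 0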

Cheers!

On 03/08/2016 10:32 AM, Alwin Antreich wrote:
> Hi list,
> 
> why do you use /30 networks? The bridge acts as a switch, so you should be able to reach all of your connected hosts on the same network.
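> 
> A quick sanity check on the storage box would be something along these
> lines (standard bridge-utils/iproute2 commands; a sketch, not output
> from your system):
> 
>   brctl show br0      # eth1, eth2 and eth3 should all be listed as ports
>   ip link show        # br0 and every member NIC need to be state UP
>   ping 192.168.4.10   # node1 should answer through the bridge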
> 
> Regards,
> Alwin
> 
> On 03/07/2016 11:49 PM, Mikhail wrote:
>> Hello Proxmox users!
>>
>> I'm in the process of setting up a 3-node HA cluster with shared
>> ZFS-over-iSCSI storage. I've run into a problem: I cannot properly set
>> up my iSCSI storage. The setup is as follows:
>>
>> 1) The storage is a dedicated Linux server with a ZFS filesystem. The
>> server has 4 network cards. One card is used for administration/general
>> networking; the other three are cross-over connected, one to each of
>> the 3 Proxmox nodes.
>>
>> 2) Three identical servers running the same latest PVE 4.1 community
>> version. Each Proxmox node is cross-over connected with a Cat-6 UTP
>> cable to one of the storage server's network cards. In other words,
>> node1 is connected to eth1 on the storage, node2 to eth2, and node3
>> to eth3.
>>
>> My cluster is already configured - all nodes are added and all is set.
>> The storage server runs a ZFS filesystem.
>>
>> Now I'm trying to set up shared storage on my cluster. The first thing
>> I realized is that I have to configure proper networking between the
>> nodes and the storage server, because each node has a dedicated
>> connection to one of the storage server's network cards, as described
>> above. On the storage system I created a bridge interface "br0", added
>> the eth1, eth2, and eth3 network cards to the bridge, and assigned br0
>> the IP address 192.168.4.1/24. Next, on each node I configured the
>> network card that is connected to the storage server: eth1
>> 192.168.4.10 (.11 and .12 on the other nodes). I expected this to
>> work - all nodes reaching the storage server at 192.168.4.1 - but it
>> did not: the nodes cannot see 192.168.4.1 (storage) in this setup.
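>>
>> The node side is nothing fancy - the storage-facing NIC on each node
>> was configured roughly like this (node1 shown; .11/.12 on the others):
>>
>>   auto eth1
>>   iface eth1 inet static
>>           address 192.168.4.10
>>           netmask 255.255.255.0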
>>
>>
>> Then I decided I could use a hostname instead of an IP address as the
>> storage server target. So I re-configured the networking on the
>> storage server - each NIC was assigned a /30 address, and each node
>> was assigned an address from the same /30, so that storage and node
>> could reach each other over their own point-to-point subnet. This /30
>> setup did the trick, and I could now reach the storage from each node.
>> Next, I set up hostname mappings in /etc/hosts on every node:
>>
>> on node1 - /etc/hosts: 192.168.4.1 storage
>> on node2 - /etc/hosts: 192.168.4.5 storage
>> and so on.
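>>
>> Spelled out, the /30 scheme looks like this (the first two storage
>> addresses are the ones in the hosts entries above; the node addresses
>> and the third link simply follow the same pattern, so treat those as
>> my reconstruction):
>>
>>   storage eth1 192.168.4.1/30  <->  node1 192.168.4.2/30
>>   storage eth2 192.168.4.5/30  <->  node2 192.168.4.6/30
>>   storage eth3 192.168.4.9/30  <->  node3 192.168.4.10/30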
>>
>> After that I went to the Proxmox GUI to add the shared ZFS-over-iSCSI
>> storage. Before entering the target name, I configured SSH access from
>> every node using the key name "storage"
>> (/etc/pve/priv/zfs/storage_id_rsa). As the target name I entered
>> "storage", assuming that each node would use the "storage" name
>> instead of an IP and would therefore reach the storage from its own
>> /30 subnet. Of course this failed, because Proxmox executed the "zfs"
>> commands on the storage server using the IP address, not the hostname
>> as I had expected. The whole idea fell apart because node1 can reach
>> 192.168.4.1 but node2 cannot, given the setup above.
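>>
>> For reference, the SSH part itself works; it was set up along these
>> lines (the key path is the one Proxmox expects for a ZFS-over-iSCSI
>> portal named "storage"; ssh-copy-id to root is simply shorthand for
>> however you install the public key on the storage box):
>>
>>   ssh-keygen -f /etc/pve/priv/zfs/storage_id_rsa
>>   ssh-copy-id -i /etc/pve/priv/zfs/storage_id_rsa.pub root@storage
>>   # run from every node, so each one has accepted the host key:
>>   ssh -i /etc/pve/priv/zfs/storage_id_rsa root@storage zfs list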
>>
>> Sorry for making this story long, but hopefully I have made it clear.
>> How can I overcome this problem? Perhaps I'm doing something wrong
>> with the bridge setup - I thought that bridging the interfaces on the
>> storage server would solve this issue, but it is not working.
>>
>> Thanks!



