<div dir="ltr"><div>Also, and more seriously, its seems to bolix the cluster - all nodes start showing with red dots and the VM lists doesn't show any names. So far the only fix I have is to reboot every node.<br><br></div>Oddly it only seems that pve services are effected. ceph etc keeps working, I can still ssh onto each node.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 22 October 2015 at 21:56, Lindsay Mathieson <span dir="ltr"><<a href="mailto:lindsay.mathieson@gmail.com" target="_blank">lindsay.mathieson@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div><div>Proxmox 3.4 cluster with 3.10<br><br></div>I'm currently trying to test gluster 3.7.5 using 3 debian 8.2 VM's. The VM's are setup with 3.10.0-13-pve kernel<br></div>- kernel 4.2<br></div>- Gluster 3.7.5<br></div>- A replica 3 sharded datastore<br></div><div>- 1 virtual nic per virtual gluster node<br></div><div><br></div>The gluster datastore is exposed via gluster NFS to prxomox storage so I can test running VM's off it.<br><br></div>But whenever I try to move a VM disk to the gluster storage (via the NFS share) the nic on the VM just stops responding and its IP drops off the network. Sometime its only happens after 23GB, once got as far as 30GB.<br><br> However the VM is still there and I can use the novnc console to access it and logon. Its still thinks its nic is working but it can't ping anything.<br><br><br></div>I've tried with VIRTIO and e1000.<br><br></div>Any suggets as to what to look at next?<span class="HOEnZb"><font color="#888888"><br><div><div><div><div><br clear="all"><div><div><div><div><div><div><br>-- <br><div>Lindsay</div>
--
Lindsay