<div dir="ltr"><div><div>Answering my own question: yes! There is a bug related to NFS on kernel 3.13, which is the default on Ubuntu 14.04...<br></div>I will try with kernel 3.19...<br></div>Sorry for sending this to the list... <br></div><div class="gmail_extra"><br><div class="gmail_quote">2015-10-26 13:58 GMT-02:00 Gilberto Nunes <span dir="ltr"><<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>UPDATE<br><br></div>I tried using proto=udp and soft, and got a kernel oops on the NFS server side:<br><br>BUG: unable to handle kernel NULL pointer dereference at 0000000000000008<br>[ 7465.127924] IP: [<ffffffff8161d84d>] skb_copy_and_csum_datagram_iovec+0x2d/0x110<br>[ 7465.128154] PGD 0 <br>[ 7465.128224] Oops: 0000 [#1] SMP <br>[ 7465.128336] Modules linked in: ocfs2 quota_tree ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs drbd lru_cache nfsd auth_rpcgss nfs_acl nfs lockd sunrpc fscache ipmi_devintf gpio_ich dcdbas x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd lpc_ich mei_me mei shpchp wmi ipmi_si acpi_power_meter lp mac_hid parport xfs libcrc32c hid_generic usbhid hid igb i2c_algo_bit tg3 dca ahci ptp megaraid_sas libahci pps_core<br>[ 7465.130171] CPU: 8 PID: 4602 Comm: nfsd Not tainted 3.13.0-66-generic #108-Ubuntu<br>[ 7465.130407] Hardware name: Dell Inc.
PowerEdge R430/03XKDV, BIOS 1.2.6 06/08/2015<br>[ 7465.130648] task: ffff88046410b000 ti: ffff88044a78a000 task.ti: ffff88044a78a000<br>[ 7465.130889] RIP: 0010:[<ffffffff8161d84d>] [<ffffffff8161d84d>] skb_copy_and_csum_datagram_iovec+0x2d/0x110<br>[ 7465.131213] RSP: 0018:ffff88044a78bbc0 EFLAGS: 00010206<br>[ 7465.131385] RAX: 0000000000000000 RBX: ffff8804607c2300 RCX: 00000000000000ec<br>[ 7465.131613] RDX: 0000000000000000 RSI: 0000000000000c7c RDI: ffff880464c0e600<br>[ 7465.131844] RBP: ffff88044a78bbf8 R08: 0000000000000000 R09: 00000000aea75158<br>[ 7465.132074] R10: 00000000000000c0 R11: 0000000000000003 R12: 0000000000000008<br>[ 7465.132304] R13: ffff880464c0e600 R14: 0000000000000c74 R15: ffff880464c0e600<br>[ 7465.132535] FS: 0000000000000000(0000) GS:ffff88046e500000(0000) knlGS:0000000000000000<br>[ 7465.132798] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033<br>[ 7465.132981] CR2: 0000000000000008 CR3: 0000000001c0e000 CR4: 00000000001407e0<br>[ 7465.133211] Stack:<br>[ 7465.133275] ffffffff81616f66 ffffffff81616fb0 ffff8804607c2300 ffff88044a78bdf8<br>[ 7465.133529] 0000000000000000 0000000000000c74 ffff880464c0e600 ffff88044a78bc60<br>[ 7465.133780] ffffffff8168b2ec ffff88044a430028 ffff8804607c2370 0000000200000000<br>[ 7465.134032] Call Trace:<br>[ 7465.134109] [<ffffffff81616f66>] ? skb_checksum+0x26/0x30<br>[ 7465.134284] [<ffffffff81616fb0>] ? skb_push+0x40/0x40<br>[ 7465.134451] [<ffffffff8168b2ec>] udp_recvmsg+0x1dc/0x380<br>[ 7465.134624] [<ffffffff8169650c>] inet_recvmsg+0x6c/0x80<br>[ 7465.134790] [<ffffffff8160f0aa>] sock_recvmsg+0x9a/0xd0<br>[ 7465.134956] [<ffffffff8107576a>] ? del_timer_sync+0x4a/0x60<br>[ 7465.135131] [<ffffffff8172762d>] ? schedule_timeout+0x17d/0x2d0<br>[ 7465.135318] [<ffffffff8160f11a>] kernel_recvmsg+0x3a/0x50<br>[ 7465.135497] [<ffffffffa02bfd29>] svc_udp_recvfrom+0x89/0x440 [sunrpc]<br>[ 7465.135699] [<ffffffff8172c01b>] ? _raw_spin_unlock_bh+0x1b/0x40<br>[ 7465.135902] [<ffffffffa02cccc8>] ? 
svc_get_next_xprt+0xd8/0x310 [sunrpc]<br>[ 7465.136120] [<ffffffffa02cd450>] svc_recv+0x4a0/0x5c0 [sunrpc]<br>[ 7465.136307] [<ffffffffa041470d>] nfsd+0xad/0x130 [nfsd]<br>[ 7465.136476] [<ffffffffa0414660>] ? nfsd_destroy+0x80/0x80 [nfsd]<br>[ 7465.136673] [<ffffffff8108b7d2>] kthread+0xd2/0xf0<br>[ 7465.136829] [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0<br>[ 7465.137039] [<ffffffff81734ba8>] ret_from_fork+0x58/0x90<br>[ 7465.137212] [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0<br>[ 7465.145470] Code: 44 00 00 55 31 c0 48 89 e5 41 57 41 56 41 55 49 89 fd 41 54 41 89 f4 53 48 83 ec 10 8b 77 68 41 89 f6 45 29 e6 0f 84 89 00 00 00 <48> 8b 42 08 48 89 d3 48 85 c0 75 14 0f 1f 80 00 00 00 00 48 83 <br>[ 7465.163046] RIP [<ffffffff8161d84d>] skb_copy_and_csum_datagram_iovec+0x2d/0x110<br>[ 7465.171731] RSP <ffff88044a78bbc0><br>[ 7465.180231] CR2: 0000000000000008<br>[ 7465.205987] ---[ end trace 1edb9cef822eb074 ]---<br><br><br></div>Does anybody know of a bug related to this issue?<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">2015-10-26 13:35 GMT-02:00 Gilberto Nunes <span dir="ltr"><<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">But I think that with this limitation, transferring 30 GB of mail over the network will take forever, don't you agree?<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">2015-10-26 13:25 GMT-02:00 Gilberto Nunes <span dir="ltr"><<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Regarding the bandwidth limitation, you mean like this:<br><br><br>virtio0:
stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">2015-10-26 11:25 GMT-02:00 Dmitry Petuhov <span dir="ltr"><<a href="mailto:mityapetuhov@gmail.com" target="_blank">mityapetuhov@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>There's an issue with NFS: if you try to
send more over it than the network can handle (100-120 MB/s for
1-gigabit), it imposes several-second pauses, which get
interpreted as hardware errors. These bursts only need to be a few
seconds long to trigger the issue.<br>
<br>
You can try limiting the bandwidth in the virtual HDD config to something
like 60-70 MB/s. That should be enough for a 1-gigabit network.<br>
<br>
But my opinion is that it's better to switch to iSCSI.<br>
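For example, in the VM's config file under /etc/pve/qemu-server/ the cap is just extra options on the disk line. A sketch only: VMID 120 and the storage/disk names are illustrative, and the value 70 is the MB/s limit suggested above.

```
virtio0: stg:120/vm-120-disk-1.qcow2,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
```

The mbps_rd/mbps_wr pair caps sustained throughput per direction, while the _max variants cap bursts.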
<br>
26.10.2015 16:13, Gilberto Nunes wrote:<br>
</div><div><div>
<blockquote type="cite">
<div dir="ltr">
<div>BTW, all the HDs are SAS, with gigabit Ethernet between the servers...<br>
</div>
I already tried a gigabit Ethernet switch in order to isolate the
Proxmox and storage traffic from external (LAN) traffic... It did not work at
all!<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">2015-10-26 11:12 GMT-02:00 Gilberto
Nunes <span dir="ltr"><<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>HDD config is standard... I did not
make any changes...<br>
</div>
I wonder why the other VMs I have, as I said
before, with Ubuntu, CentOS, Windows 7 and
2012, work fine! <br>
</div>
Not all of them have huge files or a lot of
connections, but they stay solid!<br>
</div>
But when one requires a lot of access and deals with
big files, here comes the devil! The VM just
gets I/O errors and dies within 2 or 3 days...<br>
</div>
On both physical servers there is no high load. I
checked with htop and top as well.<br>
</div>
iostat doesn't show anything wrong.<br>
</div>
Inside the VM, there are a lot of I/O errors from time to time...<br>
</div>
I/O does go high indeed, but that is expected because I am
using imapsync to sync mail from the old server to the Zimbra
Mail Server...<br>
</div>
But I do not expect the VM to die with I/O errors!<br>
</div>
It's so frustrating... :(<br>
</div>
<div>
<div>
<div class="gmail_extra"><br>
<div class="gmail_quote">2015-10-26 11:04 GMT-02:00
Dmitry Petuhov <span dir="ltr"><<a href="mailto:mityapetuhov@gmail.com" target="_blank">mityapetuhov@gmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div>What's the virtual HDD config? Which
controller, which cache mode?<br>
<br>
I suppose it's a bad idea to run KVM machines
over NFS: it may produce big enough delays
under high load, which may look like timeouts
on the client side.<br>
If you want network storage, iSCSI can be a
better choice.<br>
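If NFS stays in the picture, the mount options matter too. A sketch of what the storage definition could look like in /etc/pve/storage.cfg, forcing TCP and hard mounts with a longer timeout; the storage name, server address and export path here are placeholders, not taken from this thread:

```
nfs: nfs-data
        server 192.168.0.2
        export /data
        path /mnt/pve/nfs-data
        content images
        options vers=3,proto=tcp,hard,timeo=600
```

Hard TCP mounts make the client retry indefinitely instead of returning I/O errors the way soft UDP mounts can.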
<br>
26.10.2015 12:48, Gilberto Nunes wrote:<br>
</div>
<div>
<div>
<blockquote type="cite">
<div dir="ltr">
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>Admin, or
whatever your name
is... I have more than
10 years of experience
with Unix, Linux and
Windows.<br>
</div>
I know what I
did.<br>
</div>
To the others:
yes! It should be
straightforward
anyway...<br>
</div>
The Proxmox server is
a PowerEdge R430
with 32 GB of
memory.<br>
</div>
The storage is the same
model of server.<br>
</div>
Both have SAS hard
drives.<br>
</div>
Between these servers
there is a direct cable
providing a
gigabit Ethernet link.<br>
</div>
On the second server, I have
Ubuntu 15.04 installed,
with DRBD and OCFS2 mounted
on the /data filesystem.<br>
</div>
On the same server, I have
NFS installed, serving that
filesystem to the Proxmox machine.<br>
</div>
On the Proxmox machine I have
nothing except a VM with
Ubuntu 14.04 installed, where
Zimbra Mail Server was
deployed...<br>
</div>
Inside both physical servers
everything is OK... no disk
errors, and everything is running
smoothly.<br>
</div>
But INSIDE THE VM HOSTED ON
PROXMOX, there are many I/O errors!<br>
</div>
At some point this corrupts the
filesystem, which makes Zimbra crash!<br>
<br>
</div>
BTW, I will move Zimbra back to a
physical machine right now and deploy
a lab environment for testing purposes.<br>
<br>
</div>
Best regards<br>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">2015-10-25
22:38 GMT-02:00 <a href="mailto:admin@extremeshok.com" target="_blank">admin@extremeshok.com</a>
<span dir="ltr"><<a href="mailto:admin@extremeshok.com" target="_blank">admin@extremeshok.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Your NFS
settings.<br>
<br>
Hire people with the knowledge,
or upskill your own.<br>
<br>
Sent from my iPhone<br>
<div>
<div><br>
> On 26 Oct 2015, at 1:33 AM,
Gilberto Nunes <<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>>
wrote:<br>
><br>
> Well, friends...<br>
><br>
> I really tried hard to work
with PVE, but it is a pain in the
ass...<br>
> Nothing seems to work...<br>
> I deployed Ubuntu with NFS
storage connected through a direct
cable (1 Gb) and, despite following
all the docs available in the wiki
and on the internet, one single VM
continued to crash over and over
again...<br>
><br>
> So I realized that it was time
to say goodbye to Proxmox...<br>
><br>
> Live long and prosper...<br>
><br>
><br>
><br>
><br>
> --<br>
><br>
> Gilberto Ferreira<br>
> <a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55 (47)
9676-7530</a><br>
> Skype: gilberto.nunes36<br>
><br>
</div>
</div>
>
_______________________________________________<br>
> pve-user mailing list<br>
> <a href="mailto:pve-user@pve.proxmox.com" target="_blank">pve-user@pve.proxmox.com</a><br>
> <a href="http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user" rel="noreferrer" target="_blank">http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user</a><br>
_______________________________________________<br>
pve-user mailing list<br>
<a href="mailto:pve-user@pve.proxmox.com" target="_blank">pve-user@pve.proxmox.com</a><br>
<a href="http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user" rel="noreferrer" target="_blank">http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user</a><br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><br>
</div>
<div>Gilberto
Ferreira<br>
<a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55
(47) 9676-7530</a><br>
Skype:
gilberto.nunes36<br>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</blockquote>
<br>
</div>
</div>
</div>
<br>
<br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><br>
</div>
<div>Gilberto Ferreira<br>
<a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55
(47) 9676-7530</a><br>
Skype: gilberto.nunes36<br>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div><br>
</div>
<div>Gilberto Ferreira<br>
<a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55 (47) 9676-7530</a><br>
Skype: gilberto.nunes36<br>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</blockquote>
<br>
</div></div></div>
<br></blockquote></div><br><br clear="all"><br>-- <br><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><br></div><div>Gilberto Ferreira<br><a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55 (47) 9676-7530</a><br>Skype: gilberto.nunes36<br></div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
</div>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><br></div><div>Gilberto Ferreira<br><a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55 (47) 9676-7530</a><br>Skype: gilberto.nunes36<br></div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
</div>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><br></div><div>Gilberto Ferreira<br><a href="tel:%2B55%20%2847%29%209676-7530" value="+554796767530" target="_blank">+55 (47) 9676-7530</a><br>Skype: gilberto.nunes36<br></div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
</div>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><br></div><div>Gilberto Ferreira<br>+55 (47) 9676-7530<br>Skype: gilberto.nunes36<br></div><br></div></div></div></div></div></div></div></div></div></div></div></div></div></div>
</div>