[PVE-User] I try hard but...

Gilberto Nunes gilberto.nunes32 at gmail.com
Mon Oct 26 17:02:32 CET 2015


Answering my own question: yes! There is a bug related to NFS in kernel 3.13,
which is the default on Ubuntu 14.04...
I will try kernel 3.19...
Sorry for sending this to the list...
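For reference, a 3.19 kernel can be pulled onto Ubuntu 14.04 through the hardware-enablement (HWE) stack; a minimal sketch, assuming the standard `linux-generic-lts-<release>` package naming:

```shell
# Install the 3.19 ("vivid") HWE kernel on Ubuntu 14.04,
# then reboot into it. Package name assumed from the usual
# linux-generic-lts-<release> convention.
sudo apt-get update
sudo apt-get install -y linux-generic-lts-vivid
sudo reboot
```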

2015-10-26 13:58 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:

> UPDATE
>
> I tried using proto=udp and soft, and got a kernel oops on the NFS server side:
>
> BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> [ 7465.127924] IP: [<ffffffff8161d84d>]
> skb_copy_and_csum_datagram_iovec+0x2d/0x110
> [ 7465.128154] PGD 0
> [ 7465.128224] Oops: 0000 [#1] SMP
> [ 7465.128336] Modules linked in: ocfs2 quota_tree ocfs2_dlmfs
> ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs drbd
> lru_cache nfsd auth_rpcgss nfs_acl nfs lockd sunrpc fscache ipmi_devintf
> gpio_ich dcdbas x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel
> kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel
> aes_x86_64 lrw gf128mul glue_helper ablk_helper cryptd lpc_ich mei_me mei
> shpchp wmi ipmi_si acpi_power_meter lp mac_hid parport xfs libcrc32c
> hid_generic usbhid hid igb i2c_algo_bit tg3 dca ahci ptp megaraid_sas
> libahci pps_core
> [ 7465.130171] CPU: 8 PID: 4602 Comm: nfsd Not tainted 3.13.0-66-generic
> #108-Ubuntu
> [ 7465.130407] Hardware name: Dell Inc. PowerEdge R430/03XKDV, BIOS 1.2.6
> 06/08/2015
> [ 7465.130648] task: ffff88046410b000 ti: ffff88044a78a000 task.ti:
> ffff88044a78a000
> [ 7465.130889] RIP: 0010:[<ffffffff8161d84d>]  [<ffffffff8161d84d>]
> skb_copy_and_csum_datagram_iovec+0x2d/0x110
> [ 7465.131213] RSP: 0018:ffff88044a78bbc0  EFLAGS: 00010206
> [ 7465.131385] RAX: 0000000000000000 RBX: ffff8804607c2300 RCX:
> 00000000000000ec
> [ 7465.131613] RDX: 0000000000000000 RSI: 0000000000000c7c RDI:
> ffff880464c0e600
> [ 7465.131844] RBP: ffff88044a78bbf8 R08: 0000000000000000 R09:
> 00000000aea75158
> [ 7465.132074] R10: 00000000000000c0 R11: 0000000000000003 R12:
> 0000000000000008
> [ 7465.132304] R13: ffff880464c0e600 R14: 0000000000000c74 R15:
> ffff880464c0e600
> [ 7465.132535] FS:  0000000000000000(0000) GS:ffff88046e500000(0000)
> knlGS:0000000000000000
> [ 7465.132798] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 7465.132981] CR2: 0000000000000008 CR3: 0000000001c0e000 CR4:
> 00000000001407e0
> [ 7465.133211] Stack:
> [ 7465.133275]  ffffffff81616f66 ffffffff81616fb0 ffff8804607c2300
> ffff88044a78bdf8
> [ 7465.133529]  0000000000000000 0000000000000c74 ffff880464c0e600
> ffff88044a78bc60
> [ 7465.133780]  ffffffff8168b2ec ffff88044a430028 ffff8804607c2370
> 0000000200000000
> [ 7465.134032] Call Trace:
> [ 7465.134109]  [<ffffffff81616f66>] ? skb_checksum+0x26/0x30
> [ 7465.134284]  [<ffffffff81616fb0>] ? skb_push+0x40/0x40
> [ 7465.134451]  [<ffffffff8168b2ec>] udp_recvmsg+0x1dc/0x380
> [ 7465.134624]  [<ffffffff8169650c>] inet_recvmsg+0x6c/0x80
> [ 7465.134790]  [<ffffffff8160f0aa>] sock_recvmsg+0x9a/0xd0
> [ 7465.134956]  [<ffffffff8107576a>] ? del_timer_sync+0x4a/0x60
> [ 7465.135131]  [<ffffffff8172762d>] ? schedule_timeout+0x17d/0x2d0
> [ 7465.135318]  [<ffffffff8160f11a>] kernel_recvmsg+0x3a/0x50
> [ 7465.135497]  [<ffffffffa02bfd29>] svc_udp_recvfrom+0x89/0x440 [sunrpc]
> [ 7465.135699]  [<ffffffff8172c01b>] ? _raw_spin_unlock_bh+0x1b/0x40
> [ 7465.135902]  [<ffffffffa02cccc8>] ? svc_get_next_xprt+0xd8/0x310
> [sunrpc]
> [ 7465.136120]  [<ffffffffa02cd450>] svc_recv+0x4a0/0x5c0 [sunrpc]
> [ 7465.136307]  [<ffffffffa041470d>] nfsd+0xad/0x130 [nfsd]
> [ 7465.136476]  [<ffffffffa0414660>] ? nfsd_destroy+0x80/0x80 [nfsd]
> [ 7465.136673]  [<ffffffff8108b7d2>] kthread+0xd2/0xf0
> [ 7465.136829]  [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0
> [ 7465.137039]  [<ffffffff81734ba8>] ret_from_fork+0x58/0x90
> [ 7465.137212]  [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0
> [ 7465.145470] Code: 44 00 00 55 31 c0 48 89 e5 41 57 41 56 41 55 49 89 fd
> 41 54 41 89 f4 53 48 83 ec 10 8b 77 68 41 89 f6 45 29 e6 0f 84 89 00 00 00
> <48> 8b 42 08 48 89 d3 48 85 c0 75 14 0f 1f 80 00 00 00 00 48 83
> [ 7465.163046] RIP  [<ffffffff8161d84d>]
> skb_copy_and_csum_datagram_iovec+0x2d/0x110
> [ 7465.171731]  RSP <ffff88044a78bbc0>
> [ 7465.180231] CR2: 0000000000000008
> [ 7465.205987] ---[ end trace 1edb9cef822eb074 ]---
>
>
> Does anybody know of a bug related to this issue?
>
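The call trace above dies in the UDP receive path (udp_recvmsg → skb_copy_and_csum_datagram_iovec), so one workaround is simply not to use proto=udp: mount the export over TCP with hard mounts. A sketch, with the server name and paths as placeholders:

```shell
# Remount the NFS export over TCP to avoid the UDP receive path entirely.
# "storage:/data" and the mount point are placeholders for the real export.
umount /mnt/pve/stg
mount -t nfs -o proto=tcp,hard,vers=3 storage:/data /mnt/pve/stg
```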
> 2015-10-26 13:35 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>
>> But I think that, with this limitation, transferring 30 GB of mail over the
>> network will take forever, don't you agree?
>>
>> 2015-10-26 13:25 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>>
>>> Regarding bandwidth limitation, you mean like this:
>>>
>>>
>>> virtio0:
>>> stg:120/vm-120-disk-1.qcow2,iops_rd=100,iops_wr=100,iops_rd_max=100,iops_wr_max=100,mbps_rd=70,mbps_wr=70,mbps_rd_max=70,mbps_wr_max=70,size=2000G
>>>
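The same caps can also be applied from the Proxmox host's CLI instead of editing the config file by hand; a sketch using `qm set`, with the VM ID and volume name taken from the config line above:

```shell
# Apply 70 MB/s read/write caps to the existing virtio0 disk of VM 120.
qm set 120 --virtio0 "stg:120/vm-120-disk-1.qcow2,mbps_rd=70,mbps_wr=70,size=2000G"
```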
>>> 2015-10-26 11:25 GMT-02:00 Dmitry Petuhov <mityapetuhov at gmail.com>:
>>>
>>>> There's an issue with NFS: if you try to send more over it than the network
>>>> can handle (100-120 MB/s for 1-gigabit), it imposes several-second
>>>> pauses, which get interpreted as hardware errors. Bursts only a few
>>>> seconds long can be enough to trigger the issue.
>>>>
>>>> You can try limiting the bandwidth in the virtual HDD config to something
>>>> like 60-70 MB/s. That should be enough for a 1-gigabit network.
>>>>
>>>> But in my opinion it's better to switch to iSCSI.
>>>>
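If switching to iSCSI, the target can be attached as Proxmox storage from the CLI; a sketch using `pvesm`, with the portal address and IQN as placeholders:

```shell
# Add an iSCSI target as a storage backend.
# Storage name, portal IP, and IQN below are placeholders.
pvesm add iscsi stg-iscsi --portal 192.168.100.2 --target iqn.2015-10.local.storage:data
```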
>>>> On 26.10.2015 16:13, Gilberto Nunes wrote:
>>>>
>>>> BTW, all the disks are SAS, with gigabit Ethernet between the servers...
>>>> I already tried a dedicated gigabit switch in order to isolate the Proxmox
>>>> and storage traffic from the external (LAN) traffic... It doesn't work at all!
>>>>
>>>> 2015-10-26 11:12 GMT-02:00 Gilberto Nunes <gilberto.nunes32 at gmail.com>:
>>>>
>>>>> The HDD config is standard... I did not make any changes...
>>>>> I wonder why the other VMs I have, as I said before, with Ubuntu, CentOS,
>>>>> Windows 7 and 2012, all work fine!
>>>>> Not all of them have huge files or a lot of connections, but they stay
>>>>> solid!
>>>>> But when one requires a lot of access and deals with big files, here
>>>>> comes the devil! The VM just gets I/O errors and dies within 2 or 3 days...
>>>>> On both physical servers there is no high load. I checked with htop and
>>>>> top as well.
>>>>> iostat doesn't show anything wrong.
>>>>> Inside the VM, there are a lot of I/O errors from time to time...
>>>>> I/O does go high indeed, but that is expected, because I am using imapsync
>>>>> to sync mail from the old server to the Zimbra Mail Server...
>>>>> But I do not expect the VM to die with I/O errors!
>>>>> It's so frustrating... :(
>>>>>
>>>>> 2015-10-26 11:04 GMT-02:00 Dmitry Petuhov <mityapetuhov at gmail.com>:
>>>>>
>>>>>> What's the virtual HDD config? Which controller, which cache mode?
>>>>>>
>>>>>> I suppose it's a bad idea to run KVM machines over NFS: it can produce
>>>>>> delays big enough under high load to look like timeouts on the client
>>>>>> side.
>>>>>> If you want network storage, iSCSI may be a better choice.
>>>>>>
>>>>>> On 26.10.2015 12:48, Gilberto Nunes wrote:
>>>>>>
>>>>>> Admin, or whatever your name is... I have more than 10 years dealing
>>>>>> with Unix, Linux and Windows.
>>>>>> I know what I did.
>>>>>> To the others: yes! It should be straightforward either way...
>>>>>> The Proxmox server is a PowerEdge R430 with 32 GB of memory.
>>>>>> The storage is the same model of server.
>>>>>> Both have SAS hard drives.
>>>>>> Between these servers there is a direct cable providing a gigabit
>>>>>> Ethernet link.
>>>>>> On the second server, I have Ubuntu 15.04 installed, with DRBD and OCFS2
>>>>>> mounted on the /data filesystem.
>>>>>> On the same server, I have NFS installed, serving that filesystem to the
>>>>>> Proxmox machine.
>>>>>> On the Proxmox machine I have nothing except a VM with Ubuntu 14.04
>>>>>> installed, where Zimbra Mail Server was deployed...
>>>>>> On both physical servers everything is OK... NO disk errors, and
>>>>>> everything is running smoothly.
>>>>>> But INSIDE THE VM HOSTED ON PROXMOX, there are many I/O errors!...
>>>>>> These corrupt the filesystem at some point, which makes Zimbra crash!
>>>>>>
>>>>>> BTW, I will return Zimbra to a physical machine right now and deploy a
>>>>>> lab environment for testing purposes.
>>>>>>
>>>>>> Best regards
>>>>>>
>>>>>>
>>>>>> 2015-10-25 22:38 GMT-02:00 admin at extremeshok.com <admin at extremeshok.com>:
>>>>>>
>>>>>>> Your nfs settings.
>>>>>>>
>>>>>>> Hire people with the knowledge, or upskill your own.
>>>>>>>
>>>>>>> Sent from my iPhone
>>>>>>>
>>>>>>> > On 26 Oct 2015, at 1:33 AM, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
>>>>>>> >
>>>>>>> > Well friends...
>>>>>>> >
>>>>>>> > I really tried hard to work with PVE, but it is a pain in the ass...
>>>>>>> > Nothing seems to work...
>>>>>>> > I deployed Ubuntu with NFS storage connected through a direct cable
>>>>>>> (1 Gb), and despite following all the docs available in the wiki and on
>>>>>>> the internet, one single VM continues to crash over and over again...
>>>>>>> >
>>>>>>> > So I realize that it is time to say goodbye to Proxmox...
>>>>>>> >
>>>>>>> > Live long and prosper...
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> >
>>>>>>> > --
>>>>>>> >
>>>>>>> > Gilberto Ferreira
>>>>>>> > +55 (47) 9676-7530
>>>>>>> > Skype: gilberto.nunes36
>>>>>>> >
>>>>>>> > _______________________________________________
>>>>>>> > pve-user mailing list
>>>>>>> > pve-user at pve.proxmox.com
>>>>>>> > http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36

