[PVE-User] Problems with backup process and NFS

Uwe Sauter uwe.sauter.de at gmail.com
Tue May 23 10:05:01 CEST 2017


Hi Fabian,

>> I was following https://pve.proxmox.com/wiki/Storage:_NFS , quote: "To avoid DNS lookup delays, it is usually preferable to use an
>> IP address instead of a DNS name". But yes, the DNS in our environment is configured to allow reverse lookups.
> 
> which - AFAIK - is still true, especially since failing DNS means
> failing NFS storage if you put the host name there. I think for NFSv4
> the situation is slightly different, as reverse lookups are part of the
> authentication process, but I haven't played around with that yet.

My goal was to use NFSv4, but because of the portmapper problem and the lack of a way to specify mount options in the WebUI, this thread is
actually based on NFSv3 (as you can see in the configuration and /proc/mounts).
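
For reference, since NFSv4 talks to port 2049 directly and needs neither rpcbind nor mountd, a manual test mount from a node shell is a quick
way to check whether the server side would work at all; the server name below is only a placeholder and /mnt/test is just a scratch directory:

	mkdir -p /mnt/test
	mount -t nfs -o vers=4 <nfs-server>:/backup/proxmox-infra /mnt/test
	grep /mnt/test /proc/mounts    # the options field shows the negotiated vers=
	umount /mnt/test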

> I cannot reproduce the behaviour you report with an NFS server with
> working reverse lookup (proto and mountproto set to tcp, so the
> resulting options string looks identical to yours modulo the addresses).
> /proc/mounts contains the IP address as source if I put the IP address
> into storage.cfg, and the hostname if I put the hostname in storage.cfg
> (both on 4.4 and 5.0 Beta).
> 
> is there anything else in your setup/environment that might cause this
> behaviour? what OS is the NFS server on? any entries in /etc/hosts
> relating to the NFS server?

* CentOS 7 with manually tweaked NFS options (I can share if needed)
* /etc/hosts only has entries for the PVE cluster hosts

I'd say that the DNS configuration in our network is state-of-the-art (we have capable people looking after our network services).
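In case it helps with reproducing: this is roughly how I check the reverse lookups from the PVE nodes (192.0.2.10 is only a placeholder for
the real server IP):

	getent hosts 192.0.2.10           # should return the FQDN of the NFS server
	dig -x 192.0.2.10 +short          # the PTR record as the resolver sees it
	getent hosts <nfs-server-fqdn>    # forward lookup for comparison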



> 
>>
>>> can you test using the hostname in your storage.cfg instead of the IP?
>>
>> I removed the former definition and umounted the NFS share on all nodes. BTW, why is a storage not umounted when it is deleted
>> from the WebUI?
> 
> because storage deactivation in PVE happens mostly on a volume level,
> and only when needed. deactivating something that is (potentially) still
> needed is more dangerous than leaving something activated that is not ;)

Not true if another share should be mounted in the same place but can't be, because something is still mounted there. I did not
look into how PVE manages storage, but I had the case that when I removed a storage definition and replaced it with an almost
identical one (hostname instead of IP), the new one wouldn't mount because a) something was still mounted at that path and b) technically
it was the same share.
But that's not the topic of this thread :D
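
What I ended up doing as a workaround, in case someone hits the same thing (commands from a node shell; "aurel" is the storage ID from the
config below, and option names may differ slightly between PVE versions):

	pvesm set aurel --disable 1          # or delete the entry from /etc/pve/storage.cfg
	umount /mnt/pve/aurel                # on every node that still has the old share mounted
	grep /mnt/pve/aurel /proc/mounts     # confirm nothing is left mounted
	# then re-add/re-enable the storage with the new server value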


> 
>> Now storage definition looks like:
>>
>> nfs: aurel
>> 	export /backup/proxmox-infra
>> 	path /mnt/pve/aurel
>> 	server aurel.XXXXX.de
>> 	content backup
>> 	maxfiles 30
>> 	options vers=3
>>
>> With this definition, the backup succeeded (and I got mails back from each host).
> 
> I suspected as much.
> 
>> So it seems that the recommendation from the wiki prevents PVE's mechanism from working properly (when being used in an
>> environment where reverse name lookups are correctly configured).
> 
> ... on your machine in your specific environment. Your report is the
> first showing this behaviour that I know of, so until we get more
> information I am inclined to not blame our instructions here :P running
> with IP addresses instead of host names with NFSv3 has been shown to be
> more robust (as in, we've had multiple cases where people experienced
> NFS storage outages because of DNS problems).

Fair point.

Truth be told, this issue might also have been caused by me playing around on both ends, PVE and the NFS server.
I now have two shares defined (using IP addresses) and rebooted all cluster nodes. The shares get mounted and backups run without
problems.
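
For completeness, this is roughly how I now verify that a storage is usable on each node after such changes (storage ID as above, VMID 100
is just an example):

	pvesm status                        # the storage should show up as active
	grep /mnt/pve/aurel /proc/mounts    # the source column should match what is in storage.cfg
	vzdump 100 --storage aurel          # manual test backup of a single guest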

So I would consider this matter closed…

Thanks for your help,

	Uwe






