From f.gruenbichler at proxmox.com Wed Apr 1 08:38:21 2020
From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=)
Date: Wed, 01 Apr 2020 08:38:21 +0200
Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move
In-Reply-To: <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su>
References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> <1585655214.ioujt1t68q.astroid@nora.none> <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su>
Message-ID: <1585723006.gajar53s82.astroid@nora.none>

On March 31, 2020 5:07 pm, Mikhail wrote:
> On 3/31/20 2:53 PM, Fabian Grünbichler wrote:
>> you should be able to manually clean up the mess using the QMP/monitor interface:
>>
>> `man qemu-qmp-ref` gives a detailed tour, you probably want `query-block-jobs` and `query-block`, and then, depending on the output, `block-job-cancel` or `block-job-complete`.
>>
>> the HMP interface accessible via 'qm monitor <vmid>' has slightly different commands: `info block -v`, `info block-jobs` and `block_job_cancel`/`block_job_complete` ('_' instead of '-').
>
> Thanks for your prompt response.
> I've tried the following under the VM's "Monitor" section within the Proxmox web GUI:
>
> # info block-jobs
> Type mirror, device drive-scsi0: Completed 6571425792 of 10725883904 bytes, speed limit 0 bytes/s
>
> and after that I tried to cancel this block job using:
>
> # block_job_cancel -f drive-scsi0
>
> However, the block job is still there even after 3 attempts to cancel it:
>
> # info block-jobs
> Type mirror, device drive-scsi0: Completed 6571425792 of 10725883904 bytes, speed limit 0 bytes/s
>
> The same happens when I connect to it via the root console using "qm monitor".
>
> I guess this is now completely stuck and the only way out would be to power off/on the VM?

well, you could investigate more with the QMP interface (it gives a lot more information). but yes, a shutdown/boot cycle should get rid of the block-job.

>> feel free to post the output of the query/info commands before deciding how to proceed. the complete task log of the failed 'move disk' operation would also be interesting, if it is still available.
>
> I just asked my colleague who was cancelling this Disk move operation.
> He said he had to cancel it because it was stuck at 61.27%. The Disk move task log is below, I truncated repeating lines:
>
> deprecated setting 'migration_unsecure' and new 'migration: type' set at same time! Ignore 'migration_unsecure'
> create full clone of drive scsi0 (nvme-local-vm:123/vm-123-disk-0.qcow2)
> drive mirror is starting for drive-scsi0
> drive-scsi0: transferred: 24117248 bytes remaining: 10713300992 bytes total: 10737418240 bytes progression: 0.22 % busy: 1 ready: 0
> drive-scsi0: transferred: 2452619264 bytes remaining: 6635388928 bytes total: 9088008192 bytes progression: 26.99 % busy: 1 ready: 0
> drive-scsi0: transferred: 3203399680 bytes remaining: 6643777536 bytes total: 9847177216 bytes progression: 32.53 % busy: 1 ready: 0
> drive-scsi0: transferred: 4001366016 bytes remaining: 6632243200 bytes total: 10633609216 bytes progression: 37.63 % busy: 1 ready: 0
> drive-scsi0: transferred: 4881121280 bytes remaining: 5856296960 bytes total: 10737418240 bytes progression: 45.46 % busy: 1 ready: 0
> drive-scsi0: transferred: 6554648576 bytes remaining: 4171235328 bytes total: 10725883904 bytes progression: 61.11 % busy: 1 ready: 0
> drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0
> [ same line repeats like 250+ times ]
> drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0
> drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0
> drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0
> drive-scsi0: Cancelling block job

was the target some sort of network storage that started hanging? this looks rather unusual..

From m at plus-plus.su Wed Apr 1 09:45:20 2020
From: m at plus-plus.su (Mikhail)
Date: Wed, 1 Apr 2020 10:45:20 +0300
Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move
In-Reply-To: <1585723006.gajar53s82.astroid@nora.none>
References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> <1585655214.ioujt1t68q.astroid@nora.none> <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su> <1585723006.gajar53s82.astroid@nora.none>
Message-ID: <58a9bf90-bcf1-80d1-9312-5af3573a20cf@plus-plus.su>

Hello Fabian!

On 4/1/20 9:38 AM, Fabian Grünbichler wrote:
>> drive-scsi0: transferred: 6571425792 bytes remaining: 4154458112 bytes total: 10725883904 bytes progression: 61.27 % busy: 1 ready: 0
>> drive-scsi0: Cancelling block job
> was the target some sort of network storage that started hanging? this looks rather unusual..

We were able to reproduce this issue right now on the same cluster. The Disk move operation was a move from a local "directory" type of storage (VM disks reside as .qcow2 files) to an attached CEPH storage pool. We did 2 attempts on the same VM, with the same disk - the first attempt failed (disk transfer got stuck at 10.25 % progress):

deprecated setting 'migration_unsecure' and new 'migration: type' set at same time! Ignore 'migration_unsecure'
create full clone of drive scsi1 (nvme-local-vm:82082108/vm-82082108-disk-1.qcow2)
drive mirror is starting for drive-scsi1
drive-scsi1: transferred: 737148928 bytes remaining: 20737687552 bytes total: 21474836480 bytes progression: 3.43 % busy: 1 ready: 0
drive-scsi1: transferred: 1512046592 bytes remaining: 19962789888 bytes total: 21474836480 bytes progression: 7.04 % busy: 1 ready: 0
drive-scsi1: transferred: 2198994944 bytes remaining: 19260243968 bytes total: 21459238912 bytes progression: 10.25 % busy: 1 ready: 0
[[[[ here goes 230+ lines of the same 10.25 % progress status ]]]]
drive-scsi1: Cancelling block job

After cancelling this job I looked into the VM's QM monitor to see if the block job is still there, and of course it is:

# info block-jobs
Type mirror, device drive-scsi1: Completed 2198994944 of 21459238912 bytes, speed limit 0 bytes/s

Trying to cancel this block job does nothing, so our next step was to shut down the VM from the Proxmox GUI - this also fails, with the following in the task log:

TASK ERROR: VM quit/powerdown failed - got timeout

After that, we tried the following from the SSH root console:

# qm stop 82082108
VM quit/powerdown failed - terminating now with SIGTERM
VM still running - terminating now with SIGKILL

and after that the QM monitor stopped responding from the Proxmox GUI, as expected:

# info block-jobs
ERROR: VM 82082108 qmp command 'human-monitor-command' failed - unable to connect to VM 82082108 qmp socket - timeout after 31 retries

So at this point the VM is completely stopped, disk not moved. The VM was started again and we did the same steps (Disk move) exactly as above. We got identical results - the Disk move operation got stuck:

drive-scsi1: transferred: 2187460608 bytes remaining: 19271778304 bytes total: 21459238912 bytes progression: 10.19 % busy: 1 ready: 0

After cancelling the Disk move operation all of the above symptoms persist - the VM won't shut down from the GUI, and the block job is visible from the QM monitor and won't cancel.

Our next test case was to do a Disk move offline - when the VM is shut down. And guess what - this worked without a glitch. Same storage, same disk, but the VM is in stopped state.

But even after that, when the disk is on CEPH and the VM is started and running, we attempted to do a Disk move from CEPH back to local storage ONLINE - this also worked like a charm without any blocks or issues.

The VM disk size we were moving back and forth isn't very big - only 20GB. The problem is that this issue does not appear to be happening to every virtual machine disk - we moved several disks before we hit this issue again. At the time of writing this message my colleague is doing some other Disk move on the cluster and he said he hit the same problem with another VM's disk - 40GB in size - the task is stuck at the very beginning:

drive-scsi1: transferred: 427819008 bytes remaining: 70243188736 bytes total: 70671007744 bytes progression: 0.61 % busy: 1 ready: 0

Let me know if I can provide some further information or do some debugging - as we can reproduce this problem 100% now.

regards, Mikhail.
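For reference, a stuck job like the one above can also be inspected and force-cancelled over the VM's QMP socket directly, which is the interface Fabian refers to earlier in the thread. A minimal sketch, assuming the default Proxmox socket path, the VMID 82082108 and the drive-scsi1 device name from the logs above, and that socat is installed; the JSON lines are typed into the interactive session:

# attach to the VM's QMP socket
socat - /var/run/qemu-server/82082108.qmp
# QMP requires a capabilities handshake before accepting commands
{"execute": "qmp_capabilities"}
# show active block jobs and their state
{"execute": "query-block-jobs"}
# force-cancel the stuck mirror job
{"execute": "block-job-cancel", "arguments": {"device": "drive-scsi1", "force": true}}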
From m at plus-plus.su Wed Apr 1 09:49:20 2020
From: m at plus-plus.su (Mikhail)
Date: Wed, 1 Apr 2020 10:49:20 +0300
Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move
In-Reply-To: <58a9bf90-bcf1-80d1-9312-5af3573a20cf@plus-plus.su>
References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> <1585655214.ioujt1t68q.astroid@nora.none> <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su> <1585723006.gajar53s82.astroid@nora.none> <58a9bf90-bcf1-80d1-9312-5af3573a20cf@plus-plus.su>
Message-ID:

On 4/1/20 10:45 AM, Mikhail wrote:
> At the time of writing this message my colleague is doing some other Disk move on the cluster and he said he hit the same problem with another VM's disk - 40GB in size - the task is stuck at the very beginning:
> drive-scsi1: transferred: 427819008 bytes remaining: 70243188736 bytes total: 70671007744 bytes progression: 0.61 % busy: 1 ready: 0

I just want to add that the issue does not appear to be related to VM disk size - right now we have 3 stuck disk moves with different disk sizes:

10GB, 20GB and 40GB.

The most recent one is a 10GB disk move:

drive-scsi0: transferred: 2086797312 bytes remaining: 8572895232 bytes total: 10659692544 bytes progression: 19.58 % busy: 1 ready: 0

Mikhail.

From f.gruenbichler at proxmox.com Wed Apr 1 10:03:25 2020
From: f.gruenbichler at proxmox.com (Fabian =?iso-8859-1?q?Gr=FCnbichler?=)
Date: Wed, 01 Apr 2020 10:03:25 +0200
Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move
In-Reply-To:
References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> <1585655214.ioujt1t68q.astroid@nora.none> <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su> <1585723006.gajar53s82.astroid@nora.none> <58a9bf90-bcf1-80d1-9312-5af3573a20cf@plus-plus.su>
Message-ID: <1585728135.qizd3xrwft.astroid@nora.none>

On April 1, 2020 9:49 am, Mikhail wrote:
> On 4/1/20 10:45 AM, Mikhail wrote:
>> At the time of writing this message my colleague is doing some other Disk move on the cluster and he said he hit the same problem with another VM's disk - 40GB in size - the task is stuck at the very beginning:
>> drive-scsi1: transferred: 427819008 bytes remaining: 70243188736 bytes total: 70671007744 bytes progression: 0.61 % busy: 1 ready: 0
>
> I just want to add that the issue does not appear to be related to VM disk size - right now we have 3 stuck disk moves with different disk sizes:
>
> 10GB, 20GB and 40GB.
>
> The most recent one is a 10GB disk move:
>
> drive-scsi0: transferred: 2086797312 bytes remaining: 8572895232 bytes total: 10659692544 bytes progression: 19.58 % busy: 1 ready: 0

probably makes sense to move this to a bug report over at https://bugzilla.proxmox.com

please include the following information:

pveversion -v
storage config
VM config
ceph setup details

thanks!
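A quick way to collect the requested details from the node's shell; a sketch, with 123 standing in for the affected VMID (the paths are Proxmox defaults):

pveversion -v               # package versions
cat /etc/pve/storage.cfg    # storage config
qm config 123               # VM config
ceph status                 # ceph setup details
ceph osd tree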
From m at plus-plus.su Wed Apr 1 12:35:51 2020
From: m at plus-plus.su (Mikhail)
Date: Wed, 1 Apr 2020 13:35:51 +0300
Subject: [PVE-User] Problem with QEMU drive-mirror after cancelling VM disk move
In-Reply-To: <1585728135.qizd3xrwft.astroid@nora.none>
References: <79242d65-6e02-3332-1289-f4c15737bea8@plus-plus.su> <1585655214.ioujt1t68q.astroid@nora.none> <366d838d-2f4d-acc6-3440-8938e8235215@plus-plus.su> <1585723006.gajar53s82.astroid@nora.none> <58a9bf90-bcf1-80d1-9312-5af3573a20cf@plus-plus.su> <1585728135.qizd3xrwft.astroid@nora.none>
Message-ID: <089cdb0b-1378-b8d1-e5b7-1a62a6a42b97@plus-plus.su>

On 4/1/20 11:03 AM, Fabian Grünbichler wrote:
> probably makes sense to move this to a bug report over at https://bugzilla.proxmox.com
>
> please include the following information:
>
> pveversion -v
> storage config
> VM config
> ceph setup details
>
> thanks!

Bug report submitted at https://bugzilla.proxmox.com/show_bug.cgi?id=2659

Mikhail.

From rhopman at tuxis.nl Wed Apr 1 14:41:24 2020
From: rhopman at tuxis.nl (Richard Hopman)
Date: Wed, 01 Apr 2020 14:41:24 +0200
Subject: [PVE-User] pve webgui auto logoff (5m)
Message-ID:

Looking for some input on this: on a 3 node cluster running pve 6.1-7, users logged into the webgui get logged out after 5 minutes. systemd-timesyncd is running on all 3 machines, time is in sync.

/var/log/pveproxy/access.log is just reporting a 401 at the time of the auto logoff.
/var/log/syslog is not reporting any issues.

Time was probably off on one node when I installed the cluster, as 1 node had a certificate becoming valid in the future. Based on this I replaced all node certificates after creating a new /etc/pve/pve-root-ca.

Any help in this matter is greatly appreciated.

--
BR, Richard

From contact+dev at gilouweb.com Thu Apr 2 04:10:29 2020
From: contact+dev at gilouweb.com (Gilles Pietri)
Date: Thu, 2 Apr 2020 04:10:29 +0200
Subject: [PVE-User] Datacenter firewall rules vs Subnet Router Anycast Address ping
Message-ID:

Hi,

We stumbled upon an issue with IPv6, subnet router anycast addresses, and the Proxmox firewall. We fought to get to the bottom of it.

Situation: we have a router configured with a subnet router anycast address, let's say the router is A::1/64 and the anycast address is A::/64. We have a VM in a bridge, connected to that router, with address A::2.

We want A::2 to ping A::, and in return, A::1 will reply, with that source address. We get:

A::2 echo request to A::
A::1 echo reply to A::2

If we enable the Datacenter firewall, this rule:

-A PVEFW-FORWARD -m conntrack --ctstate INVALID -j DROP

will kill that packet, as the source address does not match the original destination, and that's to be expected. We can ask the router to reply using the anycast address, but if we do that, we lose the information we'd like to keep: the source IP on the router's side.

The VM or CT itself has the firewall disabled, and so has the host hosting them.

So questions!

A) Is it expected that such a rule be enabled for VM bridges, when the firewall is disabled for the VM? It is so now, and it's not exactly a happy situation: enabling the firewall on the datacenter/hosts is not exactly supposed to have an impact on the VMs, is it? My guess is it's not easy to distinguish that path from any other, but that's not clear in the doc.

B) Can we plug ourselves in somewhere to have a rule like:

-I PVEFW-FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

included BEFORE the --ctstate INVALID one?

I don't see any way to do that in the chain, but I may be missing something.
C) Or can we have a specific option for IPv6 ICMP echo reply? But that seems a bit specific.

Cheers,
Gilou

From tb at robhost.de Thu Apr 2 15:22:10 2020
From: tb at robhost.de (Tobias =?utf-8?B?QsO2aG0=?=)
Date: Thu, 2 Apr 2020 15:22:10 +0200
Subject: [PVE-User] Datacenter firewall rules vs Subnet Router Anycast Address ping
In-Reply-To:
References:
Message-ID: <20200402132210.72vm5hpthczeigd5@macbook>

On 02.04.2020 at 04:10, Gilles Pietri wrote:

Hi,

just stumbled across this rule as well, although in an IPv4 related issue.

> A) Is it expected that such a rule be enabled for VM bridges, when the firewall is disabled for the VM?

This rule is always there when the PVE firewall is enabled for the cluster.

> B) Can we plug ourselves in somewhere to have a rule like:
> -I PVEFW-FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT
> included BEFORE the --ctstate INVALID one?
>
> I don't see any way to do that in the chain, but I may be missing something.

There is an option to disable this rule altogether. You can set "nf_conntrack_allow_invalid: 1" in the host specific config files at /etc/pve/nodes/<node>/host.fw. Apparently you'd want this to be in all of them. This directive is not visible in the panel, but it is documented and works as intended on Proxmox 5 and 6:
https://pve.proxmox.com/wiki/Firewall#pve_firewall_host_specific_configuration

Happy pinging,
Tobias

From contact+dev at gilouweb.com Thu Apr 2 22:38:32 2020
From: contact+dev at gilouweb.com (Gilles Pietri)
Date: Thu, 2 Apr 2020 22:38:32 +0200
Subject: [PVE-User] Datacenter firewall rules vs Subnet Router Anycast Address ping
In-Reply-To: <20200402132210.72vm5hpthczeigd5@macbook>
References: <20200402132210.72vm5hpthczeigd5@macbook>
Message-ID: <9951ec40-09e0-6283-325e-b31cb16f5fca@gilouweb.com>

On 02/04/2020 at 15:22, Tobias Böhm wrote:
> On 02.04.2020 at 04:10, Gilles Pietri wrote:
> Hi,
>
> just stumbled across this rule as well, although in an IPv4 related issue.
>
>> A) Is it expected that such a rule be enabled for VM bridges, when the firewall is disabled for the VM?
>
> This rule is always there when the PVE firewall is enabled for the cluster.

Hi,

It is, but should it be? It seemed to me that Datacenter rules were meant to apply to hosts, not VMs? Because this means that even though I disable the firewall on the VM, there IS a firewall still filtering!

>> B) Can we plug ourselves in somewhere to have a rule like:
>> -I PVEFW-FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT
>> included BEFORE the --ctstate INVALID one?
>>
>> I don't see any way to do that in the chain, but I may be missing something.
>
> There is an option to disable this rule altogether. You can set "nf_conntrack_allow_invalid: 1" in the host specific config files at /etc/pve/nodes/<node>/host.fw. Apparently you'd want this to be in all of them. This directive is not visible in the panel, but it is documented and works as intended on Proxmox 5 and 6:
> https://pve.proxmox.com/wiki/Firewall#pve_firewall_host_specific_configuration

Agreed (and confirmed), but that is not what I meant: there is a perfectly valid reason to filter those on the hosts, while allowing this specific echo reply to happen (especially to the VM, but that's point A :P), but I can't find an easy way to hook myself in here.

> Happy pinging,
> Tobias

Thanks for the feedback!

Gilles
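For reference, the option Tobias describes goes into the host-level firewall file on each node; a minimal sketch, with pve1 as a made-up node name:

# /etc/pve/nodes/pve1/host.fw
[OPTIONS]
# accept packets that conntrack classifies as INVALID instead of dropping them
nf_conntrack_allow_invalid: 1

The firewall service picks up the change on its own, and 'pve-firewall compile' can be used to preview the resulting ruleset.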
From contact+dev at gilouweb.com Fri Apr 3 07:09:21 2020
From: contact+dev at gilouweb.com (Gilles Pietri)
Date: Fri, 3 Apr 2020 07:09:21 +0200
Subject: [PVE-User] Datacenter firewall rules vs Subnet Router Anycast Address ping
In-Reply-To: <9951ec40-09e0-6283-325e-b31cb16f5fca@gilouweb.com>
References: <20200402132210.72vm5hpthczeigd5@macbook> <9951ec40-09e0-6283-325e-b31cb16f5fca@gilouweb.com>
Message-ID: <184876f7-6e74-66b1-6d00-6e7266ff54b4@gilouweb.com>

On 02/04/2020 at 22:38, Gilles Pietri wrote:
> On 02/04/2020 at 15:22, Tobias Böhm wrote:
>> On 02.04.2020 at 04:10, Gilles Pietri wrote:

Hi again!

>>> B) Can we plug ourselves in somewhere to have a rule like:
>>> -I PVEFW-FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT
>>> included BEFORE the --ctstate INVALID one?
>>>
>>> I don't see any way to do that in the chain, but I may be missing something.
>>
>> There is an option to disable this rule altogether. You can set "nf_conntrack_allow_invalid: 1" in the host specific config files at /etc/pve/nodes/<node>/host.fw. Apparently you'd want this to be in all of them. This directive is not visible in the panel, but it is documented and works as intended on Proxmox 5 and 6:
>> https://pve.proxmox.com/wiki/Firewall#pve_firewall_host_specific_configuration
>
> Agreed (and confirmed), but that is not what I meant: there is a perfectly valid reason to filter those on the hosts, while allowing this specific echo reply to happen (especially to the VM, but that's point A :P), but I can't find an easy way to hook myself in here.

Hmm, so it appears that this option... does in fact do what we want, as you pointed out, thanks!

Then it begs the question: why does it only disable the rules in PVEFW-FORWARD then? The name implies that it would also remove the rule in PVEFW-HOST-IN (it doesn't), but I'm glad it doesn't in that case :P

Cheers,
Gilou

From krienke at uni-koblenz.de Fri Apr 3 10:00:58 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Fri, 3 Apr 2020 10:00:58 +0200
Subject: [PVE-User] Need some advice on pve writeback caching
Message-ID:

Hello,

I played around with rbd caching by activating "Writeback" mode in proxmox6. This really helps with write performance, so I would like to use it, but the documentation says that a possible danger is a power outage.

Now I have a battery backup, but you never know if the outage is just a minute or an hour, and batteries would not keep the servers up for more than, say, 15min.

So what I thought about is to enable rbd caching, but when a power outage occurs I would like to automatically disable it. Using proxmox this requires a reboot of the VM, so this is no option. But perhaps I could globally set a ceph variable, e.g. "rbd cache size" to "0", and thus effectively disable writeback caching. Should this work?

Another question about "Writeback" rbd caching is whether I should also enable it for disks with an LVM striped over 4 rbd images? Each striped LV has a filesystem on top. Is it also advisable to use rbd caching in this scenario as it is for a single disk?

Thanks a lot for your help
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312
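For context, the rbd cache knobs discussed here live in the [client] section of ceph.conf as seen by the client (the PVE node); a minimal sketch with illustrative values, not a recommendation - and as noted below, a VM only picks up changes after a cold boot:

[client]
# enable librbd writeback caching
rbd cache = true
# cache capacity in bytes (example value)
rbd cache size = 33554432
# with max dirty = 0 the cache degrades to writethrough
rbd cache max dirty = 0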
From a.antreich at proxmox.com Fri Apr 3 10:44:47 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Fri, 3 Apr 2020 10:44:47 +0200
Subject: [PVE-User] Need some advice on pve writeback caching
In-Reply-To:
References:
Message-ID: <20200403084447.GH40796@dona.proxmox.com>

Hello Rainer,

On Fri, Apr 03, 2020 at 10:00:58AM +0200, Rainer Krienke wrote:
> Hello,
>
> I played around with rbd caching by activating "Writeback" mode in proxmox6. This really helps with write performance, so I would like to use it, but the documentation says that a possible danger is a power outage.
The default cache size is 25 MiB. That data might be lost, but the image should still be consistent.

>
> Now I have a battery backup, but you never know if the outage is just a minute or an hour, and batteries would not keep the servers up for more than, say, 15min.
>
> So what I thought about is to enable rbd caching, but when a power outage occurs I would like to automatically disable it. Using proxmox this requires a reboot of the VM, so this is no option. But perhaps I could globally set a ceph variable, e.g. "rbd cache size" to "0", and thus effectively disable writeback caching. Should this work?
These settings only take effect after a VM was cold booted. Wouldn't it just be better to shut down the VMs once the UPS notices the power outage?

>
> Another question about "Writeback" rbd caching is whether I should also enable it for disks with an LVM striped over 4 rbd images? Each striped LV has a filesystem on top. Is it also advisable to use rbd caching in this scenario as it is for a single disk?
Why should it differ?

--
Cheers,
Alwin

From krienke at uni-koblenz.de Fri Apr 3 12:54:36 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Fri, 3 Apr 2020 12:54:36 +0200
Subject: [PVE-User] Need some advice on pve writeback caching
In-Reply-To: <20200403084447.GH40796@dona.proxmox.com>
References: <20200403084447.GH40796@dona.proxmox.com>
Message-ID:

Hello Alwin,

thanks very much for your answer.

Regarding LVM:
I initially thought the cache size is 25MB for each RBD device. So an LVM based on (in my case) 4 RBD devices could lose 100MB. This would have been a difference. But after reading the docs again it seems that this cache is for all rbds of one ceph client (pve host). Then you are right and it does not make a difference.

Thanks
Rainer

On 03.04.20 at 10:44, Alwin Antreich wrote:
> Hello Rainer,
>
> On Fri, Apr 03, 2020 at 10:00:58AM +0200, Rainer Krienke wrote:
>> Hello,
>>
>> I played around with rbd caching by activating "Writeback" mode in proxmox6. This really helps with write performance, so I would like to use it, but the documentation says that a possible danger is a power outage.
> The default cache size is 25 MiB. That data might be lost, but the image should still be consistent.
>
>> Now I have a battery backup, but you never know if the outage is just a minute or an hour, and batteries would not keep the servers up for more than, say, 15min.
>>
>> So what I thought about is to enable rbd caching, but when a power outage occurs I would like to automatically disable it. Using proxmox this requires a reboot of the VM, so this is no option. But perhaps I could
>> globally set a ceph variable, e.g. "rbd cache size" to "0", and thus effectively disable writeback caching. Should this work?
> These settings only take effect after a VM was cold booted. Wouldn't it just be better to shut down the VMs once the UPS notices the power outage?
>
>> Another question about "Writeback" rbd caching is whether I should also enable it for disks with an LVM striped over 4 rbd images? Each striped LV has a filesystem on top. Is it also advisable to use rbd caching in this scenario as it is for a single disk?
> Why should it differ?
>
> --
> Cheers,
> Alwin

--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312

From a.antreich at proxmox.com Fri Apr 3 14:14:15 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Fri, 3 Apr 2020 14:14:15 +0200
Subject: [PVE-User] Need some advice on pve writeback caching
In-Reply-To:
References: <20200403084447.GH40796@dona.proxmox.com>
Message-ID: <20200403121415.GI40796@dona.proxmox.com>

On Fri, Apr 03, 2020 at 12:54:36PM +0200, Rainer Krienke wrote:
> Hello Alwin,
>
> thanks very much for your answer.
>
> Regarding LVM:
> I initially thought the cache size is 25MB for each RBD device. So an LVM based on (in my case) 4 RBD devices could lose 100MB. This would have been a difference. But after reading the docs again it seems that this cache is for all rbds of one ceph client (pve host). Then you are right and it does not make a difference.
You can activate io-thread to get a thread per disk. This will gain performance, but as you mentioned, it will also raise the total rbd cache for the VM.

In any case, the VM should be gracefully shut down prior to running out of power. A sudden power loss will result in any number of issues. Also, regarding data loss: the memory usage of the VM itself is usually bigger than the rbd cache.

--
Cheers,
Alwin

From sysadmin at tashicell.com Fri Apr 3 20:19:56 2020
From: sysadmin at tashicell.com (System Administrator)
Date: Sat, 4 Apr 2020 00:19:56 +0600
Subject: Web interface issues with latest PVE
Message-ID: <5d786a70-b9e3-7c87-8960-38f70af19d34@tashicell.com>

Hello,

I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.

Finally, I cannot apply pending network changes from the web even after installing *ifupdown2* (without reboot). It complains about a subscription. Do I need an enterprise subscription for that?

I am new to Proxmox. I will be grateful if anyone can help me fix those issues. Thank you in advance.

Regards,

Sonam

From leesteken at pm.me Sat Apr 4 09:56:19 2020
From: leesteken at pm.me (leesteken at pm.me)
Date: Sat, 04 Apr 2020 07:56:19 +0000
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To:
References:
Message-ID:

------- Original Message -------
On Friday, April 3, 2020 8:19 PM, System Administrator via pve-user wrote:

> Hello,
>
> I have installed the latest Proxmox VE 6.1-2 and when modifying network
Maybe upgrading to the latest (non-subscription) version 6.1-8 using apt-get dist-upgrade helps?

> information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And

I did notice that a recent update removed the netmask line from /etc/network/interfaces, and puts the /CIDR after the IP address. I'm not sure what you mean here.

> then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.

Sorry, I have no experience with clusters.

> Finally, I cannot apply pending network changes from the web even after installing ifupdown2 (without reboot). It complains about a subscription. Do I need an enterprise subscription for that?

It works for me without an enterprise subscription. I do not get a complaint when I apply changes. I believe some issues were fixed recently. Maybe dist-upgrade to Proxmox 6.1-8? You do not need a subscription for this, but you do need to specify the right repository. See also:
https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo

> I am new to Proxmox. I will be grateful if anyone can help me fix those issues. Thank you in advance.

I'm not sure if any of my remarks will help you, but I suggest upgrading first. Maybe someone can help you if you provide the output of pveversion -v and the contents of /etc/network/interfaces?

> Regards,
> Sonam

kind regards, Arjen

From info at aminvakil.com Sat Apr 4 13:02:24 2020
From: info at aminvakil.com (Amin Vakil)
Date: Sat, 4 Apr 2020 15:32:24 +0430
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To:
References:
Message-ID: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com>

> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.

I faced this problem two weeks ago and ended up configuring it from ssh as well.
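For anyone comparing configs: the conversion leesteken describes looks like this in /etc/network/interfaces; a sketch with a documentation address, the two forms being equivalent:

# old style, netmask on its own line
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0

# new style written by recent PVE releases, CIDR suffix on the address
iface vmbr0 inet static
        address 192.0.2.10/24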
From devzero at web.de Sat Apr 4 13:48:43 2020
From: devzero at web.de (Roland)
Date: Sat, 4 Apr 2020 13:48:43 +0200
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com>
References: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com>
Message-ID: <3343b2f5-c63d-79b2-8bd9-c0359930446d@web.de>

That may explain some weirdness I observed yesterday: I added a bridge, rebooted the server, and after that the management IP address was gone. I had a look into the configuration file for networking and saw that "netmask 255.255.255.0" had been converted into <ip>/24. After removing the whitespace and rebooting, the network was OK again.

On 04.04.20 at 13:02, Amin Vakil wrote:
>> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.
> I faced this problem two weeks ago and ended up configuring it from ssh as well.
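As an aside, with ifupdown2 installed (mentioned earlier in this thread), an edited /etc/network/interfaces can be checked and applied without a reboot; a sketch, assuming the ifupdown2 tools are present:

ifquery --check -a    # compare the running state against the config file
ifreload -a           # apply pending changes without rebooting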
From aderumier at odiso.com Sat Apr 4 18:25:05 2020
From: aderumier at odiso.com (Alexandre DERUMIER)
Date: Sat, 4 Apr 2020 18:25:05 +0200 (CEST)
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To: <3343b2f5-c63d-79b2-8bd9-c0359930446d@web.de>
References: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com> <3343b2f5-c63d-79b2-8bd9-c0359930446d@web.de>
Message-ID: <486035789.6270330.1586017505230.JavaMail.zimbra@odiso.com>

@Amin

>> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.
> I faced this problem two weeks ago and ended up configuring it from ssh as well.

Have you updated to the latest version? There was a small bug when the new cidr format for addresses was introduced, and it was fixed some days later.

@Roland, can you send the output of

# pveversion -v

?

>> that netmask 255.255.255.0 had been converted into <ip>/24

Some changes have been done recently to convert the old "address <ip> + netmask <netmask>" form to "address <ip>/<cidr>". But this whitespace is strange. Do you have the initial /etc/network/interfaces from before the change, and also the file after the config change?

----- Original Message -----
From: "Roland"
To: "proxmoxve", "Amin Vakil"
Sent: Saturday, 4 April 2020 13:48:43
Subject: Re: [PVE-User] Web interface issues with latest PVE

That may explain some weirdness I observed yesterday: I added a bridge, rebooted the server, and after that the management IP address was gone. I had a look into the configuration file for networking and saw that "netmask 255.255.255.0" had been converted into <ip>/24. After removing the whitespace and rebooting, the network was OK again.

On 04.04.20 at 13:02, Amin Vakil wrote:
>> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.
> I faced this problem two weeks ago and ended up configuring it from ssh
> as well.
From info at aminvakil.com Sat Apr 4 19:02:31 2020
From: info at aminvakil.com (Amin Vakil)
Date: Sat, 4 Apr 2020 21:32:31 +0430
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To: <486035789.6270330.1586017505230.JavaMail.zimbra@odiso.com>
References: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com> <3343b2f5-c63d-79b2-8bd9-c0359930446d@web.de> <486035789.6270330.1586017505230.JavaMail.zimbra@odiso.com>
Message-ID: <1e7fcfd4-a402-3129-db7c-af5988ba3681@aminvakil.com>

> @Amin
>
>>> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.
>> I faced this problem two weeks ago and ended up configuring it from ssh as well.
>
> Have you updated to the latest version? There was a small bug when the new cidr format for addresses was introduced, and it was fixed some days later.

Unfortunately I can't remember the exact time this happened - it was something like 3 or 4 weeks ago. I've updated since then, but as all servers are in production right now, I can't test and verify whether it's been fixed or not.

I know it should be fixed, but it was not a big problem, as this is usually configured only once, and at the time it could be fixed from the console instead of the GUI.
From aderumier at odiso.com Sun Apr 5 09:12:08 2020
From: aderumier at odiso.com (Alexandre DERUMIER)
Date: Sun, 5 Apr 2020 09:12:08 +0200 (CEST)
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To: <5d786a70-b9e3-7c87-8960-38f70af19d34@tashicell.com>
References:
Message-ID: <987063392.6279578.1586070728652.JavaMail.zimbra@odiso.com>

@Sonam:

>> Finally, I cannot apply pending network changes from the web even after installing *ifupdown2* (without reboot). It complains about a subscription. Do I need an enterprise subscription for that?

Have you changed the proxmox repository to no-subscription?

https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
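For reference, the no-subscription repository from that wiki page is a one-line apt source; a sketch for PVE 6.x on Debian Buster (the file name is just a convention):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

followed by an 'apt-get update' (and 'apt-get dist-upgrade' to actually pull the updates).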
From nada at verdnatura.es Sun Apr 5 11:09:48 2020
From: nada at verdnatura.es (nada)
Date: Sun, 05 Apr 2020 11:09:48 +0200
Subject: [PVE-User] spamassassin
Message-ID: <355362bd07d3d5c1c9997d60a88b06e6@verdnatura.es>

good day
PLS reconfigure your antiSPAM
all mails are getting "bad" scores because of the following
+2.0 FROM_NOT_REPLYTO
+5.0 FROM_NOT_REPLYTO_SAME_DOMAIN
and all mails have the "good" score
-1.0 MAILING_LIST_MULTI
so PLS look at your /usr/share/spamassassin and add something like the following to your local.cf or to your XX_user.cf
score MAILING_LIST_MULTI -7
or
score FROM_NOT_REPLYTO_SAME_DOMAIN -7
hope it helps
have a nice weekend ;-)
Nada

From alain.pean at c2n.upsaclay.fr Sun Apr 5 11:38:26 2020
From: alain.pean at c2n.upsaclay.fr (=?UTF-8?Q?Alain_p=c3=a9an?=)
Date: Sun, 5 Apr 2020 11:38:26 +0200
Subject: [PVE-User] spamassassin
In-Reply-To: <355362bd07d3d5c1c9997d60a88b06e6@verdnatura.es>
References: <355362bd07d3d5c1c9997d60a88b06e6@verdnatura.es>
Message-ID:

On 05/04/2020 at 11:09, nada wrote:
> PLS reconfigure your antiSPAM
>
> all mails are getting "bad" scores because of the following
> +2.0 FROM_NOT_REPLYTO
> +5.0 FROM_NOT_REPLYTO_SAME_DOMAIN
>
> and all mails have the "good" score
> -1.0 MAILING_LIST_MULTI
>
> so PLS look at your /usr/share/spamassassin and add something like the following to your local.cf or to your XX_user.cf
> score MAILING_LIST_MULTI -7
> or
> score FROM_NOT_REPLYTO_SAME_DOMAIN -7
Perhaps it is your mail server's spamassassin that you have to tweak?

Alain

--
System/Network Administrator
C2N (ex LPN) Centre de Nanosciences et Nanotechnologies (UMR 9001)
10 Boulevard Thomas Gobert (ex Avenue de la Vauve), 91920 Palaiseau
Tel : 01-70-27-06-88

From aderumier at odiso.com Sun Apr 5 16:25:34 2020
From: aderumier at odiso.com (Alexandre DERUMIER)
Date: Sun, 5 Apr 2020 16:25:34 +0200 (CEST)
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To: <1e7fcfd4-a402-3129-db7c-af5988ba3681@aminvakil.com>
References: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com> <3343b2f5-c63d-79b2-8bd9-c0359930446d@web.de> <486035789.6270330.1586017505230.JavaMail.zimbra@odiso.com> <1e7fcfd4-a402-3129-db7c-af5988ba3681@aminvakil.com>
Message-ID: <865349726.6282743.1586096734875.JavaMail.zimbra@odiso.com>

>> Unfortunately I can't remember the exact time this happened - it was something like 3 or 4 weeks ago. I've updated since then, but as all servers are in production right now, I can't test and verify whether it's been fixed or not.

Ok, no problem. I have been able to reproduce it. It was indeed when the config had an extra space after the ip address or the netmask in /etc/network/interfaces. (Maybe it had been written manually before?)

I have sent a patch to the pve-devel mailing list to fix the parsing.

----- Original Message -----
From: "Amin Vakil"
To: "proxmoxve"
Cc: "aderumier"
Sent: Saturday, 4 April 2020 19:02:31
Subject: Re: [PVE-User] Web interface issues with latest PVE

> @Amin
>
>>> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.
>> I faced this problem two weeks ago and ended up configuring it from ssh as well.
>
> Have you updated to the latest version? There was a small bug when the new cidr format for addresses was introduced, and it was fixed some days later.
Unfortunately I can't remember the exact time this happened - it was something like 3 or 4 weeks ago. I've updated since then, but as all servers are in production right now, I can't test and verify whether it's been fixed or not.

I know it should be fixed, but it was not a big problem, as this is usually configured only once, and at the time it could be fixed from the console instead of the GUI.

From sysadmin at tashicell.com Sun Apr 5 19:21:36 2020
From: sysadmin at tashicell.com (Sonam Namgyel)
Date: Sun, 5 Apr 2020 23:21:36 +0600
Subject: [PVE-User] Web interface issues with latest PVE
In-Reply-To: <865349726.6282743.1586096734875.JavaMail.zimbra@odiso.com>
References: <78a95f1d-84be-93b6-e076-dea553a9371e@aminvakil.com> <3343b2f5-c63d-79b2-8bd9-c0359930446d@web.de> <486035789.6270330.1586017505230.JavaMail.zimbra@odiso.com> <1e7fcfd4-a402-3129-db7c-af5988ba3681@aminvakil.com> <865349726.6282743.1586096734875.JavaMail.zimbra@odiso.com>
Message-ID:

Dear all,

Thank you for the help. All the problems were indeed solved after updating the repository to *pve-no-subscription* (I don't know why I missed this important documentation).

Thank you once again for all the help. I will come again with other simple problems. I hope you all will help me get Proxmoxing DONE!

Cheers.

Sonam

On 4/5/20 8:25 PM, Alexandre DERUMIER wrote:
>>> Unfortunately I can't remember the exact time this happened - it was something like 3 or 4 weeks ago. I've updated since then, but as all servers are in production right now, I can't test and verify whether it's been fixed or not.
> Ok, no problem. I have been able to reproduce it. It was indeed when the config had an extra space after the ip address or the netmask in /etc/network/interfaces. (Maybe it had been written manually before?)
>
> I have sent a patch to the pve-devel mailing list to fix the parsing.
>
> ----- Original Message -----
> From: "Amin Vakil"
> To: "proxmoxve"
> Cc: "aderumier"
> Sent: Saturday, 4 April 2020 19:02:31
> Subject: Re: [PVE-User] Web interface issues with latest PVE
>
>> @Amin
>>>> I have installed the latest Proxmox VE 6.1-2 and when modifying network information from the web, the prefix (netmask) is slashed off from the old interface config, irrespective of whether I specify CIDR or not. And then, I cannot create a cluster from the web GUI with the link0 address in the dropdown if the IP address has a prefix (/cidr). I had to create it from the CLI.
>>> I faced this problem two weeks ago and ended up configuring it from ssh as well.
>>
>> Have you updated to the latest version? There was a small bug when the new cidr format for addresses was introduced, and it was fixed some days later.
>
> Unfortunately I can't remember the exact time this happened - it was something like 3 or 4 weeks ago. I've updated since then, but as all servers are in production right now, I can't test and verify whether it's been fixed or not.
>
> I know it should be fixed, but it was not a big problem, as this is usually configured only once, and at the time it could be fixed from the console instead of the GUI.

From leandro at tecnetmza.com.ar Mon Apr 6 16:22:28 2020
From: leandro at tecnetmza.com.ar (Leandro Roggerone)
Date: Mon, 6 Apr 2020 11:22:28 -0300
Subject: [PVE-User] cpus assign when creating vm or ct.
Message-ID:

Hi guys,

My pve node shows:

CPU usage
0.53% of 24 CPU(s)

meaning I have 24 cpus available.

In order to follow best practices, I would like to know: if I create a vm or ct, should I assign those 24 cpus? If not, then if I create a vm or ct and assign 4 cpus, does it mean I'm adding load to only 4 of the 24 cpus and wasting 20 cpus? Is this ok?

Any thoughts about this would be greatly appreciated.

Leandro.

From gianni.milo22 at gmail.com Mon Apr 6 18:28:41 2020
From: gianni.milo22 at gmail.com (Gianni Milo)
Date: Mon, 6 Apr 2020 17:28:41 +0100
Subject: [PVE-User] cpus assign when creating vm or ct.
In-Reply-To:
References:
Message-ID:

People usually run more than one VM/CT per host, so allocating fewer cpu cores and less ram per VM/CT than the host can support is considered a "normal" thing to do. On the other hand, if you plan on using just one VM/CT on a single host, nothing is preventing you from allocating all the cpu cores and ram that the host supports to that single VM/CT.

Over-provisioning (allocating more than the host can support) cpu cores and ram may also work, as long as you don't utilise all the VMs at the same time. Depending on the workload requirements of each VM, you may have to allocate a specific number of cpu cores and amount of ram to match its specific needs.

G.

On Mon, 6 Apr 2020 at 15:23, Leandro Roggerone wrote:

> Hi guys,
> My pve node shows:
> CPU usage
> 0.53% of 24 CPU(s)
> meaning I have 24 cpus available.
> In order to follow best practices, I would like to know: if I create a vm or ct, should I assign those 24 cpus? If not, then if I create a vm or ct and assign 4 cpus, does it mean I'm adding load to only 4 of the 24 cpus and wasting 20 cpus? Is this ok?
>
> Any thoughts about this would be greatly appreciated.
> Leandro.
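To illustrate the per-guest allocation Gianni describes, core counts can also be set from the node's CLI; a sketch with made-up guest IDs 100 (VM) and 101 (container):

# cap VM 100 at 4 cores - it can then load at most 4 host threads at a time
qm set 100 --cores 4
# cap container 101 at 2 cores
pct set 101 --cores 2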
From sivakumar.saravanan.jv.ext at valeo-siemens.com  Tue Apr  7 19:24:57 2020
From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN)
Date: Tue, 7 Apr 2020 19:24:57 +0200
Subject: [PVE-User] Not able to select the option
Message-ID: 

Hello,

I am trying to install the Debian OS on a VM.

I mounted the ISO as a DVD and booted the VM, and I am now working
from the VM console in noVNC mode.

The VM has booted into the installer and I can see all installation
options. I selected the first option, "Graphical Install", and pressed
Enter, but it is not responding at all.

Kindly advise why Enter is not responding.

Mit freundlichen Grüßen / Best regards / Cordialement,

Sivakumar SARAVANAN

Externer Dienstleister für / External service provider for
Valeo Siemens eAutomotive Germany GmbH
Research & Development
R & D SWENG TE 1 INFTE
Frauenauracher Straße 85
91056 Erlangen, Germany
Tel.: +49 9131 9892 0000
Mobile: +49 176 7698 5441
sivakumar.saravanan.jv.ext at valeo-siemens.com
valeo-siemens.com

Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger
Schwab, Michael Axmann; Chairman of the Supervisory Board: Hartmut
Klötzer; Registered office: Erlangen, Germany; Commercial registry:
Fürth, HRB 15655

From martin at proxmox.com  Tue Apr  7 19:59:41 2020
From: martin at proxmox.com (Martin Maurer)
Date: Tue, 7 Apr 2020 19:59:41 +0200
Subject: [PVE-User] Not able to select the option
In-Reply-To: 
References: 
Message-ID: <21e10e54-c08d-8016-97b5-972a249d03cd@proxmox.com>

On 07.04.20 19:24, Sivakumar SARAVANAN wrote:
> [...]
> I selected the first option, "Graphical Install", and pressed Enter,
> but it is not responding at all.
>
> Kindly advise why Enter is not responding.

Not enough memory for your VM?

-- 
Best Regards,

Martin Maurer

martin at proxmox.com
https://www.proxmox.com

From leandro at tecnetmza.com.ar  Tue Apr  7 20:58:15 2020
From: leandro at tecnetmza.com.ar (Leandro Roggerone)
Date: Tue, 7 Apr 2020 15:58:15 -0300
Subject: [PVE-User] sending special keys over noVNC.
Message-ID: 

Hi guys, I'm running Windows Server 2012.
After following the process described here:
http://www.kieranlane.com/2013/09/18/resetting-administrator-password-windows-2012/
for password recovery, I need to send the Windows + U combination.
How can I do it from noVNC? The Windows key acts on my own PC when I
press it.

Any idea would be appreciated.
Leandro.

From sivakumar.saravanan.jv.ext at valeo-siemens.com  Tue Apr  7 21:08:47 2020
From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN)
Date: Tue, 7 Apr 2020 21:08:47 +0200
Subject: [PVE-User] Not able to select the option
In-Reply-To: <21e10e54-c08d-8016-97b5-972a249d03cd@proxmox.com>
References: <21e10e54-c08d-8016-97b5-972a249d03cd@proxmox.com>
Message-ID: 

Hello Martin,

Thank you for your support. It works now.

Mit freundlichen Grüßen / Best regards / Cordialement,

Sivakumar SARAVANAN

On Tue, Apr 7, 2020 at 7:59 PM Martin Maurer wrote:
> [...]
> Not enough memory for your VM?

From leesteken at pm.me  Tue Apr  7 21:12:36 2020
From: leesteken at pm.me (leesteken at pm.me)
Date: Tue, 07 Apr 2020 19:12:36 +0000
Subject: [PVE-User] sending special keys over noVNC.
In-Reply-To: 
References: 
Message-ID: 

------- Original Message -------
On Tuesday, April 7, 2020 8:58 PM, Leandro Roggerone wrote:

> Hi guys, I'm running Windows Server 2012.
> After following the process described here:
> http://www.kieranlane.com/2013/09/18/resetting-administrator-password-windows-2012/
> for password recovery, I need to send the Windows + U combination.
> How can I do it from noVNC?
> The Windows key acts on my own PC when I press it.

Open the menu (the little triangle) on the side of the noVNC window
(usually on the left). Click the topmost button (labeled [A]), which
opens another menu that allows you to "virtually press" the Ctrl, Alt,
Windows, Tab, Escape or Ctrl+Alt+Del keys, respectively.
Press the third button from the top to "virtually hold" the Windows
key, then press the U key on your keyboard. Remember to "unpress" the
Windows key in the menu afterwards to release it.

> Any idea would be appreciated.
> Leandro.

Hope this helps,
Arjen

From leesteken at pm.me  Wed Apr  8 17:35:55 2020
From: leesteken at pm.me (leesteken at pm.me)
Date: Wed, 08 Apr 2020 15:35:55 +0000
Subject: [PVE-User] sending special keys over noVNC.
In-Reply-To: 
References: 
Message-ID: <1cOO61bwMCc9cr_CUUCsfYmA4TI8BgnBC8VYBsLASv-fONDQdbzNN_aUif3wXeErFJn52ThGGaKlSRF8w1f7bjTUiM_HIncsvH8W6ALMeb4=@pm.me>

Yes, I'm using (no-subscription) Proxmox 6.1-8:

[noVNC-6.1-8.png]

------- Original Message -------
On Wednesday, April 8, 2020 1:56 PM, Leandro Roggerone wrote:

> Dear Arjen, thanks for your comment.
> Does your noVNC menu provide the Windows key? Mine does not.
> [noVNC.png]
> If so...
> I will try to find out how to add it.
> Thanks!!
> Leandro.
>
> El mar., 7 abr. 2020 a las 16:12, leesteken--- via pve-user ()
> escribió:
>> [...]

From krienke at uni-koblenz.de  Tue Apr 14 15:54:30 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Tue, 14 Apr 2020 15:54:30 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
Message-ID: <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>

Hello,

in the meantime I have learned a lot from this group (thanks a lot) and
solved many of the performance problems I initially faced with proxmox
VMs having their storage on CEPH RBDs.
I parallelized access to many disks on a VM where possible, used
iothreads, and activated writeback cache.

Running bonnie++ I am now able to get about 300 MBytes/sec block write
performance, which is a great value because it even scales out with
ceph: if I run the same bonnie++ on e.g. two machines, I get
600 MBytes/sec in total. Great.

The last strangeness I am experiencing is read performance. The same
bonnie++ on a VM's xfs filesystem that yields 300 MB/sec write
performance only gets a block read of about 90 MB/sec.

So on one of the pxa hosts, and later also on one of the ceph cluster
nodes (nautilus 14.2.8, 144 OSDs), I ran a rados bench test to see if
ceph is slowing down reads. The results on both systems were very
similar. Here is the test result from the pxa host:

# rados bench -p my-rbd 60 write --no-cleanup
Total time run:         60.284332
Total writes made:      5376
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     356.71
Stddev Bandwidth:       46.8361
Max bandwidth (MB/sec): 424
Min bandwidth (MB/sec): 160
Average IOPS:           89
Stddev IOPS:            11
Max IOPS:               106
Min IOPS:               40
Average Latency(s):     0.179274
Stddev Latency(s):      0.105626
Max latency(s):         1.00746
Min latency(s):         0.0656261

# echo 3 > /proc/sys/vm/drop_caches
# rados bench -p pxa-rbd 60 seq
Total time run:       24.208097
Total reads made:     5376
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   888.298
Average IOPS:         222
Stddev IOPS:          33
Max IOPS:             249
Min IOPS:             92
Average Latency(s):   0.0714553
Max latency(s):       0.63154
Min latency(s):       0.0237746

According to these numbers the relation between write and read
performance should be the other way round: writes should be slower
than reads, but on a VM it is exactly the other way round.

Any idea why writes on a VM are nevertheless ~3 times faster than
reads, and what I could try to speed up reading?

Thanks a lot
Rainer
-- 
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312

From a.antreich at proxmox.com  Tue Apr 14 16:42:27 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Tue, 14 Apr 2020 16:42:27 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
Message-ID: <20200414144227.GA411486@dona.proxmox.com>

On Tue, Apr 14, 2020 at 03:54:30PM +0200, Rainer Krienke wrote:
> [...]
> According to these numbers the relation between write and read
> performance should be the other way round: writes should be slower
> than reads, but on a VM it is exactly the other way round.
Ceph does reads in parallel, while writes are done to the primary OSD
by the client. And that OSD is responsible for distributing the other
copies.

> Any idea why writes on a VM are nevertheless ~3 times faster than
> reads, and what I could try to speed up reading?
What is the byte size of bonnie++? If it uses 4 KB and data isn't in
the cache, whole objects need to be requested from the cluster.

-- 
Cheers,
Alwin

From krienke at uni-koblenz.de  Tue Apr 14 17:21:44 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Tue, 14 Apr 2020 17:21:44 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <20200414144227.GA411486@dona.proxmox.com>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
 <20200414144227.GA411486@dona.proxmox.com>
Message-ID: <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>

Am 14.04.20 um 16:42 schrieb Alwin Antreich:
>> According to these numbers the relation between write and read
>> performance should be the other way round: writes should be slower
>> than reads, but on a VM it is exactly the other way round.
> Ceph does reads in parallel, while writes are done to the primary OSD
> by the client. And that OSD is responsible for distributing the other
> copies.

Ah yes, right. The primary OSD has to wait until all the OSDs in the PG
have confirmed that the data has been written to each of them. Reads,
as you said, are parallel, so I would expect reading to be faster than
writing, but for me it is *not*, in a proxmox VM with ceph RBD storage.

However, reads are faster at the ceph level, in a rados bench run
directly on a pxa host (no VM), which is what I would also expect for
reads/writes inside a VM.

>> Any idea why writes on a VM are nevertheless ~3 times faster than
>> reads, and what I could try to speed up reading?
> What is the byte size of bonnie++? If it uses 4 KB and data isn't in
> the cache, whole objects need to be requested from the cluster.

I did not find information about the block sizes used.
The whole file that is written and later read again by bonnie++ is,
however, by default at least twice the size of your RAM.

In a VM I also tried to read its own striped LV device:
dd if=/dev/vg/testlv of=/dev/null bs=1024k status=progress
(after clearing the VM's cache). /dev/vg/testlv is a striped LV (on 4
disks) with xfs on it, on which I tested the speed using bonnie++
before. This dd also did not go beyond about 100 MB/sec, whereas the
rados bench promises much more.

Thanks
Rainer
-- 
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312

From a.antreich at proxmox.com  Tue Apr 14 18:09:00 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Tue, 14 Apr 2020 18:09:00 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
 <20200414144227.GA411486@dona.proxmox.com>
 <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>
Message-ID: <20200414160900.GB411486@dona.proxmox.com>

On Tue, Apr 14, 2020 at 05:21:44PM +0200, Rainer Krienke wrote:
> [...]
> I did not find information about the block sizes used.
According to the man page the chunk size is 8192 bytes by default.

> In a VM I also tried to read its own striped LV device:
> dd if=/dev/vg/testlv of=/dev/null bs=1024k status=progress
> (after clearing the VM's cache). /dev/vg/testlv is a striped LV (on 4
> disks) with xfs on it, on which I tested the speed using bonnie++
> before. This dd also did not go beyond about 100 MB/sec, whereas the
> rados bench promises much more.
Do you have a VM without striped volumes? I suppose there will be two
requests, one for each half of the data. That could slow down the read
as well.

And you can disable the cache to verify that cache misses don't impact
the performance.

-- 
Cheers,
Alwin
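For reference, a striped LV of the kind discussed in this thread could
be created and read-tested roughly as follows (a sketch only; the
volume group name "vg", the LV size, and the stripe parameters are
assumptions, not taken from Rainer's setup):

    # stripe across 4 PVs; -I sets the stripe size (LVM's default is 64K)
    lvcreate --stripes 4 --stripesize 512k -L 100G -n testlv vg
    # sequential read test that bypasses the guest page cache
    dd if=/dev/vg/testlv of=/dev/null bs=4M iflag=direct status=progress

The stripe size is the knob that decides how much of a sequential read
each underlying RBD serves before the next one takes over.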
From gilberto.nunes32 at gmail.com  Tue Apr 14 19:35:55 2020
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Tue, 14 Apr 2020 14:35:55 -0300
Subject: [PVE-User] Create secondary pool on ceph servers..
Message-ID: 

Hi there

I have 7 servers with PVE 6, all updated...
The servers are named pve1, pve2 and so on...
pve3, pve4 and pve5 have 960GB SSDs.
So we decided to create a second pool that will use only these SSDs.
I have read "Ceph CRUSH & device classes" in order to do that!
So, just to do things right, I need to check this:
1 - first create OSDs on all disks, SAS and SSD
2 - second create a different pool with the commands below:

ruleset:
ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>

create pool:
ceph osd pool set <pool-name> crush_rule <rule-name>

Well, my question is: can I create OSDs on all disks, both SAS and
SSD, and then after that create the ruleset and the pool?
Will these operations generate some impact?

Thanks a lot
Gilberto

From krienke at uni-koblenz.de  Tue Apr 14 20:15:15 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Tue, 14 Apr 2020 20:15:15 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <20200414160900.GB411486@dona.proxmox.com>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
 <20200414144227.GA411486@dona.proxmox.com>
 <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>
 <20200414160900.GB411486@dona.proxmox.com>
Message-ID: <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de>

Am 14.04.20 um 18:09 schrieb Alwin Antreich:
>> In a VM I also tried to read its own striped LV device: [...]
> Do you have a VM without striped volumes? I suppose there will be two
> requests, one for each half of the data. That could slow down the read
> as well.

Yes, the logical volume is striped across 4 physical volumes (RBDs).
But since exactly this setup helped to boost writing (more
parallelism), it should do exactly the same for reads, since blocks
can be read from more separate RBD devices and thus more disks in
general.

I also tested a VM with just a single RBD used for the VM's disk, and
there the effect is quite the same.

> And you can disable the cache to verify that cache misses don't
> impact the performance.

I tried and disabled the writeback cache, but the effect was only
minimal.

Have a nice day
Rainer

From a.antreich at proxmox.com  Tue Apr 14 20:30:33 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Tue, 14 Apr 2020 20:30:33 +0200
Subject: [PVE-User] Create secondary pool on ceph servers..
In-Reply-To: 
References: 
Message-ID: <20200414183033.GA2812966@dona.proxmox.com>

On Tue, Apr 14, 2020 at 02:35:55PM -0300, Gilberto Nunes wrote:
> [...]
> Well, my question is: can I create OSDs on all disks, both SAS and
> SSD, and then after that create the ruleset and the pool?
> Will these operations generate some impact?
If your OSD types aren't mixed, then best create the rule for the
existing pool first. All data will move once the rule is applied. So,
not much movement if the data is already on the correct OSD type.

-- 
Cheers,
Alwin

From gilberto.nunes32 at gmail.com  Tue Apr 14 20:57:05 2020
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Tue, 14 Apr 2020 15:57:05 -0300
Subject: [PVE-User] Create secondary pool on ceph servers..
In-Reply-To: <20200414183033.GA2812966@dona.proxmox.com>
References: <20200414183033.GA2812966@dona.proxmox.com>
Message-ID: 

Oh! Sorry Alwin, I had some urgency to do this. So this is what I did...
First, I added all HDDs, both SAS and SSD, as OSDs.
Then I checked whether the system would detect the SSDs as ssd and the
SAS disks as hdd, but there was no difference: it showed all of them as
hdd! So I changed the class with these commands:

ceph osd crush rm-device-class osd.7
ceph osd crush set-device-class ssd osd.7
ceph osd crush rm-device-class osd.8
ceph osd crush set-device-class ssd osd.8
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12
ceph osd crush rm-device-class osd.13
ceph osd crush set-device-class ssd osd.13
ceph osd crush rm-device-class osd.14
ceph osd crush set-device-class ssd osd.14

After that, ceph osd crush tree --show-shadow showed the different
device classes:

ceph osd crush tree --show-shadow
ID  CLASS WEIGHT   TYPE NAME
-24   ssd  4.36394 root default~ssd
-20   ssd        0     host pve1~ssd
-21   ssd        0     host pve2~ssd
-17   ssd  0.87279     host pve3~ssd
  7   ssd  0.87279         osd.7
-18   ssd  0.87279     host pve4~ssd
  8   ssd  0.87279         osd.8
-19   ssd  0.87279     host pve5~ssd
 12   ssd  0.87279         osd.12
-22   ssd  0.87279     host pve6~ssd
 13   ssd  0.87279         osd.13
-23   ssd  0.87279     host pve7~ssd
 14   ssd  0.87279         osd.14
 -2   hdd 12.00282 root default~hdd
-10   hdd  1.09129     host pve1~hdd
  0   hdd  1.09129         osd.0
.....
.....

Then I created the rule:

ceph osd crush rule create-replicated SSDPOOL default host ssd

Then I created a pool named SSDs and assigned the new rule to it:

ceph osd pool set SSDs crush_rule SSDPOOL

It seems to work properly... What do you think?

---
Gilberto Nunes Ferreira

Em ter., 14 de abr. de 2020 às 15:30, Alwin Antreich escreveu:
> [...]
From a.antreich at proxmox.com  Wed Apr 15 09:24:48 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Wed, 15 Apr 2020 09:24:48 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
 <20200414144227.GA411486@dona.proxmox.com>
 <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>
 <20200414160900.GB411486@dona.proxmox.com>
 <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de>
Message-ID: <20200415072448.GC2812966@dona.proxmox.com>

On Tue, Apr 14, 2020 at 08:15:15PM +0200, Rainer Krienke wrote:
> [...]
> I tried and disabled the writeback cache, but the effect was only
> minimal.
It seems that at this point the optimizations need to be done inside
the VM (eg. readahead). I think the data that is requested is not in
the cache and too small to be fetched within one read operation.

-- 
Cheers,
Alwin

From gbr at majentis.com  Thu Apr 16 15:31:44 2020
From: gbr at majentis.com (Gerald Brandt)
Date: Thu, 16 Apr 2020 08:31:44 -0500
Subject: [PVE-User] Proxmox 6 loses network every 24 hours
Message-ID: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>

Hi,

I have a Proxmox 6 server at SoYouStart https://www.soyoustart.com.
I've been running a server there for years, running Proxmox 4. Instead
of upgrading, I backed up all my VMs, did a fresh Proxmox 6 install,
and restored my VMs. It works well, except for NFS, but that's another
topic.

Every 24 hours, vmbr0 disappears. The SoYouStart staff can't ping the
machine, assume it's gone down, and hard reboot the server. 24 hours
after it's back up, the cycle repeats. I'm not sure what is going on.

One weird thing is that vmbr0 isn't configured in Proxmox; SoYouStart
has it configured some other way. Could this be part of the problem?
It looks like vmbr0 gets its IP via DHCP.
(Yup, check in /etc/network/interfaces) Apr 15 21:32:32 ns500184 ifup[695]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 7 Apr 15 21:32:32 ns500184 dhclient[744]: Sending on LPF/vmbr0/00:25:90:7b:a2:b8 Apr 15 21:32:32 ns500184 dhclient[744]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 7 Apr 15 21:32:32 ns500184 dhclient[744]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 13 Apr 15 21:32:32 ns500184 ifup[695]: DHCPDISCOVER on vmbr0 to 255.255.255.255 port 67 interval 13 Apr 15 21:32:32 ns500184 ifup[695]: DHCPREQUEST for x.x.x.x on vmbr0 to 255.255.255.255 port 67 Apr 15 21:32:32 ns500184 dhclient[744]: DHCPREQUEST for x.x.x.x on vmbr0 to 255.255.255.255 port 67 Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | vmbr0 | True | x.x.x.x | 255.255.255.0 | global | 00:25:90:7b:a2:b8 | Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | vmbr0 | True | xxxxxxxxxxxxxxxxxxx/64 | . | link | 00:25:90:7b:a2:b8 | Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 0 | 0.0.0.0 | x.x.x.254 | 0.0.0.0 | vmbr0 | UG | Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 1 | x.x.x.0 | 0.0.0.0 | 255.255.255.0 | vmbr0 | U | Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 1 | fe80::/64 | :: | vmbr0 | U | Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 3 | local | :: | vmbr0 | U | Apr 15 21:32:32 ns500184 cloud-init[819]: ci-info: | 4 | ff00::/8 | :: | vmbr0 | U | Apr 15 21:32:32 ns500184 ntpd[891]: Listen normally on 3 vmbr0 x.x.x.x:123 Apr 15 21:32:32 ns500184 ntpd[891]: Listen normally on 5 vmbr0 [xxxxxxxxxx]:123 Apr 15 21:32:32 ns500184 kernel: [ 12.212574] vmbr0: port 1(enp1s0) entered blocking state Apr 15 21:32:32 ns500184 kernel: [ 12.212639] vmbr0: port 1(enp1s0) entered disabled state Apr 15 21:32:32 ns500184 kernel: [ 15.603713] vmbr0: port 1(enp1s0) entered blocking state Apr 15 21:32:32 ns500184 kernel: [ 15.603773] vmbr0: port 1(enp1s0) entered forwarding state Apr 15 21:32:32 ns500184 kernel: [ 15.603914] IPv6: ADDRCONF(NETDEV_CHANGE): vmbr0: link becomes ready Apr 15 21:32:47 ns500184 kernel: [ 35.452560] vmbr0: port 2(tap5000i0) entered blocking state Apr 15 21:32:47 ns500184 kernel: [ 35.452608] vmbr0: port 2(tap5000i0) entered disabled state Apr 15 21:32:47 ns500184 kernel: [ 35.452772] vmbr0: port 2(tap5000i0) entered blocking state Apr 15 21:32:47 ns500184 kernel: [ 35.452819] vmbr0: port 2(tap5000i0) entered forwarding state Gerald proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve) pve-manager: 6.1-8 (running version: 6.1-8/806edfe1) pve-kernel-helper: 6.1-8 pve-kernel-5.3: 6.1-6 pve-kernel-5.0: 6.0-11 pve-kernel-5.3.18-3-pve: 5.3.18-3 pve-kernel-5.0.21-5-pve: 5.0.21-10 ceph-fuse: 12.2.11+dfsg1-2.1+b1 corosync: 3.0.3-pve1 criu: 3.11-3 glusterfs-client: 5.5-3 ifupdown: 0.8.35 libjs-extjs: 6.0.1-10 libknet1: 1.15-pve1 libpve-access-control: 6.0-6 libpve-apiclient-perl: 3.0-3 libpve-common-perl: 6.0-17 libpve-guest-common-perl: 3.0-5 libpve-http-server-perl: 3.0-5 libpve-storage-perl: 6.1-5 libqb0: 1.0.5-1 libspice-server1: 0.14.2-4~pve6+1 lvm2: 2.03.02-pve4 lxc-pve: 3.2.1-1 lxcfs: 4.0.1-pve1 novnc-pve: 1.1.0-1 proxmox-mini-journalreader: 1.1-1 proxmox-widget-toolkit: 2.1-3 pve-cluster: 6.1-4 pve-container: 3.0-23 pve-docs: 6.1-6 pve-edk2-firmware: 2.20200229-1 pve-firewall: 4.0-10 pve-firmware: 3.0-7 pve-ha-manager: 3.0-9 pve-i18n: 2.0-4 pve-qemu-kvm: 4.1.1-4 pve-xtermjs: 4.3.0-1 pve-zsync: 2.0-3 qemu-server: 6.1-7 smartmontools: 7.1-pve2 spiceterm: 3.1-1 vncterm: 1.6-1 zfsutils-linux: 0.8.3-pve1 From gianni.milo22 at gmail.com Thu Apr 16 
17:07:37 2020
From: gianni.milo22 at gmail.com (Gianni Milo)
Date: Thu, 16 Apr 2020 16:07:37 +0100
Subject: [PVE-User] Proxmox 6 loses network every 24 hours
In-Reply-To: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>
References: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>
Message-ID: 

Maybe you can find some clues in the syslogs covering the time it goes
off? Check for either enp1s0 or vmbr0, for example.

G.

On Thu, 16 Apr 2020 at 14:32, Gerald Brandt wrote:
> [...]
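Following up on the suggestion above, one way to look for clues is to
pull the DHCP client's history out of the journal and inspect the lease
it negotiated (a sketch; the lease file name is the Debian default for
dhclient and is an assumption about this particular setup):

    # DHCP client activity around the outages
    journalctl | grep -i -E 'dhclient|dhcp' | tail -n 50
    # lease lifetimes dhclient has recorded for vmbr0 (default path)
    cat /var/lib/dhcp/dhclient.vmbr0.leases

If the outages line up with the lease expiry time, the lease renewal,
not the bridge itself, is the thing to chase.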
From elacunza at binovo.es  Thu Apr 16 18:00:46 2020
From: elacunza at binovo.es (Eneko Lacunza)
Date: Thu, 16 Apr 2020 18:00:46 +0200
Subject: [PVE-User] Proxmox 6 loses network every 24 hours
In-Reply-To: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>
References: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>
Message-ID: <5be3f8bd-ae20-93e6-497a-3f30a39fa2bf@binovo.es>

Hi Gerald,

I'm sorry about your issue. I tried SoYouStart some time ago (3-4 years
I'd say), but my experience was really awful: I had to phone about 5
numbers, talked to people in half the countries in Europe, and finally
the support guy hung up on me.
Probably there's some kind of network problem and the vmbr0 DHCP lease
expires. I remember experiencing this problem at some point, but I'm
not sure it was on SoYouStart.

I decided it wasn't worth the trouble. Use a slightly pricier OVH
dedicated server, or better, a Hetzner server.

Cheers
Eneko

El 16/4/20 a las 15:31, Gerald Brandt escribió:
> [...]
-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

From gbr at majentis.com  Sun Apr 19 05:40:18 2020
From: gbr at majentis.com (Gerald Brandt)
Date: Sat, 18 Apr 2020 22:40:18 -0500
Subject: [PVE-User] Proxmox 6 loses network every 24 hours
In-Reply-To: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>
References: <4973416c-55fa-bf03-831a-dd32014e4120@majentis.com>
Message-ID: <1d10c4d4-b452-f99a-9842-6ff7477945ee@majentis.com>

Changing Proxmox from DHCP on vmbr0 to static stopped the issue.

Gerald

On 2020-04-16 8:31 a.m., Gerald Brandt wrote:
> [...]
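For anyone hitting the same thing, a static stanza replacing the DHCP
one might look like this in /etc/network/interfaces (a sketch; the
addresses are placeholders to be filled with the values SoYouStart
assigned, and enp1s0 is the bridge port name taken from the kernel log
earlier in the thread):

    auto vmbr0
    iface vmbr0 inet static
            address x.x.x.x/24
            gateway x.x.x.254
            bridge-ports enp1s0
            bridge-stp off
            bridge-fd 0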
From elacunza at binovo.es  Tue Apr 21 09:20:09 2020
From: elacunza at binovo.es (Eneko Lacunza)
Date: Tue, 21 Apr 2020 09:20:09 +0200
Subject: [PVE-User] 5.4 kernel NFS issue
Message-ID: 

Dear Proxmox developers,

Following the forum post
https://forum.proxmox.com/threads/linux-kernel-5-4-for-proxmox-ve.66854/
I upgraded from 5.3.18-2 to 5.4 on a new Proxmox 6.1 node, to diagnose
a network card issue... The network card seems broken :-( , but I found
that NFS storage doesn't work with the current 5.4 kernel:

Linux proxmox3 5.4.24-1-pve #1 SMP PVE 5.4.24-1 (Mon, 09 Mar 2020
12:59:46 +0100) x86_64 GNU/Linux

I get (lots of) the following in syslog:

Apr 21 09:04:20 proxmox3 pvestatd[1192]: mount error: mount.nfs: No such device
Apr 21 09:04:21 proxmox3 pvedaemon[2379]: mount error: mount.nfs: No such device

Reverting to 5.3.18-2 works.

Cheers
Eneko
-- 
Zuzendari Teknikoa / Director Técnico
Binovo IT Human Project, S.L.
Telf. 943569206
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
www.binovo.es

From a.antreich at proxmox.com  Tue Apr 21 16:01:51 2020
From: a.antreich at proxmox.com (Alwin Antreich)
Date: Tue, 21 Apr 2020 16:01:51 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <75332af1-fc03-2727-1ad2-dcd36c1ee91b@uni-koblenz.de>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
 <20200414144227.GA411486@dona.proxmox.com>
 <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>
 <20200414160900.GB411486@dona.proxmox.com>
 <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de>
 <75332af1-fc03-2727-1ad2-dcd36c1ee91b@uni-koblenz.de>
Message-ID: <20200421140151.GC2646755@dona.proxmox.com>

On Tue, Apr 21, 2020 at 03:34:47PM +0200, Rainer Krienke wrote:
> Hello,
>
> just wanted to thank you for your help and to tell you that I found
> the culprit that made my read performance look rather small on a
> proxmox VM with an LV based on 4 disks (RBDs). The best result using
> bonnie++ as a test was about 100 MBytes/sec read performance on a
> single VM.
>
> I remembered that right after I switched to LVM striping I had tested
> the default block size LVM would use, which was 64K. I found this
> value rather small and replaced it with 512K, which increased read
> and write speed.
>
> I remembered this fact and changed the stripe size again, to the
> default RBD object size, which is 4MB. Using this value the read
> performance went up to 400 MBytes/sec, and if I run two bonnies on
> two different VMs the total read performance in ceph is about
> 800 MBytes/sec. The write performance in this 2-VM test is about
> 1.2 GBytes/sec.
The 4 MB chunk size is good for Ceph. But how is the intended workload
performing on those VMs?

-- 
Cheers,
Alwin

From krienke at uni-koblenz.de  Wed Apr 22 12:43:58 2020
From: krienke at uni-koblenz.de (Rainer Krienke)
Date: Wed, 22 Apr 2020 12:43:58 +0200
Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness
In-Reply-To: <20200421140151.GC2646755@dona.proxmox.com>
References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de>
 <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es>
 <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de>
 <20200414144227.GA411486@dona.proxmox.com>
 <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de>
 <20200414160900.GB411486@dona.proxmox.com>
 <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de>
 <75332af1-fc03-2727-1ad2-dcd36c1ee91b@uni-koblenz.de>
 <20200421140151.GC2646755@dona.proxmox.com>
Message-ID: <50425113-ff79-c41e-4b4c-73511f72f16b@uni-koblenz.de>

Hello,

there is no single workload, but a bunch of VMs that do a lot of
different things, many of which have no special performance demands.
The VMs that do need speed are NFS file servers and SMB servers.
> > -- > Cheers, > Alwin > > _______________________________________________ > pve-user mailing list > pve-user at pve.proxmox.com > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user > -- Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312 From a.antreich at proxmox.com Wed Apr 22 17:03:07 2020 From: a.antreich at proxmox.com (Alwin Antreich) Date: Wed, 22 Apr 2020 17:03:07 +0200 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <50425113-ff79-c41e-4b4c-73511f72f16b@uni-koblenz.de> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es> <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de> <20200414144227.GA411486@dona.proxmox.com> <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de> <20200414160900.GB411486@dona.proxmox.com> <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de> <75332af1-fc03-2727-1ad2-dcd36c1ee91b@uni-koblenz.de> <20200421140151.GC2646755@dona.proxmox.com> <50425113-ff79-c41e-4b4c-73511f72f16b@uni-koblenz.de> Message-ID: <20200422150307.GD2646755@dona.proxmox.com> On Wed, Apr 22, 2020 at 12:43:58PM +0200, Rainer Krienke wrote: > hello, > > there is no single workload, but a bunch of VMs the do a lot of > different things many of which do not special performance > demands. The VMs that do need speed are NFS Fileservers and SMBservers. > > And exactly these servers seem to benefit from the larger block size. > The two I tested mostly on are fileservers. I am curious, so your setup is a striped LVM with what filesystem on top? > > Aside from bonnie++ I also tested writing and reading files especially > many small ones. This also works very good and a fact that at the > beginning of my search was not true is now: The proxmox/ceph solution is > faster and in some disciplines much faster than the old xenbased with > ISCSI storage as a backend. Nice to hear. :) -- Cheers, Alwin From krienke at uni-koblenz.de Thu Apr 23 13:03:15 2020 From: krienke at uni-koblenz.de (Rainer Krienke) Date: Thu, 23 Apr 2020 13:03:15 +0200 Subject: [PVE-User] Proxmox with ceph storage VM performance strangeness In-Reply-To: <20200422150307.GD2646755@dona.proxmox.com> References: <75534fbc-35cc-1bba-566b-f08d22b2e622@uni-koblenz.de> <31f90750-5425-cfa4-7178-8fdc96cd434c@binovo.es> <9f79cb0d-843d-6519-f448-df0fe379848c@uni-koblenz.de> <20200414144227.GA411486@dona.proxmox.com> <9fca29b6-6188-b0fd-44d5-df07dc259efd@uni-koblenz.de> <20200414160900.GB411486@dona.proxmox.com> <5dd07746-e276-9545-91ed-d5d49af2bd41@uni-koblenz.de> <75332af1-fc03-2727-1ad2-dcd36c1ee91b@uni-koblenz.de> <20200421140151.GC2646755@dona.proxmox.com> <50425113-ff79-c41e-4b4c-73511f72f16b@uni-koblenz.de> <20200422150307.GD2646755@dona.proxmox.com> Message-ID: On top I use xfs. Just curious: Why did you ask :-) ? Rainer Am 22.04.20 um 17:03 schrieb Alwin Antreich: > On Wed, Apr 22, 2020 at 12:43:58PM +0200, Rainer Krienke wrote: >> hello, >> >> there is no single workload, but a bunch of VMs the do a lot of >> different things many of which do not special performance >> demands. The VMs that do need speed are NFS Fileservers and SMBservers. >> >> And exactly these servers seem to benefit from the larger block size. >> The two I tested mostly on are fileservers. > I am curious, so your setup is a striped LVM with what filesystem on top? 
From sivakumar.saravanan.jv.ext at valeo-siemens.com  Thu Apr 23 21:38:20 2020
From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN)
Date: Thu, 23 Apr 2020 21:38:20 +0200
Subject: [PVE-User] Not able to start the VM
Message-ID: 

Hello,

I am not able to start the VM; it fails with the error below:

start failed: hugepage allocation failed at
/usr/share/perl5/PVE/QemuServer/Memory.pm line 541

Mit freundlichen Grüßen / Best regards / Cordialement,

Sivakumar SARAVANAN

Externer Dienstleister für / External service provider for
Valeo Siemens eAutomotive Germany GmbH
Research & Development
R & D SWENG TE 1 INFTE
Frauenauracher Straße 85
91056 Erlangen, Germany
Tel.: +49 9131 9892 0000
Mobile: +49 176 7698 5441
sivakumar.saravanan.jv.ext at valeo-siemens.com
valeo-siemens.com

Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger
Schwab, Michael Axmann; Chairman of the Supervisory Board: Hartmut
Klötzer; Registered office: Erlangen, Germany; Commercial registry:
Fürth, HRB 15655
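The error means QEMU could not reserve enough hugepages for the VM's
memory at start time. A quick way to look at the host side (standard
Linux commands; the 1024 page count below is only an illustrative
figure, to be sized to the VM's RAM):

  # Total, free and reserved hugepages on the host, plus the page size.
  grep Huge /proc/meminfo

  # A hugepage-backed VM carries a line such as 'hugepages: 2' (2 MiB
  # pages) or 'hugepages: 1024' (1 GiB pages) in its config file under
  # /etc/pve/qemu-server/. If the free pool is too small, more 2 MiB
  # pages can be reserved at runtime:
  sysctl vm.nr_hugepages=1024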
From gilberto.nunes32 at gmail.com  Thu Apr 23 21:41:13 2020
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Thu, 23 Apr 2020 16:41:13 -0300
Subject: [PVE-User] Not able to start the VM
In-Reply-To: 
References: 
Message-ID: 

Please give us the VM config file, located in /etc/pve/qemu-server.

---
Gilberto Nunes Ferreira

On Thu, Apr 23, 2020 at 16:38, Sivakumar SARAVANAN <
sivakumar.saravanan.jv.ext at valeo-siemens.com> wrote:

> Hello,
>
> I am not able to start the VM; it fails with the error below:
>
> start failed: hugepage allocation failed at
> /usr/share/perl5/PVE/QemuServer/Memory.pm line 541
> [...]

From sivakumar.saravanan.jv.ext at valeo-siemens.com  Thu Apr 23 21:48:26 2020
From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN)
Date: Thu, 23 Apr 2020 21:48:26 +0200
Subject: [PVE-User] Not able to start the VM
In-Reply-To: 
References: 
Message-ID: 

Hello,

Could you let me know the steps to get the file, please?

Mit freundlichen Grüßen / Best regards / Cordialement,

Sivakumar SARAVANAN

On Thu, Apr 23, 2020 at 9:42 PM Gilberto Nunes wrote:

> Please give us the VM config file, located in /etc/pve/qemu-server.
> [...]
From gilberto.nunes32 at gmail.com  Thu Apr 23 21:57:27 2020
From: gilberto.nunes32 at gmail.com (Gilberto Nunes)
Date: Thu, 23 Apr 2020 16:57:27 -0300
Subject: [PVE-User] Not able to start the VM
In-Reply-To: 
References: 
Message-ID: 

On Windows, use WinSCP to open an SSH session to the Proxmox server.
On Linux, use the sftp command-line client: cd to /etc/pve/qemu-server,
then get VMID.conf to save the file to your computer.

---
Gilberto Nunes Ferreira

On Thu, Apr 23, 2020 at 16:48, Sivakumar SARAVANAN <
sivakumar.saravanan.jv.ext at valeo-siemens.com> wrote:

> Hello,
>
> Could you let me know the steps to get the file, please?
> [...]
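A session following that advice might look like the sketch below; the
host name pve-host and the VMID 123 are placeholders. As an
alternative, `qm config` on the Proxmox node prints the same
configuration without copying the file:

  # From a Linux workstation:
  sftp root@pve-host
  sftp> cd /etc/pve/qemu-server
  sftp> get 123.conf
  sftp> quit

  # Or directly on the Proxmox node:
  qm config 123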
From sivakumar.saravanan.jv.ext at valeo-siemens.com  Thu Apr 23 22:22:31 2020
From: sivakumar.saravanan.jv.ext at valeo-siemens.com (Sivakumar SARAVANAN)
Date: Thu, 23 Apr 2020 22:22:31 +0200
Subject: [PVE-User] Not able to start the VM
In-Reply-To: 
References: 
Message-ID: 

Hello Gilberto,

Thank you. Please find attached the file.

Mit freundlichen Grüßen / Best regards / Cordialement,

Sivakumar SARAVANAN

On Thu, Apr 23, 2020 at 9:59 PM Gilberto Nunes wrote:

> On Windows, use WinSCP to open an SSH session to the Proxmox server.
> On Linux, use the sftp command-line client: cd to /etc/pve/qemu-server,
> then get VMID.conf to save the file to your computer.
> [...]
From luiscoralle at fi.uncoma.edu.ar  Fri Apr 24 02:30:37 2020
From: luiscoralle at fi.uncoma.edu.ar (Luis G. Coralle)
Date: Thu, 23 Apr 2020 21:30:37 -0300
Subject: [PVE-User] Not able to start the VM
In-Reply-To: 
References: 
Message-ID: 

I think you have too many VMs started; try stopping some of them.

On Thu, Apr 23, 2020 at 17:22, Sivakumar SARAVANAN (
sivakumar.saravanan.jv.ext at valeo-siemens.com) wrote:

> Hello Gilberto,
>
> Thank you. Please find attached the file.
> [...]

--
Luis G. Coralle
Secretaría de TIC
Facultad de Informática
Universidad Nacional del Comahue
(+54) 299-4490300 Int 647
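If that hypothesis is right, the host's hugepage pool is simply
exhausted by the VMs already running. A sketch of how one might check,
using standard tools (with 2 MiB pages, a VM with 4096 MB of RAM needs
2048 free pages to start):

  # Pool totals: allocated, free, and reserved-but-not-yet-used pages.
  grep -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo

  # Which VMs are running, and with how much memory each?
  qm list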