[PVE-User] Proxmox VE 6.1 released!

Uwe Sauter uwe.sauter.de at gmail.com
Thu Dec 5 08:47:49 CET 2019


On 05.12.19 at 07:58, Thomas Lamprecht wrote:
> Hi,
> 
> On 12/4/19 11:17 PM, Uwe Sauter wrote:
>> Hi,
>>
>> I upgraded a cluster of three servers to 6.1 and am currently rebooting them one after the other.
>>
> 
> Upgrade from 5.4 to 6.1, or from 6.0 to 6.1?

6.0 to 6.1

> 
>> When trying to migrate VMs to a host that has already been rebooted, I get the following in the task viewer window in the web UI:
>>
>> Check VM 109: precondition check passed
>> Migrating VM 109
>> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441.
>> trying to acquire lock...
>>  OK
>> Check VM 200: precondition check passed
>> Migrating VM 200
>> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441.
>> Check VM 203: precondition check passed
>> Migrating VM 203
>> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441.
>> Check VM 204: precondition check passed
>> Migrating VM 204
>> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441.
>> Check VM 205: precondition check passed
>> Migrating VM 205
>> Use of uninitialized value $val in pattern match (m//) at /usr/share/perl5/PVE/RESTHandler.pm line 441.
>> All jobs finished, used 5 workers in total.
>> TASK OK
>>
>>
>> Hope this is just cosmetic…
>>
> 
> It is, but I'm wondering why you get this. Was the migration started
> normally through the web interface?


I selected the server on the left, opened Bulk Actions > Migrate for all running VMs, chose the target host and started the migration.
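
For reference, roughly the CLI equivalent of that bulk action (a minimal sketch of my own, not what the GUI runs internally; "targetnode" is a placeholder):

    # Online-migrate every running VM on this node to "targetnode" (placeholder).
    # "qm list" prints a header line, then one "VMID NAME STATUS ..." row per VM.
    for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
        qm migrate "$vmid" targetnode --online
    done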

Regards,

	Uwe
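
P.S.: The warning itself looks like Perl's generic uninitialized-value complaint, which any undef reaching a regex match reproduces, e.g.:

    # Illustration only, not the actual RESTHandler.pm code path:
    perl -we 'my $val; $val =~ m/x/;'
    # prints: Use of uninitialized value $val in pattern match (m//) at -e line 1.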


> 
> regards,
> Thomas
> 
>>
>> Regards,
>>
>>     Uwe
>>
>>
>>
>> On 04.12.19 at 10:38, Martin Maurer wrote:
>>> Hi all,
>>>
>>> We are very excited to announce the general availability of Proxmox VE 6.1.
>>>
>>> It is built on Debian Buster 10.2 and a specially modified Linux Kernel 5.3, QEMU 4.1.1, LXC 3.2, ZFS 0.8.2, Ceph 14.2.4.1 (Nautilus), Corosync 3.0, and more of the current leading open-source virtualization technologies.
>>>
>>> This release brings new configuration options to the GUI that make working with Proxmox VE even more comfortable and secure. The cluster-wide bandwidth limit for traffic types such as migration, backup-restore, and clone can now be edited via the GUI. If the optional ifupdown2 network interface manager package is installed, the network configuration can be changed and reloaded from the Proxmox web interface without a reboot. Two-factor authentication with TOTP and U2F has also been improved.
>>>
>>> The HA stack has been improved and comes with a new 'migrate' shutdown policy, which migrates running services to another node on shutdown.
>>>
>>> In the storage backend, all features offered by newer kernels with Ceph and KRBD are supported with version 6.1.
>>>
>>> There are some notable bug fixes, among them a fix for the QEMU monitor timeout issue and stability improvements for corosync. Countless other bug fixes and smaller improvements are listed in the release notes.
>>>
>>> Release notes
>>> https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_6.1
>>>
>>> Video intro
>>> https://www.proxmox.com/en/training/video-tutorials/item/what-s-new-in-proxmox-ve-6-1
>>>
>>> Download
>>> https://www.proxmox.com/en/downloads
>>> Alternate ISO download:
>>> http://download.proxmox.com/iso/
>>>
>>> Documentation
>>> https://pve.proxmox.com/pve-docs/
>>>
>>> Community Forum
>>> https://forum.proxmox.com
>>>
>>> Source Code
>>> https://git.proxmox.com
>>>
>>> Bugtracker
>>> https://bugzilla.proxmox.com
>>>
>>> FAQ
>>> Q: Can I dist-upgrade Proxmox VE 6.0 to 6.1 with apt?
>>> A: Yes, either via the GUI or via the CLI with apt update && apt dist-upgrade
>>>
>>> Q: Can I install Proxmox VE 6.1 on top of Debian Buster?
>>> A: Yes, see https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
>>>
>>> Q: Can I upgrade my Proxmox VE 5.4 cluster with Ceph Luminous to 6.x and higher with Ceph Nautilus?
>>> A: This is a two-step process. First, upgrade Proxmox VE from 5.4 to 6.0, and afterwards upgrade Ceph from Luminous to Nautilus. There are a lot of improvements and changes, so please follow the upgrade documentation exactly:
>>> https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0
>>> https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus
>>>
>>> Q: Where can I get more information about future feature updates?
>>> A: Check our roadmap, forum, mailing list and subscribe to our newsletter.
>>>
>>> A big THANK YOU to our active community for all your feedback, testing, bug reports and patch submissions!
>>>
>> _______________________________________________
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user