[PVE-User] Adding VEs and containers to 7.4

Uwe Sauter uwe.sauter.de at gmail.com
Sun Oct 29 09:43:36 CET 2023


First of all: please do not reply to just me; address your answers to the list as well, so that other folks can help
you too.

Second: it is usually good habit to write answers or explanations below the section they refer to. It makes life
much easier for everyone following along when they don't need to scroll up and down just to understand what you are talking about.

Third: always ask yourself what a reader without your situation-specific knowledge would know about your
setup. If the special things don't get mentioned, then no one but you will know them. Better to explain something that
seems trivial to you than to let others (falsely) assume things which might lead in directions that are orthogonal to
your issue.

Those three things will get you more success in your quest to get help from others.


On 29.10.23 at 01:58, Oboe Lova wrote:
> The only laptop in my installation is the one running the web gui to the Dell xps8500 tower that hosts proxmox.  The   
> Lenovo W510 laptop is an i7 Q720 4 core/8 thread running Debian 12 with latest updates.   I don’t understand how it 
> could affect the installation since it is simply a console running html to configure pve.home.  And to use the noVNC 
> console screen which always stayed running even though the VM froze.  Is there no detection reported to the hypervisor 
> when the VM quits?  Is that why the vm shows in the gui as still running even though the vm is unresponsive to the 
> console, sometimes reporting a communication issue?

So, this tells me that your setup wasn't explained in enough detail before.
Please give a full overview of your setup and its intended function.

If you refer to the installation of PVE, then your Lenovo laptop should have no role whatsoever, because you have display,
mouse and keyboard attached to your Dell tower.

If you refer to the installation of VMs, then your Lenovo laptop will just use the browser to display the graphics output
of your VMs. As long as your VM doesn't have a serial console configured, simply clicking on "Console" and selecting
noVNC from the "Console" dropdown menu are equivalent and the way to go. (Configuring a VM with a serial console and
using that as another channel to get access to the VM is an advanced topic that I don't want to delve into right now.)

> The Dell tower has an Intel  i7  3770 cpu with 4 cores/8 threads running 3.4 GHz per core.  Yes, it is Ivy Bridge  My 
> research online and the virtualization settings in the bios led me to believe it conformed to the required Intel spec.  
> Web pages with specs say all intel chips starting 2006 have “EM64T” and others talk about EMT64 being borrowed from 
> AMD.  BIOS  shows Intel Hyperthreading enabled, Intel Speedstep enabled, Intel Virtualization Technology enabled, CPU XD 
> Support disabled, Limit CPUID disabled, secure boot disabled.

Small example: your Ivy Bridge CPU has the AVX instruction set. The next generation of Intel CPUs (Haswell) gained the
AVX2 instruction set. If you now configure your VM to use a virtual CPU that also advertises AVX2 support although your
Ivy Bridge doesn't have it, this will lead to things like hanging or crashing VMs, because the software inside gets the
wrong answer to the question "what capabilities does the CPU provide?"

So on older hardware it is crucial to select the correct CPU model for the VM, because the default might assume too much.
The easiest option for you is to select CPU type "host" when creating the VM.
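
The same can be done for an already existing VM from the shell (a sketch; 100 is a hypothetical VM ID):

   # Make the virtual CPU mirror the host CPU, including all its flags:
   qm set 100 --cpu host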

> Yes, libvirt is what I meant.
> 
> Should I enable QEMU in the create VM checkbox?

Again I'm unsure what you are talking about. If you create a VM in PVE you will be using QEMU. Do you mean the "QEMU
Guest Agent" check box? If so, I'd recommend it, because it gives QEMU a communication channel into the VM for things
like reading the configured IP addresses (so they can be shown in the VM overview in the WebUI) or instructing the VM to
flush all cached data to disk just before a backup is taken.
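
The agent has to be enabled on the PVE side and installed inside the guest (a sketch; 100 is a hypothetical VM ID):

   # On the PVE host: enable the guest agent option for the VM
   qm set 100 --agent enabled=1

   # Inside a Debian/Ubuntu guest: install and start the agent
   apt install qemu-guest-agent
   systemctl enable --now qemu-guest-agent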

> But if you are certain Ivy Bridge does not conform then game over.

I am very certain that Ivy Bridge is good enough. I have run multiple PVE clusters on Sandy Bridge (one generation 
older) hardware for a long time.

> Instead I will attempt the IOT stack directly on Debian
> 12 or 11. Spiess's project allows him to emulate Pi boards using IOTstack on a Debian VM under Proxmox 7.2.  All I care
> about is IOTstack and open mqtt talking with ESP8266 boards over wifi.  But maybe it is worth one more try on 8.x.
> I tried Ventoy on a 250 GB USB backup device aka Seagate FreeAgent but it did not install, probably because it is a real
> hdd inside.

This has nothing to do with whether there is a spinning HDD or a flash SSD inside the USB case. If the installation of
Ventoy fails, there is another issue at hand.
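
For reference, on Linux Ventoy is installed with the script from its release tarball (a sketch; /dev/sdX is a
hypothetical device name, double-check it with "lsblk" first because this wipes the device):

   # Run from the extracted Ventoy release directory:
   sudo sh Ventoy2Disk.sh -i /dev/sdX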

> I simply burned the DVD since I had some blanks available and xfburn has always worked.  People keep giving
> me perfectly good computers so I have many duplicate HDDs to play with RAID, but that is just a toy to try for faster
> reads with zfs (per wiki).  I can always do a poor man's on-demand NAS by simply mounting an nfs share to another linux
> machine.  My clonezilla backups will go faster then.
> 
> Yes all hdds are recognized in BIOS and show as icons in the web gui.  I remove old partitions and tables then create a 
> new table and create an ext 4 partition.

And here's one of the issues: ZFS needs empty disks (or at least an empty partition). You cannot put ZFS on top of an
existing filesystem.

ZFS combines disk management, RAID management and volume/filesystem management into one. For that it needs empty disks…
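
Creating a pool from two empty disks then looks like this (a sketch; "tank", /dev/sdb and /dev/sdc are hypothetical
names, and the command destroys whatever is on those disks):

   # Create a mirrored pool from two empty disks and check its health:
   zpool create tank mirror /dev/sdb /dev/sdc
   zpool status tank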


> Maybe that is why all
> I see offered as disk options is /dev/sda as ext4.  But why does fstab not show /dev/sda, /dev/sdb?  Security feature?

Just because some partitions have a filesystem on top doesn't mean that they need to be configured to be mounted.
Indeed, on my personal PVE that uses ZFS there is only one entry in /etc/fstab, and that is for /proc.
And then there's the fact that systemd has its own mechanism to handle mountpoints…
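
ZFS keeps track of its own mountpoints, so they never appear in /etc/fstab (a sketch):

   # Show ZFS datasets and where ZFS mounts them:
   zfs list -o name,mountpoint,mounted

   # Mounts that systemd handles through units instead of fstab:
   systemctl list-units --type=mount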

> FWIW: for some strange reason dd is not included in Debian 12 installs.  Can't find it in apt or software repositories
> that are included with the distro.  Typically apt suggests a newer substitute when appropriate, but not this time.

I've just checked and I'm baffled by this claim: dd is included just fine, as /usr/bin/dd. It is part of the essential
"coreutils" package, which is why apt shows no separate "dd" package.
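
You can verify this on any Debian system (a sketch):

   # Where is dd, and does coreutils ship it?
   command -v dd
   dpkg -L coreutils | grep '/dd$'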


Regards,

	Uwe


> On Sat, Oct 28, 2023 at 3:05 PM Uwe Sauter <uwe.sauter.de at gmail.com> wrote:
> 
> 
> 
>     On 28.10.23 at 22:15, Oboe Lova wrote:
>      > Thanks Uwe,
>      >
>      > I am glad but surprised to hear from anyone on the list because my post was rejected by an auto-email from gitlab
>      > saying it could not handle my message because "We could not tell what it was for.  Please use the web interface
>      > or post an issue."  Do you know precisely what that might mean?  I could not sign up for the forum without a
>      > subscription.
> 
>     I think someone subscribed to this list with an email address that points to his/her Gitlab instance. When the
>     list then tried to deliver my answer to said address, it caused the error message.
> 
>      >   Anyway, I did know about 8.0-2 in the repository, which I burnt to a DVD  and tried first.
> 
>     Depending on your hardware you will have more fun using a USB stick onto which you put the ISO using the "dd" command.
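>
>     A sketch of what that looks like (the ISO file name and /dev/sdX are hypothetical; verify the device with "lsblk"
>     first, because this overwrites the whole stick):
>
>        # Write the installer ISO directly to the USB stick:
>        dd if=proxmox-ve_8.0-2.iso of=/dev/sdX bs=4M status=progress conv=fsync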
> 
>      > But that quickly got me
>      > out into the weeds. So I have not yet gone back to 8.x to see if what I have learned so far will give me better
>      > results.  Are you saying that I will have a chance to start the zfs install from the initial proxmox install
>      > with 8.x?  If so, does zfs configuration (including the 3 disks) come later with the GUI or command line?  Does
>      > zfs raid come later with the GUI or command line?  Can I add a third disk formatted with zfs later as the wiki
>      > says, and what are the steps?
> 
>     In all of my installations of PVE (I started with PVE 6.x) I had no trouble selecting ZFS during installation.
>     This picture (https://pve.proxmox.com/pve-docs/images/screenshot/pve-select-target-disk.png) shows how the
>     installer should look. If this is not the case, then your hardware might lack something…
>     Are all disks recognized in the BIOS or when booting a Debian live system? If so, wipe them clean of all former
>     filesystem and partition signatures using the "wipefs" tool.
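>
>     For example (a sketch; /dev/sdb is a hypothetical device name, and the second command destroys all data on it):
>
>        # List the signatures that are present, then wipe them all:
>        wipefs /dev/sdb
>        wipefs --all /dev/sdb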
> 
>      > I was trying to follow Andreas Spiess's (aka the guy with the Swiss accent) youtube video on setting up a home
>      > lab.  He was using 7.2 and the vid was recent.  Most of my issues went away when I tried 7.4.  What I did not
>      > know at the time is that either version requires UEFI boot or HDD icons are not shown on the web gui; only the
>      > 3 working partitions show up on the boot HDD.  I examined and deleted them on the /dev/sda HDD to start over on
>      > a clean system many times.  Just yesterday I found some documentation that said GRUB boots give rise to disk
>      > mounting problems.  That made sense so I tried UEFI boot and the HDD icons showed up on the server view.
>      > I did navigate to the HELP page but only simple instructions were given and I could not establish a running VM
>      > by simply following those HELP steps.  Through experimentation I got two 500 GB HDDs to show up as icons (pool
>      > candidates?) in the gui.  I also got an iso image of bookworm to upload to VM storage from my laptop DVD drive
>      > to somewhere on pve.home.  What I don't understand now is how to get the image to completely install after
>      > launching with the START command.  I did not enable start on boot so I could reboot the node and crash out of
>      > the VM.  I did so because VM hangs do not respond to STOP, HARD STOP, etc.  With the VM stopped on node reboot
>      > I could at least remove it and start over.
> 
>     Given the age of your hardware (a quick search revealed that the machine you are using has an Ivy Bridge
>     generation Intel CPU, which is now 11 years old) I suspect that your troubles come from the CPU that is selected
>     for your VM. The Ivy Bridge chip probably does not have all features that the default emulated CPU
>     ("x86-64-v2-AES") will propagate to the VM.
>     When creating the VM configuration, try to select CPU type "host" at the bottom of the list.
> 
>      > One question I have is why QEMU is optional.
> 
>     I'm not quite certain what you mean. QEMU started as a project that emulated certain CPUs in software. When hardware
>     began to support virtualization out of the box the kernel-based virtual machine (KVM) allowed better performance and
>     QEMU adopted the usage of KVM where possible.
> 
>     So when you nowadays run an x86-64 VM on an x86-64 CPU you will use the hardware virtualization features, as long
>     as they are not disabled in the BIOS. But if you run an ARM VM on an x86-64 CPU, QEMU will still emulate the VM's
>     CPU architecture in software.
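>
>     You can check on the host whether hardware virtualization is exposed and the KVM modules are loaded (a sketch):
>
>        # A count > 0 means VT-x (vmx) or AMD-V (svm) is advertised by the CPU:
>        grep -c -E '(vmx|svm)' /proc/cpuinfo
>        # kvm_intel or kvm_amd should appear here:
>        lsmod | grep kvm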
> 
>     Do you actually mean libvirt? Libvirt is a project on top of different virtualization technologies and container
>     runtimes that allows you to save a VM's configuration to a file. Libvirt will use QEMU and KVM under the hood.
>     But in PVE libvirt is not used, because the devs from Proxmox decided to talk directly to QEMU using their own framework.
> 
>      > Isn't it needed for KVM or does the other gui method take over?  I am
>      > referring to virtXXX.man which I previously tried on a direct-to-Debian QEMU-KVM install attempt.  When trying
>      > to prep for Home Assistant direct to Debian Bookworm (as a direct QEMU-KVM install) I downloaded the QCOW2 file
>      > and tried QEMU to prepare it for upload to my pve.  QEMU converted it to an img file which was not recognized
>      > by Debian or Proxmox.  I tried to do an import to disk from the node DVD drive but the gui would not recognize
>      > it; maybe from issues with grub boot.
> 
>     If you download VM images in the qcow2 format you will need to convert these images into the raw image file format.
>     See "man qemu-img". The raw image can then be used as the virtual HDD for the VM.
> 
>      > Are you saying that I will have a chance to start the zfs (instead of ext 4?) install to boot disk from the
>      > initial proxmox install with 8.x?  If so, does zfs configuration come later with the GUI or command line?  Does
>      > zfs raid come later with the GUI or command line?  Do I add an optional third or fourth disk to the zfs pool
>      > later as the wiki says, or install the HDD before the initial node installation?
> 
>     Before going further down the debugging road I need you to familiarize yourself with the concepts of ZFS.
>     Even if you succeed later on in getting more disks recognized, you will be limited in your possibilities compared
>     to when all disks are recognized during installation. (E.g. you cannot convert a ZFS RAID 1 pool into a ZFS RAIDZ2
>     pool.)
> 
>     My recommendation is: before trying to install VMs, make sure that your host system is running the way you want
>     it to run.
> 
>     Regards,
> 
>              Uwe
> 
>      > Maybe one more try with 8.0-2 will be more straightforward?  Which log files will reveal problems, and where
>      > will they be stored?
>      >
>      > Armed with the above info I ought to make better progress.  Your help is much appreciated.
>      >
>      > Many thanks,
>      >
>      > Chuck in Libby, MT USA
>      >
>      >
>      >
>      > On Sat, Oct 28, 2023 at 12:49 PM Uwe Sauter <uwe.sauter.de at gmail.com> wrote:
>      >
>      >     Hi,
>      >
>      >     don't know if you are aware that PVE 8.0 was released back in June?
>      >
>      >     Also, there generally is a help button in the various menus and a documentation button to the left of the
>      >     "create vm" and "create ct" buttons.
>      >
>      >     Regarding when to create a ZFS pool: usually you can do that during installation of PVE. You need to
>      >     change the filesystem and the disks used by the installer. If that is not the case with your setup, there
>      >     seems to be something going wrong.
>      >     If you'd like to create a ZFS pool on an installed system, go to "datacenter -> your server", then select
>      >     "disks -> ZFS".
>      >     You should be able to create a new pool **if** you have unused disks in your system.
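>      >
>      >     To spot the unused disks, this helps (a sketch):
>      >
>      >        # FSTYPE and MOUNTPOINT should be empty for a disk ZFS can use:
>      >        lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT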
>      >
>      >
>      >     Regards,
>      >
>      >              Uwe
>      >
>      >     On 28.10.23 at 18:13, Oboe Lova wrote:
>      >      > Greetings to listers,
>      >      >
>      >      > I have installed 7.4 VE and am finding no success in installing guest
>      >      > images to what looks like a valid install, using the web gui at port 8006.
>      >      > Specifically, the best I can do is create a vm using  a windows 7 dvd in
>      >      > the node dvd drive and start an install session.  That runs for a while
>      >      > then stalls while I watch in a console.  After that, using console and other
>      >      > ways to stop, reset, etc. the vm so I can remove it are ignored, though the
>      >      > gui is still up and not frozen. Similar symptoms trying various Linux
>      >      > distro dvds, from either iso image or burned install disks.  I also fumbled
>      >      > around until I managed to upload an iso from laptop to a second internal
>      >      > hdd disk but I can’t find a way to load it to a new vm.
>      >      >
>      >      > Goal is homelab and a separate bookworm as VMs.   So what I obviously need
>      >      > is docs on definitions and caveats for each gui option in the web gui.
>      >      > Examples: how to create a vm from qcow2 image. Functionally what does  QEMU
>      >      > checkbox do since I get console either way?  I expect command line
>      >      > maneuvers will be required.
>      >      >
>      >      > I have read the current wiki and tried help screens but haven't found
>      >      > anything that gives me a detailed recipe.  I would also like to use a
>      >      > three x 500 GB disk zfs raid but can't find when or where I do the zfs
>      >      > setup.  No install option on install except ext 4 partitions.  Dell XPS
>      >      > 8500 i7 16 GB ram 4 physical / 8 threads.  Allocating 2 cores with
>      >      > default lvm, 2048 MB memory per vm.
>      >      >
>      >      > Tnx in advance
>      >      > _______________________________________________
>      >      > pve-user mailing list
>      >      > pve-user at lists.proxmox.com
>      >      > https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>      >
> 


