[PVE-User] Proxmox questions/features

Alex K rightkicktech at gmail.com
Thu Jul 8 09:54:53 CEST 2021


Checking the qemu process for the specific VM, I get the following:

/usr/bin/kvm -id 100 -name Debian -no-shutdown -chardev
socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon
chardev=qmp,mode=control -chardev
socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5 -mon
chardev=qmp-event,mode=control -pidfile /var/run/qemu-server/100.pid
-daemonize -smbios type=1,uuid=667352ae-6b86-49fc-a892-89b96e97ab8d -smp
1,sockets=1,cores=1,maxcpus=1 -nodefaults -boot
menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg
-vnc unix:/var/run/qemu-server/100.vnc,password -cpu
kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 2048 -device
pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device
pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device
vmgenid,guid=3ce34c1d-ed4f-457d-a569-65f7755f47f1 -device
piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device
usb-tablet,id=tablet,bus=uhci.0,port=1 -device
VGA,id=vga,bus=pci.0,addr=0x2 -chardev
socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0 -device
virtio-serial,id=qga0,bus=pci.0,addr=0x8 -device
virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -object
rng-random,filename=/dev/urandom,id=rng0 -device
virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000,bus=pci.1,addr=0x1d
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi
initiator-name=iqn.1993-08.org.debian:01:094fb2bbde3 -drive
file=/mnt/pve/share/template/iso/debian-10.5.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101
-device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5 *-drive
file=gluster://node0/vms/images/100/vm-100-disk-1.qcow2*,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on
-device
scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100
-netdev
type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
-device
virtio-net-pci,mac=D2:83:A3:B9:77:2C,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102
-machine type=pc+pve0

How can one confirm that *libgfapi* is being used to access the VM disk,
and if it is not being used, how can libgfapi be enabled?
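
From what I understand, a gluster:// URI in the -drive argument (as
highlighted above) means QEMU opens the disk through its native gluster
block driver, i.e. libgfapi, whereas a FUSE-mounted disk would show a
plain path under /mnt/pve instead. A quick check along those lines
(assuming VM ID 100, as above):

ps -ef | grep 'kvm -id 100' | grep -o 'file=gluster://[^,]*'

Is that reasoning correct, or is there a more authoritative way to
verify it?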

Thanx,
Alex


On Thu, Jul 8, 2021 at 10:40 AM Alex K <rightkicktech at gmail.com> wrote:

> Hi all,
>
> Does anyone have any info to share on the below?
> Many thanx
>
> On Tue, Jul 6, 2021 at 12:00 PM Alex K <rightkicktech at gmail.com> wrote:
>
>> Hi all,
>>
>> I've been assessing Proxmox for the last couple of days, coming from
>> previous experience with oVirt. The intent is to switch to this
>> solution if most of the features I need are covered.
>>
>> The questions below may have been asked before, but searching the
>> forum and online I was not able to find any specific reference, or I
>> was not sure whether the feedback was still relevant.
>>
>> - When adding a gluster volume, I see that the UI provides an option
>> for a secondary server. In the case of a 3-replica glusterfs setup,
>> where I need to add two backup servers as below, how can this be
>> defined?
>>
>> backup-volfile-servers=node1,node2
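>>
>> For reference, the UI's secondary-server field seems to map to server2
>> in /etc/pve/storage.cfg, which covers only a single backup volfile
>> server (a sketch of my setup; "vms" and the node names are mine):
>>
>> glusterfs: vms
>>     server node0
>>     server2 node1
>>     volume vms
>>     content images
>>
>> I don't see where a second backup server (node2) would go.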
>>
>>
>> - I've read that qemu uses *libgfapi* when accessing VM disks on
>> glusterfs volumes. Can someone confirm this? I tried to find where the
>> VM configs reference this detail but was not able to do so.
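>>
>> For instance (VM ID 100; the scsi0 line is what I get, with a
>> glusterfs storage named "vms" and an illustrative size):
>>
>> qm config 100 | grep scsi0
>> # scsi0: vms:100/vm-100-disk-1.qcow2,cache=none,size=32G
>>
>> which only references the storage ID and says nothing about libgfapi.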
>>
>> - I have not seen any *load balancing/scheduling* feature, and looking
>> through the forum it seems that this is still missing. Is there any
>> plan to provide such a feature in the future? By load balancing I mean
>> automatically balancing VMs across the available hosts/nodes according
>> to a set policy (CPU load, memory load, or other).
>>
>>
>> Thanx for reading and appreciate any feedback,
>>
>> Alex
>>


