[PVE-User] VIOSCSI, WSFC, and S2D woes - Solved! & RFE

Edwin Pers EPers at ansencorp.com
Tue Feb 12 17:45:07 CET 2019

Tried that this morning; no luck. I haven't found much about page 83h support in vioscsi, but I did turn up a few things:


Most of that relates to pointing vioscsi at a remote iSCSI target, though, rather than my use case of a local disk image/block device.

Later - I got it working!
I had to specify the wwn and serial parameters on the -device option, like so:

-drive file=/mnt/pve/cc1-sn1/images/5007/vm-5007-disk-1.raw,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on\
-device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,wwn=0x5000c50015ea71ad,serial=yCCfBgH1\   <- note the wwn= and serial= parameters

Currently testing this by running the VM manually; it looks like I'll have to add this disk via the args: entry in <vmid>.conf, which is acceptable.
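For reference, a sketch of what that args: entry might look like in <vmid>.conf (this just reuses the ids, path, wwn, and serial from my test run above; adjust for your setup, and note that since the disk comes in via args: it should not also be defined as a regular scsiN: entry, or the ids would collide):

```
args: -drive file=/mnt/pve/cc1-sn1/images/5007/vm-5007-disk-1.raw,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,wwn=0x5000c50015ea71ad,serial=yCCfBgH1
```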

I suppose at this point we can call this an RFE to expose the wwn= and serial= parameters in the API in some future version.
It might be possible to randomly generate wwn/serial entries, but I don't know nearly enough about the SAS protocols to say whether that's a good idea.
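For what it's worth, a minimal sketch of how random values could be generated (an assumption on my part: a random NAA-5 identifier with the leading 0x5 nibble is enough to satisfy the page 83h inquiry in the guest, since the IEEE-registered OUI bits only matter for global uniqueness, which random values can't guarantee):

```python
import secrets

def random_wwn() -> str:
    # NAA-5 identifier: a 4-bit NAA field (0x5) followed by 60 bits.
    # A properly unique WWN would embed a registered IEEE OUI in the
    # top 24 of those bits; here we simply randomize all 60.
    return "0x5{:015x}".format(secrets.randbits(60))

def random_serial(length: int = 8) -> str:
    # QEMU passes the serial string through to the guest, so keep it
    # short and alphanumeric like a typical physical disk serial.
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

These would then be dropped into the wwn= and serial= fields of the -device argument shown above.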

Some more references that I found:
https://bugzilla.redhat.com/show_bug.cgi?id=831102 <- this is the one that got me on the right track finally.

Full kvm command is here for those interested:

-----Original Message-----
From: pve-user <pve-user-bounces at pve.proxmox.com> On Behalf Of Dominik Csapak
Sent: Tuesday, February 12, 2019 3:12 AM
To: pve-user at pve.proxmox.com
Subject: Re: [PVE-User] VIOSCSI, WSFC, and S2D woes

On 2/11/19 9:18 PM, Edwin Pers wrote:
> Happy Monday all,
> Trying to get storage spaces direct running. WinSvr2016 guests, PVE 5.2-2, NFS shared storage for the guest disk images.
> I'm getting an error when running the cluster validator in windows: "The required inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported."
> As a result, I'm unable to run s2d.
> It looks like the RHEL guys had to make some changes in vioscsi.sys & qemu:
> https://bugzilla.redhat.com/show_bug.cgi?id=1219841
> Have these changes made it into pve? Or am I overlooking something?
> Any thoughts on this matter are appreciated.

afaics from the bug report, this should have been fixed back in 2016. If their changes made it into upstream qemu (unknown, since they do not disclose what needed to change), our qemu version should include them.

you can try upgrading to a current version (PVE 5.3, with qemu 2.12.1) and using the most recent virtio drivers

pve-user mailing list
pve-user at pve.proxmox.com
