[PVE-User] VIOSCSI, WSFC, and S2D woes - Solved! & RFE

Dominik Csapak d.csapak at proxmox.com
Wed Feb 13 09:57:46 CET 2019


On 2/12/19 5:45 PM, Edwin Pers wrote:
> Tried that this morning, no luck. I'm not having much luck finding anything about page 83h in vioscsi either, but I did find a few things:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg04999.html
> https://marc.info/?l=qemu-devel&m=146296689703152&w=2
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VOYDNZT5UK3773GW2GU6DFJND4RQPZCO/
> 
> Most of that relates to pointing vioscsi at a remote iSCSI target, though, rather than my use case of a local disk image/block device.
> 
> Later - I got it working!
> I had to specify the wwn and serial parameters on the -device argument, like so:
> 
> -drive file=/mnt/pve/cc1-sn1/images/5007/vm-5007-disk-1.raw,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on\
> -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,wwn=0x5000c50015ea71ad,serial=yCCfBgH1\   <- note the wwn= and serial= parameters
> 
> Currently testing this out by running the VM manually; it looks like I'll have to add this disk via the args: entry in <vmid>.conf, which is acceptable.
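For reference, a sketch of what the corresponding args: entry in /etc/pve/qemu-server/<vmid>.conf might look like (the drive options, IDs, and wwn/serial values are taken from the command above; the single-line args: form is an assumption, not a tested config):

```shell
# /etc/pve/qemu-server/5007.conf (sketch -- args: must be one line)
args: -drive file=/mnt/pve/cc1-sn1/images/5007/vm-5007-disk-1.raw,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on -device scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1,wwn=0x5000c50015ea71ad,serial=yCCfBgH1
```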
> 
> I suppose at this point we can call this an RFE to expose the wwn= and serial= parameters in the API in some future version.
> It might be possible to randomly generate wwn/serial entries, but I don't know nearly enough about the SAS protocols to say whether that's a good idea or not.
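On the idea of randomly generating WWNs: the value used above (0x5000c50015ea71ad) is an NAA-5 ("IEEE Registered") identifier, which packs a 4-bit NAA field (0x5), a 24-bit IEEE OUI, and a 36-bit vendor-specific part into 64 bits. A minimal Python sketch of generating one (the OUI below reuses QEMU's 52:54:00 MAC prefix purely as an illustrative placeholder; whether borrowing it for SCSI WWNs is appropriate is exactly the open question raised above):

```python
import random

def random_wwn(oui: int = 0x525400) -> int:
    """Generate a random 64-bit NAA-5 WWN:
    bits 63-60: NAA type (0x5, IEEE Registered)
    bits 59-36: 24-bit IEEE OUI
    bits 35-0:  36-bit vendor-specific identifier
    """
    vendor = random.getrandbits(36)
    return (0x5 << 60) | (oui << 36) | vendor

# qemu's -device accepts the value as a hex number, e.g. wwn=0x5...
print(f"wwn=0x{random_wwn():016x}")
```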

sounds sensible. serial is already exposed (you can set it via the API or 
qm); wwn is not, as far as i can see

can you open an enhancement request for the wwn? 
https://bugzilla.proxmox.com

> 
> Some more references that I found:
> https://lists.wpkg.org/pipermail/stgt/2013-May/018875.html
> https://ipads.se.sjtu.edu.cn:1312/qiuzhe/qemu-official/commit/fd9307912d0a2ffa0310f9e20935d96d5af0a1ca
> https://bugzilla.redhat.com/show_bug.cgi?id=831102 <- this is the one that got me on the right track finally.
> 
> Full kvm command is here for those interested:
> https://gist.github.com/epers/b0340c897c4403ba09b247f2d614b674
> 
> -----Original Message-----
> From: pve-user <pve-user-bounces at pve.proxmox.com> On Behalf Of Dominik Csapak
> Sent: Tuesday, February 12, 2019 3:12 AM
> To: pve-user at pve.proxmox.com
> Subject: Re: [PVE-User] VIOSCSI, WSFC, and S2D woes
> 
> On 2/11/19 9:18 PM, Edwin Pers wrote:
>> Happy Monday all,
>> Trying to get storage spaces direct running. WinSvr2016 guests, PVE 5.2-2, NFS shared storage for the guest disk images.
>> I'm getting an error when running the cluster validator in windows: "The required inquiry data (SCSI page 83h VPD descriptor) was reported as not being supported."
>> As a result, I'm unable to run s2d.
>> It looks like the RHEL guys had to make some changes in vioscsi.sys & qemu:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1219841
>> Have these changes made it into pve? Or am I overlooking something?
>> Any thoughts on this matter are appreciated.
>>
> 
> afaics from the bug report, this should have been fixed since 2016. If their changes made it into upstream qemu (unknown, since they do not disclose what needs to change), our qemu version should include it.
> 
> you can try upgrading to a current version (PVE 5.3, with qemu 2.12.1) and using the most recent virtio drivers
> 
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> 



