[pve-devel] [PATCH] add ide-cd, ide-hd, scsi-cd, scsi-hd, scsi-block to device syntax
Alexandre DERUMIER
aderumier at odiso.com
Wed Dec 7 12:33:21 CET 2011
> Don't know, I haven't tested this new feature yet :). I don't know if it works
> with iSCSI block devices or only with local SCSI devices.
>
> So, I'm waiting for your qemu 1.0 package ;)
>>just committed pve-qemu-kvm and new qemu-server packages - please test.
Thanks!
> There are some cool new features I want to test, like the built-in iSCSI initiator (but no
> multipath yet). So no more need to use the initiator on the host. (I'm playing with 400
> LUNs; it begins to take a very long time when the host boots.)
>
> something like this:
> qemu -drive file=iscsi://10.0.0.1/iqn.qemu.test/1
>
> I think it will be the best way to access iSCSI devices in the future.
>>why? Do you expect better performance?
Yes, I read on the qemu mailing list that performance is already faster (20% faster).
> But there is a drawback with the current config syntax: if we don't have the initiator on
> the host side, we can't list iSCSI devices, and so we can't get the disk id.
>
> So maybe we can change the config syntax, for example from:
>
> virtio0: nexenta:0.0.1.scsi-3600144f0f62f0e0000004cd953550008,cache=none
>
> to
>
> virtio0: nexenta:1,cache=none (just put the lunid)
>>If you define a storage, that storage is always activated on the host.
I was thinking of using a syntax that works for both cases, in case in the future we don't need to have the storage activated on the host.
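For example, a rough sketch of how "virtio0: nexenta:1" could be resolved in both cases (portal and IQN below are made-up values, not my real config):

    # host initiator: search the device path for the LUN id under /dev/disk/by-path, e.g.
    /dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2010-08.org.nexenta:target0-lun-1

    # built-in initiator: pass the LUN id directly in the iscsi:// URL
    -drive file=iscsi://10.0.0.1/iqn.2010-08.org.nexenta:target0/1,if=virtio,cache=none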
> Then:
> - If we use the host initiator, on VM start, just search for the iSCSI device path.
> - If we use the built-in initiator, just pass the lunid as a parameter.
>
>
> Also, it would fix a problem with my Nexenta SAN: when I unmap/remap a disk
> on a LUN (when I roll back a snapshot, for example),
> the disk id changes, so I need to re-edit the VM config to change the disk id.
> Don't know about other iSCSI SANs.
>
> So I think finding the path from the lunid at VM start is the best way.
>
> What do you think about it ?
>>I think you should use LVM on top of iSCSI instead. Why do you try to avoid that?
I don't use LVM, because I need to have one LUN per disk:
- I need to manage disk snapshots/clones on my SAN.
- I also need to tune disks (LUNs): cluster size, writeback, ... depending on the workload of my VMs.
- I need to mount one disk on two VMs (a cluster with OCFS2).
- Also, with the future virtio-scsi device, it will be possible to do iSCSI passthrough from the guest directly to the SAN. (I've seen benchmarks of 40000 io/s on the dev patches.)
So LVM is not an option ;)
-Alexandre