[PVE-User] Fwd: Snapshot
Gilberto Nunes
gilberto.nunes32 at gmail.com
Fri Jul 11 23:50:05 CEST 2014
GlusterFS is on the list...
Right now I am just playing with Ceph...
And I wonder whether Ceph has some minimum disk size...
I tried to create an OSD on a 2 GB disk and it doesn't work:
ceph-disk -v prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid
9171be6a-6a47-4f39-bf8d-0442c75e4bdd /dev/sdc
DEBUG:ceph-disk:Zapping partition table on /dev/sdc
Creating new GPT entries.
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
The operation has completed successfully.
INFO:ceph-disk:Will colocate journal with data on /dev/sdc
DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdc
Information: Moved requested sector from 34 to 2048 in
order to align on 2048-sector boundaries.
Could not create partition 2 from 34 to 10485760
Unable to set partition 2's name to 'ceph journal'!
Could not change partition 2's type code to
45b0969e-9b03-4f30-b4c6-b4b80ceff106!
Error encountered; not saving changes.
ceph-disk: Error: Command '['/sbin/sgdisk', '--new=2:0:5120M',
'--change-name=2:ceph journal',
'--partition-guid=2:5e00a733-96bf-4a23-a64f-29b348b0e04e',
'--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--',
'/dev/sdc']' returned non-zero exit status 4
I think I need more than 2 GB....
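That matches the numbers in the error: ceph-disk sizes the colocated journal from the `osd journal size` setting, whose default at the time was 5120 MB, and 5120 MB alone is already larger than the whole 2 GB disk. A quick sanity check of the sector math (assuming 512-byte sectors):

```shell
#!/bin/sh
# The sgdisk error tried to create partition 2 from sector 34 to 10485760.
# That upper bound is exactly the 5120 MB journal expressed in 512-byte sectors:
journal_mb=5120
journal_sectors=$((journal_mb * 1024 * 1024 / 512))
echo "journal needs $journal_sectors sectors"    # 10485760

# A 2 GB disk only has roughly this many sectors, so the journal cannot fit:
disk_sectors=$((2 * 1000 * 1000 * 1000 / 512))
echo "2 GB disk has about $disk_sectors sectors" # 3906250
```

So either a larger disk, or a smaller journal (for example `osd journal size = 1024` under `[osd]` in ceph.conf before re-running ceph-disk; 1024 is just an illustrative value) should get past this particular error.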
2014-07-11 18:45 GMT-03:00 admin-at-extremeshok-dot-com <
admin at extremeshok.com>:
> have you tried glusterfs ?
>
>
> On 7/11/2014 11:43 PM, Adam Thompson wrote:
>
> Just beware: it's much slower than sheepdog. Should be more or less
> comparable to NFS or iSCSI. Switching to write-back mode helps until you
> have an extended power outage :-(.
> -Adam
>
> On July 11, 2014 4:08:03 PM CDT, Gilberto Nunes
> <gilberto.nunes32 at gmail.com> wrote:
>>
>> Guys
>>
>> After losing some hair here, I think I have found a better solution to
>> all my problems: Ceph Storage...
>>
>> I am studying it and I think it is a wonderful storage system...
>>
>>
>>
>> 2014-07-10 16:53 GMT-03:00 Michael Rasmussen <mir at miras.org>:
>>
>>> On Thu, 10 Jul 2014 16:33:04 -0300
>>> Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
>>>
>>> > Good...
>>> >
>>> > But I ask for suggestions, because I have a customer that has an IBM
>>> > StorWize V3700, and I wonder if this storage works with Live Snapshot...
>>> >
>>> I have no experience using Proxmox with storage on a SAN.
>>>
>>> I guess it involves multipath iSCSI and some way of formatting the
>>> LUNs to support live migration, live snapshots, etc.
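The setup hinted at above might look something like the following in `/etc/pve/storage.cfg` terms. This is only a hypothetical sketch: the storage names, portal address, target IQN, and volume group name are all made up, and it assumes the SAN LUN is exported over iSCSI and carved into an LVM volume group.

```
# Hypothetical sketch -- all names and addresses are placeholders.
iscsi: v3700
        portal 10.0.0.50
        target iqn.1986-03.com.ibm:2145.v3700.node1
        content none

# Shared LVM on top of the LUN allows live migration between nodes.
# Note that plain LVM volumes do not support live snapshots; that needs
# qcow2 on a file-based storage, Ceph RBD, or similar.
lvm: v3700-lvm
        vgname vg_v3700
        shared 1
        content images
```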
>>>
>>> --
>>> Hilsen/Regards
>>> Michael Rasmussen
>>>
>>> Get my public GnuPG keys:
>>> michael <at> rasmussen <dot> cc
>>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
>>> mir <at> datanom <dot> net
>>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
>>> mir <at> miras <dot> org
>>> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
>>> --------------------------------------------------------------
>>> /usr/games/fortune -es says:
>>> Option Paralysis:
>>> The tendency, when given unlimited choices, to make none.
>>> -- Douglas Coupland, "Generation X: Tales for an
>>> Accelerated Culture"
>>>
>>
>>
>>
>> --
>> Gilberto Ferreira
>>
>> ------------------------------
>>
>> pve-user mailing list
>> pve-user at pve.proxmox.com
>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>
>>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
--
Gilberto Ferreira