[PVE-User] Create proxmox cluster / storage question.

Eneko Lacunza elacunza at binovo.es
Mon Mar 2 09:22:45 CET 2020


Hello Leandro,

On 28/02/2020 at 15:25, Leandro Roggerone wrote:
> When you talk about redundancy and availability, do you want...
> - HA? (automatic restart of VMs in the other node in case one server fails)
> - Be able to move VMs from one server to the other "fast" (without
> copying the disks)?
> Yes , this is what I want.
> So regarding my original layout from my 5.5 TB storage:
> I'm using 1 TB for LVM, 1 TB for LVM-thin, and 3.5 TB of unassigned
> space; is it ok to use this unassigned space for Ceph?
> Can I set it up later, with the server in production?
You can configure Ceph later, with the server in production, yes.

But for Ceph you want single disks, no RAID. So I'd build the server 
with a 2 × 2 TB RAID1 and leave the other 2 × 2 TB disks for adding to Ceph.

Also, read a bit about Ceph before deploying; it is not "simple" :-)
https://docs.ceph.com/docs/master/start/intro/
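Once you have read the docs, the Proxmox VE side is only a few commands. A rough sketch, assuming Proxmox VE 6.x; the network and the device names (/dev/sdc, /dev/sdd) are placeholders for your own storage network and non-RAID disks:

```shell
# Install the Ceph packages (run on each node)
pveceph install

# Initialize the Ceph configuration once, on the first node;
# the subnet here is an example value
pveceph init --network 10.10.10.0/24

# Create a monitor (run on each of the three nodes)
pveceph mon create

# Create one OSD per single, non-RAID disk
pveceph osd create /dev/sdc
pveceph osd create /dev/sdd
```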

> Other:
> Using an NFS system means having an external server running a file system?
> So you should have at least two servers for the cluster and one for the
> file system?
> It seems to me that using Ceph has a better redundancy plan and it is easier
> to deploy, since I only need two servers. (Am I right?)
For Ceph you need at least 3 servers; one of them can be a simple PC, 
but you need it for the Ceph monitors' quorum. A third node is 
recommended for the Proxmox cluster too, for the same reason.
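For the Proxmox cluster side specifically, the small third machine can also act as an external QDevice instead of a full node (note this only provides corosync quorum, not a Ceph monitor). A sketch; the IP address is an example:

```shell
# On the small third machine (plain Debian is fine): the quorum daemon
apt install corosync-qnetd

# On the Proxmox nodes: the qdevice client package
apt install corosync-qdevice

# On one Proxmox node: register the external quorum device
pvecm qdevice setup 192.168.1.50

# Verify the vote count afterwards
pvecm status
```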

An NFS-based solution is really simpler, but then the NFS server is a 
single point of failure. Ceph will be more resilient, but you have to 
understand how it works. You may find that having only two servers with 
Ceph storage is risky when performing maintenance on one of them.
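The usual way to keep that maintenance window safe is three replicas spread over three nodes, so one node can be down while writes still have two live copies. A sketch of the pool settings; the pool name is an example:

```shell
# Replicated pool: 3 copies, keep serving I/O while at least 2 are alive
pveceph pool create vm-disks --size 3 --min_size 2

# Confirm the settings took effect
ceph osd pool get vm-disks size
ceph osd pool get vm-disks min_size
```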

Regards
Eneko

> Thanks!
>
> On Fri, 28 Feb 2020 at 11:06, Eneko Lacunza (<elacunza at binovo.es>)
> wrote:
>
>> Hello Leandro,
>>
>> On 28/02/2020 at 14:43, Leandro Roggerone wrote:
>>> Regarding your question about the target use for this server:
>>> I have a dell R610 with 6 drive bays.
>>> Today I have 4 × 2 TB drives in RAID5, resulting in 5.5 TB capacity.
>>> I will add 2 SSD drives later in RAID1 for applications that need more
>>> read speed.
>>> The purpose for this server is to run proxmox with some VMs for external
>>> and internal access.
>>> I'm planning to build a second server and create a cluster just to have
>>> more redundancy and availability.
>>>
>>> I would like to set up all I can now, while the server is not in
>>> production, and minimize risk later.
>>> That's why I'm asking so many questions.
>> Asking is good, but we need info to be able to help you ;)
>>
>> When you talk about redundancy and availability, do you want...
>> - HA? (automatic restart of VMs in the other node in case one server fails)
>> - Be able to move VMs from one server to the other "fast" (without
>> copying the disks)?
>>
>> If your answer is yes to any of the previous questions, you have to look
>> at using a NFS server or deploying Ceph.
>>
>> If it's no, then we can talk about local storage in your servers. What
>> RAID card do you have in that server? Does it have write cache
>> (non-volatile or battery-backed)? If it doesn't, RAID5 could prove slow
>> (and eat quite a lot of CPU); I suggest you use 2×RAID1 or a RAID10
>> setup. Also, please bear in mind that RAID5 with "big" disks is
>> considered quite insecure (the risk of a second disk failure during
>> recovery is high).
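(A back-of-envelope check of that rebuild risk, assuming the common consumer-drive spec of one unrecoverable read error per 1e14 bits: rebuilding a degraded 4 × 2 TB RAID5 must read the 3 surviving disks in full, and a single URE is enough to abort the rebuild.)

```python
# Chance of hitting at least one URE while rebuilding a degraded
# 4 x 2 TB RAID5, i.e. reading all 3 surviving disks end to end.
URE_PER_BIT = 1e-14                       # typical consumer-drive spec value
surviving_disks = 3
bits_read = surviving_disks * 2e12 * 8    # 3 disks x 2 TB x 8 bits/byte

p_fail = 1 - (1 - URE_PER_BIT) ** bits_read
print(f"{p_fail:.0%}")                    # roughly a 1-in-3 chance of failure
```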
>>
>> Regards
>> Eneko
>>> Regards.
>>> Leandro.
>>>
>>>
>>> On Fri, 28 Feb 2020 at 5:49, Eneko Lacunza (<elacunza at binovo.es>)
>>> wrote:
>>>
>>>> Hello Leandro,
>>>>
>>>> On 27/02/2020 at 17:29, Leandro Roggerone wrote:
>>>>> Hi guys, I'm still tuning my 5.5 TB server.
>>>>> While setting storage options during the install process, I set 2000 for
>>>>> the HD size, so I have 3.5 TB free to assign later.
>>>>>
>>>>> my layout is as follows:
>>>>> root at pve:~# lsblk
>>>>> NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>>>>> sda                  8:0    0   5.5T  0 disk
>>>>> ├─sda1               8:1    0  1007K  0 part
>>>>> ├─sda2               8:2    0   512M  0 part
>>>>> └─sda3               8:3    0     2T  0 part
>>>>>      ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
>>>>>      ├─pve-root       253:1    0     1T  0 lvm  /
>>>>>      ├─pve-data_tmeta 253:2    0     9G  0 lvm
>>>>>      │ └─pve-data     253:4    0 949.6G  0 lvm
>>>>>      └─pve-data_tdata 253:3    0 949.6G  0 lvm
>>>>>        └─pve-data     253:4    0 949.6G  0 lvm
>>>>> sr0                 11:0    1  1024M  0 rom
>>>>>
>>>>> My question is:
>>>>> Is it possible to expand sda3 partition later without service outage ?
>>>>> Is it possible to expand pve group on sda3 partition ?
>>>> You don't really need to expand sda3. You can just create a new
>>>> partition, create a new PV on it, and add the new PV to the pve VG.
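Those steps, sketched as commands; the partition number (/dev/sda4) and the sizes are placeholders for this layout:

```shell
# Create a new partition in the unused space (GPT, type Linux LVM)
sgdisk --new=4:0:0 --typecode=4:8e00 /dev/sda
partprobe /dev/sda

# Turn it into an LVM physical volume and add it to the existing VG
pvcreate /dev/sda4
vgextend pve /dev/sda4

# Grow the thin pool (and root, if desired) online, no outage needed
lvextend -L +1T pve/data
lvextend -r -L +100G pve/root
```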
>>>>
>>>>> In case I create a Proxmox cluster, what should I do with that 3.5 TB
>>>>> of free space?
>>>> I don't know really how to reply to this. If you're building a cluster,
>>>> I suggest you configure some kind of shared storage; NFS server or Ceph
>>>> cluster for example.
>>>>
>>>>> Is there a best partition type suited for this? Can I do it without a
>>>>> service outage?
>>>> For what?
>>>>
>>>>> I don't have any service running yet, so I can experiment with what it
>>>>> takes. Any thoughts about this would be great.
>>>> Maybe you can start telling us your target use for this server/cluster.
>>>> Also some detailed spec of the server would help; for example, does it
>>>> have a RAID card with more than one disk, or are you using a single
>>>> 6 TB disk?
>>>>
>>>> Cheers
>>>> Eneko
>>>>
>>>> --
>>>> Zuzendari Teknikoa / Director Técnico
>>>> Binovo IT Human Project, S.L.
>>>> Telf. 943569206
>>>> Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
>>>> www.binovo.es
>>>>
>>>> _______________________________________________
>>>> pve-user mailing list
>>>> pve-user at pve.proxmox.com
>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>>


