[PVE-User] ceph-0.94.3 can't add the osd!

lyt_yudi lyt_yudi at icloud.com
Thu Sep 3 14:54:14 CEST 2015


hi,

a note for the record:

I downgraded to Giant, set up the Ceph cluster, and then upgraded to Hammer (0.94.3); the cluster now runs without problems.
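For reference, the upgrade path described above roughly follows the standard Ceph rolling-upgrade procedure. The sketch below shows what that sequence could look like on one PVE 3.4 node; the repository line and the exact commands are my assumptions from the general Ceph upgrade procedure, not something stated in this thread.

```shell
# Sketch of a Giant -> Hammer upgrade on a single node (assumptions, not from this thread).
# 1. Point apt at the Hammer repository (repo URL/suite are illustrative).
echo "deb http://ceph.com/debian-hammer wheezy main" > /etc/apt/sources.list.d/ceph.list
apt-get update && apt-get install -y ceph    # should pull 0.94.x

# 2. Restart daemons in the recommended order: monitors first, then OSDs.
/etc/init.d/ceph restart mon
/etc/init.d/ceph restart osd

# 3. Verify the running version and cluster health before touching the next node.
ceph --version
ceph -s
```

Repeat per node, waiting for `HEALTH_OK` between nodes.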


> On Aug 31, 2015, at 3:57 PM, lyt_yudi <lyt_yudi at icloud.com> wrote:
> 
> hi,all
> 
> From the CLI, it looks normal:
> # pveceph createosd /dev/sdb -journal_dev /dev/sdm
> create OSD on /dev/sdb (xfs)
> using device '/dev/sdm' for journal
> Caution: invalid backup GPT header, but valid main header; regenerating
> backup header from main header.
> 
> ****************************************************************************
> Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
> verification and recovery are STRONGLY recommended.
> ****************************************************************************
> GPT data structures destroyed! You may now partition the disk using fdisk or
> other utilities.
> Creating new GPT entries.
> The operation has completed successfully.
> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
> Information: Moved requested sector from 34 to 2048 in
> order to align on 2048-sector boundaries.
> The operation has completed successfully.
> Information: Moved requested sector from 34 to 2048 in
> order to align on 2048-sector boundaries.
> The operation has completed successfully.
> meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=244188597 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=976754385, imaxpct=5
>          =                       sunit=0      swidth=0 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=476930, version=2
>          =                       sectsz=512   sunit=0 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> The operation has completed successfully.
> 
> But no OSD process appears, and restarting the service has no effect:
> # /etc/init.d/ceph restart osd
> — no log output here —
> 
> In the PVE GUI the disk shows up as partitions, but the "add OSD" task has disappeared:
> <PastedGraphic-1.tiff>
> <PastedGraphic-2.tiff>
> 
> # pveversion -v
> proxmox-ve-2.6.32: 3.4-160 (running kernel: 2.6.32-40-pve)
> pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
> pve-kernel-2.6.32-40-pve: 2.6.32-160
> lvm2: 2.02.98-pve4
> clvm: 2.02.98-pve4
> corosync-pve: 1.4.7-1
> openais-pve: 1.1.4-3
> libqb0: 0.11.1-2
> redhat-cluster-pve: 3.2.0-2
> resource-agents-pve: 3.9.2-4
> fence-agents-pve: 4.0.10-3
> pve-cluster: 3.0-18
> qemu-server: 3.4-6
> pve-firmware: 1.1-4
> libpve-common-perl: 3.0-24
> libpve-access-control: 3.0-16
> libpve-storage-perl: 3.0-33
> pve-libspice-server1: 0.12.4-3
> vncterm: 1.1-8
> vzctl: 4.0-1pve6
> vzprocps: 2.0.11-2
> vzquota: 3.1-2
> pve-qemu-kvm: 2.2-11
> ksm-control-daemon: 1.1-1
> glusterfs-client: 3.5.2-1
> 
> thanks.
> 
> lyt_yudi
> lyt_yudi at icloud.com
> 
> 
> 

