[PVE-User] Proxmox VE 4.0 or Ceph v0.94.3 issues

lyt_yudi lyt_yudi at icloud.com
Thu Oct 8 09:42:20 CEST 2015


hi,

Update: I deleted the OSD and re-added it, but the create task failed again:
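For reference, "delete and re-add" here corresponds roughly to the standard OSD removal/re-creation sequence (osd.1 and /dev/sdb are only examples, and the exact steps were done through the PVE tooling). The sketch below is shown in dry-run form, echoing each command instead of executing it, so the order is visible without touching a live cluster:

```shell
# Hedged sketch of the delete-and-readd sequence; 'run' only echoes the
# commands so the order is visible without a live cluster. Replace
# 'echo "+ $*"' with "$@" to actually execute them.
run() { echo "+ $*"; }

run ceph osd out 1                  # mark the OSD out so data migrates off it
run /etc/init.d/ceph stop osd.1     # stop the daemon (sysvinit script on jessie)
run ceph osd crush remove osd.1     # drop it from the CRUSH map
run ceph auth del osd.1             # remove its auth key
run ceph osd rm 1                   # delete the OSD id from the osdmap
run pveceph createosd /dev/sdb      # re-create the OSD via the PVE tooling
```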

create OSD on /dev/sdb (xfs)
Caution: invalid backup GPT header, but valid main header; regenerating
backup header from main header.

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries.
The operation has completed successfully.
libust[1773/1773]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:375)
Setting name!
partNum is 1
REALLY setting name!
The operation has completed successfully.
Setting name!
partNum is 0
REALLY setting name!
The operation has completed successfully.
meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=17655122 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=70620487, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=34482, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'rw,noexec,nodev,noatime,nodiratime,nobarrier,inode64,logbsize=256k,delaylog,allocsize=4M', '--', '/dev/sdb1', '/var/lib/ceph/tmp/mnt.2_fto6']' returned non-zero exit status 32
TASK ERROR: command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 82dc8716-76c7-4a41-983a-6ea04bc98098 /dev/sdb' failed: exit code 1
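The quoted dmesg output below points at the actual cause: PVE 4.0 runs a 4.x kernel, and the XFS `delaylog` mount option was removed in Linux 4.0 (delayed logging has been the only journaling mode since 2.6.39), so mount rejects the option string ceph-disk passes. A minimal sketch of the workaround, assuming the string comes from `osd mount options xfs` in ceph.conf (on PVE, `/etc/pve/ceph.conf`), is to strip `delaylog` from it and retry:

```shell
# The kernel logs "unknown mount option [delaylog]", so 'delaylog' has
# to be dropped from the mount-option string that ceph-disk passes,
# e.g. in the 'osd mount options xfs' setting of ceph.conf.
# Deriving the cleaned option string:
opts='rw,noexec,nodev,noatime,nodiratime,nobarrier,inode64,logbsize=256k,delaylog,allocsize=4M'
fixed=$(printf '%s\n' "$opts" | sed 's/,delaylog//; s/delaylog,//')
echo "$fixed"
# -> rw,noexec,nodev,noatime,nodiratime,nobarrier,inode64,logbsize=256k,allocsize=4M

# A manual mount with the cleaned options would then confirm the fix
# (commented out here; requires the actual disk):
# mount -t xfs -o "$fixed" /dev/sdb1 /mnt
```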


> On 8 Oct 2015, at 15:13, lyt_yudi <lyt_yudi at icloud.com> wrote:
> 
> 
> hi, all
> 
> I followed http://pve.proxmox.com/wiki/Upgrade_from_3.x_to_4.0
> and used the ceph gitbuilder repo:
> "deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3 jessie main"
> 
> Now I get these errors:
> root at t2:~# cat /var/log/syslog
> ...
> Oct  8 14:51:34 t2 ceph[5635]: ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'rw,noexec,nodev,noatime,nodiratime,nobarrier,inode64,logbsize=256k,delaylog,allocsize=4M', '--', '/dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.54d7f6f8-7299-4802-b19c-5758f528fdb1', '/var/lib/ceph/tmp/mnt.DSl9QO']' returned non-zero exit status 32
> Oct  8 14:51:34 t2 kernel: [ 1347.348868] XFS (sdc1): unknown mount option [delaylog].
> Oct  8 14:51:34 t2 ceph[5635]: mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
> Oct  8 14:51:34 t2 ceph[5635]: missing codepage or helper program, or other error
> Oct  8 14:51:34 t2 ceph[5635]: In some cases useful info is found in syslog - try
> Oct  8 14:51:34 t2 ceph[5635]: dmesg | tail or so.
> ...
> 
> root at t2:~# dmesg |tail
> [   44.054829] igb 0000:03:00.3 eth5: igb: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
> [   44.054924] IPv6: ADDRCONF(NETDEV_CHANGE): eth5: link becomes ready
> [   44.645489] XFS (sdd1): unknown mount option [delaylog].
> [   45.113587] XFS (sdc1): unknown mount option [delaylog].
> [   45.148538] XFS (sdb1): unknown mount option [delaylog].
> [   46.585905] ip6_tables: (C) 2000-2006 Netfilter Core Team
> [   46.806416] ip_set: protocol 6
> [ 1347.271220] XFS (sdd1): unknown mount option [delaylog].
> [ 1347.348868] XFS (sdc1): unknown mount option [delaylog].
> [ 1347.378846] XFS (sdb1): unknown mount option [delaylog].
> 
> root at t2:~# ceph -s
>     cluster 82dc8716-76c7-4a41-983a-6ea04bc98098
>      health HEALTH_WARN
>             194 pgs stale
>             194 pgs stuck stale
>      monmap e3: 3 mons at {0=192.168.0.1:6789/0,1=192.168.0.2:6789/0,2=192.168.0.3:6789/0}
>             election epoch 56, quorum 0,1,2 0,1,2
>      osdmap e827: 9 osds: 2 up, 2 in
>       pgmap v163730: 256 pgs, 1 pools, 4146 MB data, 1159 objects
>             2062 MB used, 516 GB / 518 GB avail
>                  194 stale+active+clean
>                   62 active+clean
> 
> root at t2:~# ceph osd tree
> ID WEIGHT  TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY 
> -1 2.16997 root default                                    
> -2 0.76999     host t1                                     
>  0 0.26999         osd.0    down        0          1.00000 
>  3 0.25000         osd.3    down        0          1.00000 
>  6 0.25000         osd.6    down        0          1.00000 
> -3 0.76999     host t2                                     
>  1 0.26999         osd.1    down        0          1.00000 
>  4 0.25000         osd.4    down        0          1.00000 
>  7 0.25000         osd.7      up  1.00000          1.00000 
> -4 0.62999     host t3                                     
>  2 0.12999         osd.2    down        0          1.00000 
>  5 0.25000         osd.5    down        0          1.00000 
>  8 0.25000         osd.8      up  1.00000          1.00000 
> 
> # ps axjf|grep ceph
>  3328  8865  8864  3328 pts/0     8864 S+       0   0:00          \_ grep ceph
>     1  5740  5740  5740 ?           -1 Ss       0   0:00 /bin/bash -c ulimit -n 131072; /usr/bin/ceph-mon -i 1 --pid-file /var/run/ceph/mon.1.pid -c /etc/ceph/ceph.conf --cluster ceph -f
>  5740  5743  5740  5740 ?           -1 Sl       0   0:03  \_ /usr/bin/ceph-mon -i 1 --pid-file /var/run/ceph/mon.1.pid -c /etc/ceph/ceph.conf --cluster ceph -f
> 
> So, no OSD processes are running!
> 
> Can someone help me?
> 
> Thanks. 
> 
> 
> lyt_yudi
> lyt_yudi at icloud.com
> 
> 
> 
> _______________________________________________
> pve-user mailing list
> pve-user at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
