[PVE-User] Proxmox VE 4.0 or Ceph v0.94.3 issues
lyt_yudi
lyt_yudi at icloud.com
Thu Oct 8 09:13:03 CEST 2015
hi, all
I followed http://pve.proxmox.com/wiki/Upgrade_from_3.x_to_4.0
and used the Ceph gitbuilder repo:
"deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3 jessie main"
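i.e. the line went into an APT sources file roughly like this (the file name below is just an example; any sources list entry with that line should do):

# /etc/apt/sources.list.d/ceph.list   (example file name)
deb http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/ref/v0.94.3 jessie main

# then refresh the package index
apt-get update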
Now I get these errors:
root@t2:~# cat /var/log/syslog
...
Oct 8 14:51:34 t2 ceph[5635]: ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'rw,noexec,nodev,noatime,nodiratime,nobarrier,inode64,logbsize=256k,delaylog,allocsize=4M', '--', '/dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.54d7f6f8-7299-4802-b19c-5758f528fdb1', '/var/lib/ceph/tmp/mnt.DSl9QO']' returned non-zero exit status 32
Oct 8 14:51:34 t2 kernel: [ 1347.348868] XFS (sdc1): unknown mount option [delaylog].
Oct 8 14:51:34 t2 ceph[5635]: mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
Oct 8 14:51:34 t2 ceph[5635]: missing codepage or helper program, or other error
Oct 8 14:51:34 t2 ceph[5635]: In some cases useful info is found in syslog - try
Oct 8 14:51:34 t2 ceph[5635]: dmesg | tail or so.
...
root@t2:~# dmesg | tail
[ 44.054829] igb 0000:03:00.3 eth5: igb: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
[ 44.054924] IPv6: ADDRCONF(NETDEV_CHANGE): eth5: link becomes ready
[ 44.645489] XFS (sdd1): unknown mount option [delaylog].
[ 45.113587] XFS (sdc1): unknown mount option [delaylog].
[ 45.148538] XFS (sdb1): unknown mount option [delaylog].
[ 46.585905] ip6_tables: (C) 2000-2006 Netfilter Core Team
[ 46.806416] ip_set: protocol 6
[ 1347.271220] XFS (sdd1): unknown mount option [delaylog].
[ 1347.348868] XFS (sdc1): unknown mount option [delaylog].
[ 1347.378846] XFS (sdb1): unknown mount option [delaylog].
root@t2:~# ceph -s
    cluster 82dc8716-76c7-4a41-983a-6ea04bc98098
     health HEALTH_WARN
            194 pgs stale
            194 pgs stuck stale
     monmap e3: 3 mons at {0=192.168.0.1:6789/0,1=192.168.0.2:6789/0,2=192.168.0.3:6789/0}
            election epoch 56, quorum 0,1,2 0,1,2
     osdmap e827: 9 osds: 2 up, 2 in
      pgmap v163730: 256 pgs, 1 pools, 4146 MB data, 1159 objects
            2062 MB used, 516 GB / 518 GB avail
                 194 stale+active+clean
                  62 active+clean
root@t2:~# ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 2.16997 root default
-2 0.76999     host t1
 0 0.26999         osd.0      down        0          1.00000
 3 0.25000         osd.3      down        0          1.00000
 6 0.25000         osd.6      down        0          1.00000
-3 0.76999     host t2
 1 0.26999         osd.1      down        0          1.00000
 4 0.25000         osd.4      down        0          1.00000
 7 0.25000         osd.7        up  1.00000          1.00000
-4 0.62999     host t3
 2 0.12999         osd.2      down        0          1.00000
 5 0.25000         osd.5      down        0          1.00000
 8 0.25000         osd.8        up  1.00000          1.00000
# ps axjf|grep ceph
3328 8865 8864 3328 pts/0 8864 S+ 0 0:00 \_ grep ceph
1 5740 5740 5740 ? -1 Ss 0 0:00 /bin/bash -c ulimit -n 131072; /usr/bin/ceph-mon -i 1 --pid-file /var/run/ceph/mon.1.pid -c /etc/ceph/ceph.conf --cluster ceph -f
5740 5743 5740 5740 ? -1 Sl 0 0:03 \_ /usr/bin/ceph-mon -i 1 --pid-file /var/run/ceph/mon.1.pid -c /etc/ceph/ceph.conf --cluster ceph -f
So, no OSD processes are running!
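The dmesg lines above make me think the mounts fail because of the delaylog XFS option, which newer kernels no longer accept. A rough check I am considering (a sketch only; it assumes the option string comes from "osd mount options xfs" in ceph.conf and that /dev/sdb1 is one of the affected OSD partitions):

# see whether the mount options come from ceph.conf (default config path)
ceph-conf --name osd.1 --lookup osd_mount_options_xfs

# retry the failing mount by hand with the same options minus delaylog
mkdir -p /mnt/osd-test
mount -t xfs -o rw,noexec,nodev,noatime,nodiratime,nobarrier,inode64,logbsize=256k,allocsize=4M /dev/sdb1 /mnt/osd-test
umount /mnt/osd-test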
Can someone help me?
Thanks.
lyt_yudi
lyt_yudi at icloud.com