[PVE-User] pveceph : Unable to add any OSD

Alwin Antreich a.antreich at proxmox.com
Mon Sep 25 10:59:35 CEST 2017


Hi Phil,

On Sat, Sep 23, 2017 at 03:26:10PM +0200, Phil Schwarz wrote:
> Hi, thanks for the advice.
>
> 1. Did a ceph-disk activate of the device; the process crashed with an illegal instruction:
What hardware are you using? This looks like the following bug:
http://tracker.ceph.com/issues/20529

It has been fixed upstream and is in Ceph now, but it will take a couple
of days until it appears in our repositories.
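
An "Illegal instruction" crash usually means the binary was built with CPU
instructions your processor does not provide, which is why the hardware
matters here. As a rough check (plain Linux tools, nothing Ceph-specific),
you could look at the CPU flags, for example:

    # show the CPU model and its instruction-set flags
    lscpu | grep -E 'Model name|Flags'

    # SSE 4.2 is one extension commonly used by crc32 code paths
    grep -o -m1 'sse4_2' /proc/cpuinfo || echo "sse4_2 not available"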

>
>
> root at arya:~# ceph-disk activate /dev/sdb1
> got monmap epoch 4
> *** Caught signal (Illegal instruction) **
>  in thread 7fd3d8c6ae00 thread_name:ceph-osd
>  ceph version 12.2.0 (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous
> (rc)
>  1: (()+0xa07bb4) [0x561da8a16bb4]
>  2: (()+0x110c0) [0x7fd3d64820c0]
>  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871)
> [0x561da8ede8d1]
>  4:
> (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc)
> [0x561da8dc2a6c]
>  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool,
> bool)+0x11f) [0x561da8d89e8f]
>  6: (rocksdb::DB::Open(rocksdb::DBOptions const&,
> std::__cxx11::basic_string<char, std::char_traits<char>,
> std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
> std::vector<rocksdb::ColumnFamilyHandle*,
> std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xe40)
> [0x561da8d8b900]
>  7: (rocksdb::DB::Open(rocksdb::Options const&,
> std::__cxx11::basic_string<char, std::char_traits<char>,
> std::allocator<char> > const&, rocksdb::DB**)+0x698) [0x561da8d8d168]
>  8: (RocksDBStore::_test_init(std::__cxx11::basic_string<char,
> std::char_traits<char>, std::allocator<char> > const&)+0x52)
> [0x561da8959322]
>  9: (FileStore::mkfs()+0x7b6) [0x561da880c096]
>  10: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char,
> std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x346)
> [0x561da844edf6]
>  11: (main()+0xe9b) [0x561da839ed1b]
>  12: (__libc_start_main()+0xf1) [0x7fd3d54372b1]
>  13: (_start()+0x2a) [0x561da842aa0a]
> 2017-09-23 14:52:22.114127 7fd3d8c6ae00 -1 *** Caught signal (Illegal
> instruction) **
>  in thread 7fd3d8c6ae00 thread_name:ceph-osd
>
>  ceph version 12.2.0 (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous
> (rc)
>  1: (()+0xa07bb4) [0x561da8a16bb4]
>  2: (()+0x110c0) [0x7fd3d64820c0]
>  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871)
> [0x561da8ede8d1]
>  4:
> (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc)
> [0x561da8dc2a6c]
>  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool,
> bool)+0x11f) [0x561da8d89e8f]
>  6: (rocksdb::DB::Open(rocksdb::DBOptions const&,
> std::__cxx11::basic_string<char, std::char_traits<char>,
> std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
> std::vector<rocksdb::ColumnFamilyHandle*,
> std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xe40)
> [0x561da8d8b900]
>  7: (rocksdb::DB::Open(rocksdb::Options const&,
> std::__cxx11::basic_string<char, std::char_traits<char>,
> std::allocator<char> > const&, rocksdb::DB**)+0x698) [0x561da8d8d168]
>  8: (RocksDBStore::_test_init(std::__cxx11::basic_string<char,
> std::char_traits<char>, std::allocator<char> > const&)+0x52)
> [0x561da8959322]
>  9: (FileStore::mkfs()+0x7b6) [0x561da880c096]
>  10: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char,
> std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x346)
> [0x561da844edf6]
>  11: (main()+0xe9b) [0x561da839ed1b]
>  12: (__libc_start_main()+0xf1) [0x7fd3d54372b1]
>  13: (_start()+0x2a) [0x561da842aa0a]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.
>
>      0> 2017-09-23 14:52:22.114127 7fd3d8c6ae00 -1 *** Caught signal
> (Illegal instruction) **
>  in thread 7fd3d8c6ae00 thread_name:ceph-osd
>
>  ceph version 12.2.0 (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous
> (rc)
>  1: (()+0xa07bb4) [0x561da8a16bb4]
>  2: (()+0x110c0) [0x7fd3d64820c0]
>  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871)
> [0x561da8ede8d1]
>  4:
> (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc)
> [0x561da8dc2a6c]
>  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool,
> bool)+0x11f) [0x561da8d89e8f]
>  6: (rocksdb::DB::Open(rocksdb::DBOptions const&,
> std::__cxx11::basic_string<char, std::char_traits<char>,
> std::allocator<char> > const&, std::vector<rocksdb::ColumnFamilyDescriptor,
> std::allocator<rocksdb::ColumnFamilyDescriptor> > const&,
> std::vector<rocksdb::ColumnFamilyHandle*,
> std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xe40)
> [0x561da8d8b900]
>  7: (rocksdb::DB::Open(rocksdb::Options const&,
> std::__cxx11::basic_string<char, std::char_traits<char>,
> std::allocator<char> > const&, rocksdb::DB**)+0x698) [0x561da8d8d168]
>  8: (RocksDBStore::_test_init(std::__cxx11::basic_string<char,
> std::char_traits<char>, std::allocator<char> > const&)+0x52)
> [0x561da8959322]
>  9: (FileStore::mkfs()+0x7b6) [0x561da880c096]
>  10: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char,
> std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x346)
> [0x561da844edf6]
>  11: (main()+0xe9b) [0x561da839ed1b]
>  12: (__libc_start_main()+0xf1) [0x7fd3d54372b1]
>  13: (_start()+0x2a) [0x561da842aa0a]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to
> interpret this.
>
> mount_activate: Failed to activate
> Traceback (most recent call last):
>   File "/usr/sbin/ceph-disk", line 11, in <module>
>     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5704, in
> run
>     main(sys.argv[1:])
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5657, in
> main
>     main_catch(args.func, args)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5682, in
> main_catch
>     func(args)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3759, in
> main_activate
>     reactivate=args.reactivate,
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3522, in
> mount_activate
>     (osd_id, cluster) = activate(path, activate_key_template, init)
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3699, in
> activate
>     keyring=keyring,
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3166, in
> mkfs
>     '--setgroup', get_ceph_group(),
>   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 558, in
> command_check_call
>     return subprocess.check_call(arguments)
>   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
>     raise CalledProcessError(retcode, cmd)
> subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster',
> 'ceph', '--mkfs', '-i', u'0', '--monmap',
> '/var/lib/ceph/tmp/mnt.IwlFM9/activate.monmap', '--osd-data',
> '/var/lib/ceph/tmp/mnt.IwlFM9', '--osd-journal',
> '/var/lib/ceph/tmp/mnt.IwlFM9/journal', '--osd-uuid',
> u'89fce23c-8535-48fa-bfc0-ae9a2a5d7cd6', '--setuser', 'ceph', '--setgroup',
> 'ceph']' returned non-zero exit status -4
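
A side note on the exit code: when Python's subprocess reports a negative
status such as -4, it means the child process was killed by that signal
number. Signal 4 is SIGILL, which matches the "Illegal instruction" crash
above. You can confirm the mapping from a shell:

    kill -l 4   # prints ILL (illegal instruction)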
>
>
>
>
>
> 2. ceph auth list:
> root at arya:~# ceph auth list
> installed auth entries:
>
> osd.0
> 	key: First key==
> 	caps: [mgr] allow profile osd
> 	caps: [mon] allow profile osd
> 	caps: [osd] allow *
> osd.1
> 	key: Second key
> 	caps: [mgr] allow profile osd
> 	caps: [mon] allow profile osd
> 	caps: [osd] allow *
> ....
>
> It didn't show anything that seems related to my issue.
>
>
>
> 3. On arya (with osd.0): what is the status of the service?
> root at arya:~# ps ax |grep ceph
>  1941 pts/0    S+     0:00 grep ceph
>
> So no ceph-osd process is running?
> root at arya:~# /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph
> --setgroup ceph
> 2017-09-23 15:05:03.120960 7ffa38272e00 -1  ** ERROR: unable to open OSD
> superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
>
> Is the OSD mounted?
> root at arya:~# mount |grep ceph
> root at arya:~#
>
> No OSD is mounted under /var/lib/ceph/osd/, so there is no keyring inside either.
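
A few read-only checks that should be safe to run at this point (assuming
the standard Luminous tooling is installed), to see what ceph-disk and the
kernel think of the disk:

    # list partitions as ceph-disk sees them (data/journal role, cluster, osd id)
    ceph-disk list

    # show filesystem signatures/labels on the new disk
    lsblk -f /dev/sdb

    # the OSD directory only gets its keyring once activation succeeds
    ls -l /var/lib/ceph/osd/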
>
>
> 4. Trying manually (starting to get worried...)
> mkdir /var/lib/ceph/osd/ceph-0
> mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
>
> /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
> and
> ceph-disk activate /dev/sdb
> both lead to the same issue, with the same crash as seen previously.
>
>
> I'm stuck with an unusable disk...
> My rbd pool is reported as 103% full...
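
To see where the space actually went while osd.0 stays down, the usual
read-only queries are (both exist in Luminous):

    ceph df detail    # per-pool usage
    ceph osd df tree  # per-OSD utilisation, weight and variance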
>
> Thanks
> Best regards
>
>
>
>
>
>
> Le 22/09/2017 à 15:09, Alwin Antreich a écrit :
> > Hi Phil,
> >
> > On Thu, Sep 21, 2017 at 09:34:53PM +0200, Phil Schwarz wrote:
> > > Hi,
> > > was the information I gave sufficient to find a solution?
> > > Thanks
> > > Best regards
> > >
> > >
> > >
> > >
> > > Le 18/09/2017 à 21:12, Phil Schwarz a écrit :
> > > > Thanks for your help,
> > > >
> > > > Le 18/09/2017 à 12:37, Alwin Antreich a écrit :
> > > > > On Sun, Sep 17, 2017 at 11:18:51AM +0200, Phil Schwarz wrote:
> > > > > > Hi,
> > > > > > going on on the same problem (links [1] & [2] )
> > > > > >
> > > > > > [1] : https://pve.proxmox.com/pipermail/pve-user/2017-July/168578.html
> > > > > > [2] :
> > > > > > https://pve.proxmox.com/pipermail/pve-user/2017-September/168775.html
> > > > > >
> > > > > > - Added a brand new node, updated to the latest Ceph version (the one
> > > > > > recompiled by the Proxmox team)
> > > > > Can you please post a 'ceph versions' and a 'ceph osd tree' to get some
> > > > > overview on your setup?
> > > >
> > > > root at arya:~# ceph version
> > > > ceph version 12.2.0 (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc)
> > > >
> > > >
> > > > root at arya:~# ceph osd tree
> > > > ID CLASS WEIGHT   TYPE NAME         STATUS REWEIGHT PRI-AFF
> > > > -1       10.06328 root default
> > > > -3              0     host daenerys
> > > > -5        1.81360     host jaime
> > > >    5   hdd  1.81360         osd.5         up  1.00000 1.00000
> > > > -2        6.59999     host jon
> > > >    1   hdd  4.20000         osd.1         up  1.00000 1.00000
> > > >    3   hdd  2.39999         osd.3         up  1.00000 1.00000
> > > > -4        1.64969     host tyrion
> > > >    2   hdd  0.44969         osd.2         up  1.00000 1.00000
> > > >    4   hdd  1.20000         osd.4         up  1.00000 1.00000
> > > >    0              0 osd.0               down        0 1.00000
> > Check on the system with osd.0 what the status of the service is and
> > what the logs show. Also check whether the mounted directory contains the
> > ceph keys for the OSD and whether it shows up in 'ceph auth list'.
> >
> > You may need to do a ceph-disk activate (or remove the OSD and retry).
> > http://docs.ceph.com/docs/jewel/install/manual-deployment/#adding-osds
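
Roughly, the remove-and-retry path from that guide looks like the sketch
below. It is only an outline: it assumes osd.0 holds no data yet (it never
came up) and that /dev/sdb really is the new disk, so please double-check
before running anything destructive:

    # take the half-created osd.0 out of the cluster maps
    ceph osd out 0
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0

    # wipe leftover partitions/labels on the disk, then recreate the OSD
    ceph-disk zap /dev/sdb
    pveceph createosd /dev/sdb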
> >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > - Plugged in a new disk
> > > > > >
> > > > > > - Used the GUI (same result with pveceph createosd from the CLI) to
> > > > > > create a new OSD
> > > > > > (with the bluestore feature).
> > > > > The PVE GUI and CLI use the same API for managing Ceph.
> > > > >
> > > > > >
> > > > > > 1. The OSD doesn't appear in the GUI
> > > > > > 2. The OSD is seen as down and not attached to any node
> > > > > > 3. The /var/log/ceph/ceph-osd.admin.log logfile seems to show a
> > > > > > mismatch between filestore and bluestore:
> > > >
> > > > > Do you see any errors in the mon logs or ceph.log itself?
> > > > (jaime is a mon & mgr)
> > > > root at jaime:~# tail -f /var/log/ceph/ceph-mon.1.log
> > > >
> > > > 2017-09-18 21:05:00.084847 7f8a1b4a8700  1 mon.1 at 0(leader).log v2152264
> > > > check_sub sending message to client.5804116 10.250.0.23:0/4045099631
> > > > with 0 entries (version 2152264)
> > > > 2017-09-18 21:05:09.963784 7f8a1868c700  0
> > > > mon.1 at 0(leader).data_health(2028) update_stats avail 90% total 58203 MB,
> > > > used 2743 MB, avail 52474 MB
> > > > 2017-09-18 21:05:29.878648 7f8a15e87700  0 mon.1 at 0(leader) e4
> > > > handle_command mon_command({"prefix": "osd new", "uuid":
> > > > "89fce23c-8535-48fa-bfc0-ae9a2a5d7cd6"} v 0) v1
> > > > 2017-09-18 21:05:29.878705 7f8a15e87700  0 log_channel(audit) log [INF]
> > > > : from='client.6392525 -' entity='client.bootstrap-osd' cmd=[{"prefix":
> > > > "osd new", "uuid": "89fce23c-8535-48fa-bfc0-ae9a2a5d7cd6"}]: dispatch
> > > > 2017-09-18 21:05:29.927377 7f8a1b4a8700  1 mon.1 at 0(leader).osd e1141
> > > > e1141: 6 total, 5 up, 5 in
> > > > 2017-09-18 21:05:29.932253 7f8a1b4a8700  0 log_channel(audit) log [INF]
> > > > : from='client.6392525 -' entity='client.bootstrap-osd' cmd='[{"prefix":
> > > > "osd new", "uuid": "89fce23c-8535-48fa-bfc0-ae9a2a5d7cd6"}]': finished
> > > > 2017-09-18 21:05:29.932388 7f8a1b4a8700  0 log_channel(cluster) log
> > > > [DBG] : osdmap e1141: 6 total, 5 up, 5 in
> > > > 2017-09-18 21:05:29.932983 7f8a15e87700  0 mon.1 at 0(leader) e4
> > > > handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
> > > > 2017-09-18 21:05:29.933040 7f8a15e87700  0 log_channel(audit) log [DBG]
> > > > : from='client.5804116 10.250.0.23:0/4045099631' entity='mgr.jon'
> > > > cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
> > > > 2017-09-18 21:05:29.933337 7f8a15e87700  0 mon.1 at 0(leader) e4
> > > > handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
> > > > 2017-09-18 21:05:29.933383 7f8a15e87700  0 log_channel(audit) log [DBG]
> > > > : from='client.5804116 10.250.0.23:0/4045099631' entity='mgr.jon'
> > > > cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
> > > > 2017-09-18 21:05:29.933674 7f8a15e87700  0 mon.1 at 0(leader) e4
> > > > handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0) v1
> > > > 2017-09-18 21:05:29.933692 7f8a15e87700  0 log_channel(audit) log [DBG]
> > > > : from='client.5804116 10.250.0.23:0/4045099631' entity='mgr.jon'
> > > > cmd=[{"prefix": "osd metadata", "id": 3}]: dispatch
> > > > 2017-09-18 21:05:29.933880 7f8a15e87700  0 mon.1 at 0(leader) e4
> > > > handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0) v1
> > > > 2017-09-18 21:05:29.933897 7f8a15e87700  0 log_channel(audit) log [DBG]
> > > > : from='client.5804116 10.250.0.23:0/4045099631' entity='mgr.jon'
> > > > cmd=[{"prefix": "osd metadata", "id": 4}]: dispatch
> > > > 2017-09-18 21:05:29.934062 7f8a15e87700  0 mon.1 at 0(leader) e4
> > > > handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0) v1
> > > > 2017-09-18 21:05:29.934089 7f8a15e87700  0 log_channel(audit) log [DBG]
> > > > : from='client.5804116 10.250.0.23:0/4045099631' entity='mgr.jon'
> > > > cmd=[{"prefix": "osd metadata", "id": 5}]: dispatch
> > > > 2017-09-18 21:05:30.113007 7f8a1b4a8700  1 mon.1 at 0(leader).log v2152265
> > > > check_sub sending message to client.5804116 10.250.0.23:0/4045099631
> > > > with 3 entries (version 2152265)
> > > > 2017-09-18 21:05:31.154227 7f8a1b4a8700  1 mon.1 at 0(leader).log v2152266
> > > > check_sub sending message to client.5804116 10.250.0.23:0/4045099631
> > > > with 0 entries (version 2152266)
> > > > 2017-09-18 21:05:32.289428 7f8a1b4a8700  1 mon.1 at 0(leader).log v2152267
> > > > check_sub sending message to client.5804116 10.250.0.23:0/4045099631
> > > > with 0 entries (version 2152267)
> > > > 2017-09-18 21:05:36.782573 7f8a1b4a8700  1 mon.1 at 0(leader).log v2152268
> > > > check_sub sending message to client.5804116 10.250.0.23:0/4045099631
> > > > with 0 entries (version 2152268)
> > > > 2017-09-18 21:06:09.964314 7f8a1868c700  0
> > > > mon.1 at 0(leader).data_health(2028) update_stats avail 90% total 58203 MB,
> > > > used 2744 MB, avail 52473 MB
> > > > 2017-09-18 21:06:20.040930 7f8a1b4a8700  1 mon.1 at 0(leader).log v2152269
> > > > check_sub sending message to client.5804116 10.250.0.23:0/4045099631
> > > > with 0 entries (version 2152269)
> > > >
> > > >
> > > > And ceph.log
> > > >
> > > > root at jaime:~# tail -f /var/log/ceph/ceph.log
> > > > 2017-09-18 12:00:00.000160 mon.1 mon.0 10.250.0.21:6789/0 38100 :
> > > > cluster [ERR] overall HEALTH_ERR 3 backfillfull osd(s); 51727/1415883
> > > > objects misplaced (3.653%); Degraded data redundancy: 73487/1415883
> > > > objects degraded (5.190%), 30 pgs unclean, 21 pgs degraded, 21 pgs
> > > > undersized; Degraded data redundancy (low space): 29 pgs
> > > > backfill_toofull; application not enabled on 2 pool(s)
> > > > 2017-09-18 13:00:00.000160 mon.1 mon.0 10.250.0.21:6789/0 38101 :
> > > > cluster [ERR] overall HEALTH_ERR 3 backfillfull osd(s); 51727/1415883
> > > > objects misplaced (3.653%); Degraded data redundancy: 73487/1415883
> > > > objects degraded (5.190%), 30 pgs unclean, 21 pgs degraded, 21 pgs
> > > > undersized; Degraded data redundancy (low space): 29 pgs
> > > > backfill_toofull; application not enabled on 2 pool(s)
> > > > 2017-09-18 14:00:00.000133 mon.1 mon.0 10.250.0.21:6789/0 38102 :
> > > > cluster [ERR] overall HEALTH_ERR 3 backfillfull osd(s); 51727/1415883
> > > > objects misplaced (3.653%); Degraded data redundancy: 73487/1415883
> > > > objects degraded (5.190%), 30 pgs unclean, 21 pgs degraded, 21 pgs
> > > > undersized; Degraded data redundancy (low space): 29 pgs
> > > > backfill_toofull; application not enabled on 2 pool(s)
> > > > 201
> > > >
> > > > Yes, the cluster is not really healthy, indeed ....
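
One of the recurring items, "application not enabled on 2 pool(s)", is a new
Luminous warning and is normally cleared by tagging each pool with its
application; the pool name below is a placeholder, check the pool listing
for the two untagged ones:

    ceph osd pool ls detail
    ceph osd pool application enable <poolname> rbd

The backfillfull/backfill_toofull entries will only go away once the full
OSDs drop below the backfillfull ratio again, i.e. after freeing space or
adding capacity.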
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > >
> > > > > > 2017-09-16 19:12:00.468481 7f6469cdde00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 5624
> > > > > > 2017-09-16 19:12:00.470154 7f6469cdde00 -1 bluestore(/dev/sdb2)
> > > > > > _read_bdev_label unable to decode label at offset 102:
> > > > > > buffer::malformed_input: void
> > > > > > bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&)
> > > > > > decode past
> > > > > > end of struct encoding
> > > > > > 2017-09-16 19:12:00.471408 7f6469cdde00  1 journal _open /dev/sdb2 fd 4:
> > > > > > 750050447360 bytes, block size 4096 bytes, directio = 0, aio = 0
> > > > > > 2017-09-16 19:12:00.471727 7f6469cdde00  1 journal close /dev/sdb2
> > > > > > 2017-09-16 19:12:00.471994 7f6469cdde00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is filestore, 00000000-0000-0000-0000-000000000000
> > > > > > 2017-09-16 19:12:05.042622 7f000b944e00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 5702
> > > > > > 2017-09-16 19:12:05.066343 7f000b944e00 -1 bluestore(/dev/sdb2)
> > > > > > _read_bdev_label unable to decode label at offset 102:
> > > > > > buffer::malformed_input: void
> > > > > > bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&)
> > > > > > decode past
> > > > > > end of struct encoding
> > > > > > 2017-09-16 19:12:05.066549 7f000b944e00  1 journal _open /dev/sdb2 fd 4:
> > > > > > 750050447360 bytes, block size 4096 bytes, directio = 0, aio = 0
> > > > > > 2017-09-16 19:12:05.066717 7f000b944e00  1 journal close /dev/sdb2
> > > > > > 2017-09-16 19:12:05.066843 7f000b944e00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is filestore, 00000000-0000-0000-0000-000000000000
> > > > > > 2017-09-16 19:12:08.198548 7f5740748e00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 5767
> > > > > > 2017-09-16 19:12:08.223674 7f5740748e00 -1 bluestore(/dev/sdb2)
> > > > > > _read_bdev_label unable to decode label at offset 102:
> > > > > > buffer::malformed_input: void
> > > > > > bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&)
> > > > > > decode past
> > > > > > end of struct encoding
> > > > > > 2017-09-16 19:12:08.223831 7f5740748e00  1 journal _open /dev/sdb2 fd 4:
> > > > > > 750050447360 bytes, block size 4096 bytes, directio = 0, aio = 0
> > > > > > 2017-09-16 19:12:08.224213 7f5740748e00  1 journal close /dev/sdb2
> > > > > > 2017-09-16 19:12:08.224342 7f5740748e00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is filestore, 00000000-0000-0000-0000-000000000000
> > > > > > 2017-09-16 19:12:09.149622 7f7b06058e00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 5800
> > > > > > 2017-09-16 19:12:09.173319 7f7b06058e00 -1 bluestore(/dev/sdb2)
> > > > > > _read_bdev_label unable to decode label at offset 102:
> > > > > > buffer::malformed_input: void
> > > > > > bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&)
> > > > > > decode past
> > > > > > end of struct encoding
> > > > > > 2017-09-16 19:12:09.173402 7f7b06058e00  1 journal _open /dev/sdb2 fd 4:
> > > > > > 750050447360 bytes, block size 4096 bytes, directio = 0, aio = 0
> > > > > > 2017-09-16 19:12:09.173485 7f7b06058e00  1 journal close /dev/sdb2
> > > > > > 2017-09-16 19:12:09.173511 7f7b06058e00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is filestore, 00000000-0000-0000-0000-000000000000
> > > > > > 2017-09-16 19:12:10.197944 7f7561d50e00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 5828
> > > > > > 2017-09-16 19:12:10.222504 7f7561d50e00 -1 bluestore(/dev/sdb2)
> > > > > > _read_bdev_label unable to decode label at offset 102:
> > > > > > buffer::malformed_input: void
> > > > > > bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&)
> > > > > > decode past
> > > > > > end of struct encoding
> > > > > > 2017-09-16 19:12:10.222723 7f7561d50e00  1 journal _open /dev/sdb2 fd 4:
> > > > > > 750050447360 bytes, block size 4096 bytes, directio = 0, aio = 0
> > > > > > 2017-09-16 19:12:10.222753 7f7561d50e00  1 journal close /dev/sdb2
> > > > > > 2017-09-16 19:12:10.222785 7f7561d50e00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is filestore, 00000000-0000-0000-0000-000000000000
> > > > > > 2017-09-16 19:12:14.370797 7f9fecb7fe00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 5964
> > > > > > 2017-09-16 19:12:14.371221 7f9fecb7fe00 -1 bluestore(/dev/sdb2)
> > > > > > _read_bdev_label unable to decode label at offset 102:
> > > > > > buffer::malformed_input: void
> > > > > > bluestore_bdev_label_t::decode(ceph::buffer::list::iterator&)
> > > > > > decode past
> > > > > > end of struct encoding
> > > > > > 2017-09-16 19:12:14.371350 7f9fecb7fe00  1 journal _open /dev/sdb2 fd 4:
> > > > > > 750050447360 bytes, block size 4096 bytes, directio = 0, aio = 0
> > > > > > 2017-09-16 19:12:14.371616 7f9fecb7fe00  1 journal close /dev/sdb2
> > > > > > 2017-09-16 19:12:14.371745 7f9fecb7fe00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is filestore, 00000000-0000-0000-0000-000000000000
> > > > > > 2017-09-16 19:12:21.171036 7f5d7579be00  0 ceph version 12.2.0
> > > > > > (36f6c5ea099d43087ff0276121fd34e71668ae0e) luminous (rc),
> > > > > > process (unknown),
> > > > > > pid 6130
> > > > > > 2017-09-16 19:12:21.209441 7f5d7579be00  0
> > > > > > probe_block_device_fsid /dev/sdb2
> > > > > > is bluestore, 92a4a9eb-0a6a-405d-be83-11e4af42fa30
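
The probe messages above alternate between a failed bluestore label read and
a filestore guess on /dev/sdb2, which usually points at leftover metadata
from an earlier create attempt. Two non-destructive ways to inspect what is
actually on that partition (assuming the Luminous packages are installed):

    # list filesystem/label signatures without erasing anything
    wipefs /dev/sdb2

    # try to read a bluestore label directly
    ceph-bluestore-tool show-label --dev /dev/sdb2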
> > > > > >
> > > > > >
> > > > > >
> > > > > > Any hint?
> > > > > >
> > > > > > Thanks in advance
> > > > > > Best regards
> > > > > >
> > > > > --
> > > > > Cheers,
> > > > > Alwin
> > > > >
> > > >
> > >
> > >
> > --
> > Cheers,
> > Alwin
> >
>
>
--
Cheers,
Alwin



