[PVE-User] How to restart ceph-mon?

Marco Gaiarin gaio at sv.lnf.it
Fri Feb 21 15:29:08 CET 2020


Hello, Alwin Antreich!
  In that message you wrote...

> Yes, that looks strange. But as said before, it is deprecated to use
> IDs. Best destroy and re-create the MON one-by-one. The default command
> will create them with the hostname as ID. Then this phenomenon should
> disappear as well.

Done, via web interface, with a little glitch.
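
For the record, I suppose the CLI equivalent is roughly the following (a sketch; 'pveceph destroymon/createmon' is the older syntax, newer PVE releases use 'pveceph mon destroy' / 'pveceph mon create'):

	# on the node whose mon still has the old numeric ID
	pveceph destroymon <old-mon-id>
	pveceph createmon		# the default re-creates the mon with the hostname as ID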

I've stopped and destroyed the monitor, but that does not stop (and destroy)
the manager, so creating a new mon via the web interface led to:

 Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@hulk.service -> /lib/systemd/system/ceph-mon@.service.
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
 INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
 INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring
 INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-osd/ceph.keyring
 INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-rgw/ceph.keyring
 INFO:ceph-create-keys:Key exists already: /var/lib/ceph/bootstrap-mds/ceph.keyring
 INFO:ceph-create-keys:Talking to monitor...
 TASK ERROR: ceph manager directory '/var/lib/ceph/mgr/ceph-hulk' already exists

probably because the task also tries to fire up a mgr, which had already
been created.
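
Next time, I suppose, the cleaner sequence is to drop the node's mgr as well before re-creating (a sketch, same older pveceph syntax; 'hulk' is just this node's name):

	pveceph destroymon hulk		# drop the monitor
	pveceph destroymgr hulk		# drop the manager too; this should also clear /var/lib/ceph/mgr/ceph-hulk
	pveceph createmon		# re-create the mon; the task should then be able to set up the mgr again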


Anyway, nothing changed. On a rebooted node:

	root@capitanmarvel:~# ps aux | grep ceph[-]mon
	ceph        2725  0.5  0.2 522224 98428 ?        Ssl  feb18  21:14 /usr/bin/ceph-mon -i capitanmarvel --pid-file /var/run/ceph/mon.capitanmarvel.pid -c /etc/ceph/ceph.conf --cluster ceph --setuser ceph --setgroup ceph

on a node where I ran 'systemctl restart ceph-mon@<ID>.service':

	root@hulk:~# ps aux | grep ceph[-]mon
	ceph     4166380  0.8  0.1 466648 55676 ?        Ssl  15:19   0:03 /usr/bin/ceph-mon -f --cluster ceph --id hulk --setuser ceph --setgroup ceph
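
To see where the two invocations come from, I suppose one can ask systemd which unit owns each PID and what ExecStart it carries (PIDs taken from the ps output above):

	systemctl status 2725				# on capitanmarvel: the unit that spawned the old-style mon
	systemctl status 4166380			# on hulk: expected to be ceph-mon@hulk.service
	systemctl cat ceph-mon@capitanmarvel.service	# shows the ExecStart actually configured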


Anyway, the whole cluster is healthy and works as expected:

 root@hulk:~# ceph -s
  cluster:
    id:     8794c124-c2ec-4e81-8631-742992159bd6
    health: HEALTH_OK
 
  services:
    mon: 5 daemons, quorum blackpanther,capitanmarvel,deadpool,hulk,thor
    mgr: blackpanther(active), standbys: capitanmarvel, deadpool, thor, hulk
    osd: 12 osds: 12 up, 12 in
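
If I want the remaining mons to pick up the new-style invocation without rebooting, I suppose a rolling restart, one node at a time, is enough (a sketch; wait for quorum before moving to the next node):

	systemctl restart ceph-mon@$(hostname).service
	ceph quorum_status --format json-pretty		# check that the mon re-joined the quorum
	ceph -s						# and that health is back to HEALTH_OK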

-- 
dott. Marco Gaiarin				        GNUPG Key ID: 240A3D66
  Associazione ``La Nostra Famiglia''          http://www.lanostrafamiglia.it/
  Polo FVG   -   Via della Bontà, 7 - 33078   -   San Vito al Tagliamento (PN)
  marco.gaiarin(at)lanostrafamiglia.it   t +39-0434-842711   f +39-0434-842797

		Donate your 5 PER MILLE to LA NOSTRA FAMIGLIA!
      http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000
	(tax code 00307430132, category ONLUS or RICERCA SANITARIA)


