[PVE-User] ceph disk replace
lists
lists at merit.unu.edu
Tue Sep 4 12:15:04 CEST 2018
Hi,
Thanks for the interesting discussion.
However, adding the new OSD in the pve gui currently gives this error:
> create OSD on /dev/sdj (xfs)
> using device '/dev/sdl' for journal
> ceph-disk: Error: journal specified but not allowed by osd backend
> TASK ERROR: command 'ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid 1397f1dc-7d94-43ea-ab12-8f8792eee9c1 --journal-dev /dev/sdj /dev/sdl' failed: exit code 1
/dev/sdj is the new OSD, as seen by the OS, and /dev/sdl is our journal
ssd. The journal ssd currently has 7 partitions, 5GB each, holding the
journals for the 7 existing OSDs on that host, all added using the pve
gui. PVE created each partition automatically.
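
In case it is useful, this is how I check the current layout of the
journal ssd (standard tools only, nothing pve-specific; device name as
above):

  # print the GPT partition table of the journal ssd (should show 7 x 5GB)
  sgdisk -p /dev/sdl

  # show which journal partition belongs to which OSD data disk
  ceph-disk list | grep -i journal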
However, it looks as if this time pve tries to use the WHOLE ssd for a
journal..? Should it not create an 8th partition on /dev/sdl..?
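
If I end up having to do it by hand, I assume creating that 8th 5GB
journal partition would look roughly like this (sgdisk syntax and the
ceph journal partition type GUID from memory, so please correct me if I
got it wrong):

  # create partition 8, 5GB, and mark it as a ceph journal partition
  sgdisk --new=8:0:+5G --change-name=8:'ceph journal' \
         --typecode=8:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdl

  # re-read the partition table
  partprobe /dev/sdl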
MJ