[pve-devel] [PATCH v2 pve-manager 09/11] fix #4759: ceph: configure keyring for ceph-crash.service
Max Carrara
m.carrara at proxmox.com
Wed Feb 14 13:43:16 CET 2024
On 2/13/24 10:09, Max Carrara wrote:
> On 2/12/24 14:34, Fabian Grünbichler wrote:
>> On February 5, 2024 6:54 pm, Max Carrara wrote:
>>> when creating the cluster's first monitor.
>>>
>>> Signed-off-by: Max Carrara <m.carrara at proxmox.com>
>>> ---
>>> Changes v1 --> v2:
>>> * do not enable/restart `ceph-crash` anymore when creating first mon
>>> * drop changes to function `ceph_service_cmd` as they are no longer
>>> needed
>>> * create keyring for `ceph-crash` before modifying 'ceph.conf'
>>> * always set keyring for 'client.crash' section instead of only
>>> if section doesn't exist already
>>> * only modify the keyring file in `get_or_create_crash_keyring()`
>>> if the content differs from the output of `ceph auth get-or-create`
>>
>> you kinda ignored my comment about this adding yet another create keyring helper ;)
>
> Saw your other reply - will disregard this as noted ;)
>
>>
>>>  PVE/API2/Ceph/MON.pm | 17 ++++++++++++++++-
>>>  PVE/Ceph/Tools.pm    | 39 +++++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 55 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/PVE/API2/Ceph/MON.pm b/PVE/API2/Ceph/MON.pm
>>> index 1e959ef3..ae12a2d3 100644
>>> --- a/PVE/API2/Ceph/MON.pm
>>> +++ b/PVE/API2/Ceph/MON.pm
>>> @@ -459,11 +459,26 @@ __PACKAGE__->register_method ({
>>> });
>>> die $@ if $@;
>>> # automatically create manager after the first monitor is created
>>> + # and set up keyring and config for ceph-crash.service
>>> if ($is_first_monitor) {
>>> PVE::API2::Ceph::MGR->createmgr({
>>> node => $param->{node},
>>> id => $param->{node}
>>> - })
>>> + });
>>> +
>>> + eval {
>>> + PVE::Ceph::Tools::get_or_create_crash_keyring();
>>
>> this is called get_or_create, but it actually returns the path to the
>> key(ring), not the key(ring) itself.. nothing uses the return value anyway,
>> so it could also be called differently I guess and not return anything, but
>> just from the name, I'd expect the key to be returned.
>
> I agree; I initially wanted this helper to be similar to `get_or_create_ceph_admin_keyring`,
> but in this case I'll just unify the helper(s) once and for all in v3.
>
> No need to beat around the bush if I'm gonna unify them anyway, so might
> as well do it sooner rather than later ;)
Now that I've taken a more detailed look at all the existing code, in particular
the usages of `ceph auth get-or-create` (via RADOS) and `ceph-authtool` (as a
command), it's become clear to me that there's no actual benefit in creating an
auth-helper in that regard *at all*.

`ceph auth get-or-create` is only used via RADOS (`$rados->mon_command(...)`),
and `ceph-authtool` is only invoked in three places, each with a separate use
case.

So, creating the keyring for 'client.crash' is (yet) an(other) exception rather
than something that would benefit from a helper; I'll think of a different
solution for this case instead.
>
>>
>>> + };
>>> + warn "Unable to configure keyring for ceph-crash.service: $@" if $@;
>>> +
>>> + PVE::Cluster::cfs_lock_file('ceph.conf', undef, sub {
>>> + my $cfg = cfs_read_file('ceph.conf');
>>> +
>>> + $cfg->{'client.crash'}->{keyring} = '/etc/pve/ceph/$cluster.$name.keyring';
>>
>> I guess this doesn't make anything worse (since we're starting from
>> "ceph-crash is totally broken"), but if querying or creating the keyring
>> failed, does it make sense to reference it in the config?
>
> I guess in this case it doesn't, but we would need some kind of way to
> re-init this part of the configuration in case the 'keyring' key doesn't
> get set here.
>
> Originally I planned to do this in a separate series to keep the scope of
> this series focused on bug #4759, so perhaps it's best (at least for the
> time being) to just not set the 'keyring' at all if querying or creating
> the crash key fails.
>
> So, I would change it to that in v3 and then later on supply a separate
> series for the 're-init' part, if that's alright.
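>
> A rough, untested sketch of what I mean for v3 (naming not final):
>
>     my $crash_keyring_created = eval {
>         PVE::Ceph::Tools::get_or_create_crash_keyring();
>         1;
>     };
>     warn "Unable to configure keyring for ceph-crash.service: $@"
>         if !$crash_keyring_created;
>
>     PVE::Cluster::cfs_lock_file('ceph.conf', undef, sub {
>         my $cfg = cfs_read_file('ceph.conf');
>
>         # only reference the keyring if querying/creating it succeeded
>         $cfg->{'client.crash'}->{keyring} = '/etc/pve/ceph/$cluster.$name.keyring'
>             if $crash_keyring_created;
>
>         cfs_write_file('ceph.conf', $cfg);
>     });
>     die $@ if $@;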
>
>>
>>> +
>>> + cfs_write_file('ceph.conf', $cfg);
>>> + });
>>> + die $@ if $@;
>>
>> we could move this whole part up to where we do the monitor changes, and
>> only lock and read and write the config once..
>
> Good point, will be done in v3.
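>
> Roughly, i.e. folding the 'client.crash' bit into the cfs_lock_file block
> that already handles the monitor's config change, so the config is only
> locked, read and written once (sketch only; the monitor-related part is
> elided/assumed here):
>
>     PVE::Cluster::cfs_lock_file('ceph.conf', undef, sub {
>         my $cfg = cfs_read_file('ceph.conf');
>
>         # ... existing monitor-related changes to $cfg ...
>
>         # piggy-back the ceph-crash section on the same lock/read/write cycle
>         $cfg->{'client.crash'}->{keyring} = '/etc/pve/ceph/$cluster.$name.keyring'
>             if $is_first_monitor;
>
>         cfs_write_file('ceph.conf', $cfg);
>     });
>     die $@ if $@;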
>
> Thanks again for the thorough reviews! :)
>
>>
>>> }
>>> };
>>>
>>> diff --git a/PVE/Ceph/Tools.pm b/PVE/Ceph/Tools.pm
>>> index 273a3eb6..02a932e3 100644
>>> --- a/PVE/Ceph/Tools.pm
>>> +++ b/PVE/Ceph/Tools.pm
>>> @@ -18,7 +18,9 @@ my $ccname = 'ceph'; # ceph cluster name
>>> my $ceph_cfgdir = "/etc/ceph";
>>> my $pve_ceph_cfgpath = "/etc/pve/$ccname.conf";
>>> my $ceph_cfgpath = "$ceph_cfgdir/$ccname.conf";
>>> +my $pve_ceph_cfgdir = "/etc/pve/ceph";
>>>
>>> +my $pve_ceph_crash_key_path = "$pve_ceph_cfgdir/$ccname.client.crash.keyring";
>>> my $pve_mon_key_path = "/etc/pve/priv/$ccname.mon.keyring";
>>> my $pve_ckeyring_path = "/etc/pve/priv/$ccname.client.admin.keyring";
>>> my $ckeyring_path = "/etc/ceph/ceph.client.admin.keyring";
>>> @@ -37,12 +39,14 @@ my $ceph_service = {
>>>
>>> my $config_values = {
>>> ccname => $ccname,
>>> + pve_ceph_cfgdir => $pve_ceph_cfgdir,
>>> ceph_mds_data_dir => $ceph_mds_data_dir,
>>> long_rados_timeout => 60,
>>> };
>>>
>>> my $config_files = {
>>> pve_ceph_cfgpath => $pve_ceph_cfgpath,
>>> + pve_ceph_crash_key_path => $pve_ceph_crash_key_path,
>>> pve_mon_key_path => $pve_mon_key_path,
>>> pve_ckeyring_path => $pve_ckeyring_path,
>>> ceph_bootstrap_osd_keyring => $ceph_bootstrap_osd_keyring,
>>> @@ -415,6 +419,41 @@ sub get_or_create_admin_keyring {
>>> return $pve_ckeyring_path;
>>> }
>>>
>>> +# requires connection to existing monitor
>>> +sub get_or_create_crash_keyring {
>>> + my ($rados) = @_;
>>> +
>>> + if (!defined($rados)) {
>>> + $rados = PVE::RADOS->new();
>>> + }
>>> +
>>> + my $output = $rados->mon_command({
>>> + prefix => 'auth get-or-create',
>>> + entity => 'client.crash',
>>> + caps => [
>>> + mon => 'profile crash',
>>> + mgr => 'profile crash',
>>> + ],
>>> + format => 'plain',
>>> + });
>>> +
>>> + if (! -d $pve_ceph_cfgdir) {
>>> + File::Path::make_path($pve_ceph_cfgdir);
>>> + }
>>> +
>>> + if (-f $pve_ceph_crash_key_path) {
>>> + my $contents = PVE::Tools::file_get_contents($pve_ceph_crash_key_path);
>>> +
>>> + if ($contents ne $output) {
>>> + PVE::Tools::file_set_contents($pve_ceph_crash_key_path, $output);
>>> + }
>>> + } else {
>>> + PVE::Tools::file_set_contents($pve_ceph_crash_key_path, $output);
>>> + }
>>> +
>>> + return $pve_ceph_crash_key_path;
>>> +}
>>> +
>>> # get ceph-volume managed osds
>>> sub ceph_volume_list {
>>> my $result = {};
>>> --
>>> 2.39.2