[pve-devel] [PATCH qemu-server 14/16] introduce QSD module for qemu-storage-daemon functionality
Laurent GUERBY
laurent at guerby.net
Mon Oct 20 13:27:50 CEST 2025
Hi,
On Mon, 2025-10-20 at 11:49 +0200, Fiona Ebner wrote:
> Hi,
>
> On 20.10.25 at 10:57 AM, Laurent GUERBY wrote:
> > On Tue, 2025-10-14 at 16:39 +0200, Fiona Ebner wrote:
> > > For now, supports creating FUSE exports based on Proxmox VE drive
> > > definitions. NBD exports could be added later. In preparation to allow
> > > qcow2 for TPM state volumes. A QEMU storage daemon instance is
> > > associated to a given VM.
> >
> > Hi,
> >
> > I wonder if this addition of qemu-storage-daemon with fuse would be
> > able to solve the following issue I just opened:
> >
> > https://bugzilla.proxmox.com/show_bug.cgi?id=6953
> >
> > "cannot set set-require-min-compat-client to reef, luminous clients
> > from kernel rbd due to VM with TPM /dev/rbd"
> >
> > The rbd kernel module's feature level is stuck at luminous
>
> Do you know why? Or if there is any interest to change that?
I don't know (I'm not a ceph nor a kernel developer). Looking at the
latest ceph documentation, it points to 4.19 as the minimum kernel
version:
https://docs.ceph.com/en/latest/start/os-recommendations/#linux-kernel
This is consistent with the "4.17" comment on the following ceph
feature:
https://github.com/ceph/ceph/blob/main/src/include/ceph_features.h#L157
(nothing more recent than 4.17)
The Linux kernel rbd driver code doesn't change much:
https://github.com/torvalds/linux/commits/master/drivers/block/rbd.c
I presume this is for maximum compatibility with potentially old-ish
userspace.
I also don't know whether the rbd kernel module could advertise more
recent ceph features and fall back to the luminous level in some way.
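For reference, the currently required minimum client release and what
the connected clients report can be checked with something like the
following (from memory, so the exact commands may need double-checking):
ceph osd get-require-min-compat-client   # currently required minimum client release
ceph features                            # releases/features reported by connected clients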
>
> > and swtpm's use of
> > kernel /dev/rbd limits the usable features of the whole proxmox/ceph
> > cluster as soon as a VM with TPM is created on the cluster.
> >
> > If possible using qemu-storage-daemon to export the rbd image to swtpm
> > would still allow proxmox to leave the TPM disk on ceph while
> > benefiting from recent ceph features.
>
> It would be possible, but it rather sounds like the real issue is that
> the kernel module is outdated. And for container volumes, krbd is also
> always used, so it would need to be adapted there too. Otherwise, you
> will still break container volumes when bumping the minimum required
> client version.
Good catch, I don't use containers on Proxmox VE so I didn't think of
that.
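For the VM/TPM side, the kind of setup I had in mind with
qemu-storage-daemon is roughly the following sketch (pool/image names
and the mountpoint path are just made up by me, so the actual patch
series surely does it differently):
# the FUSE mountpoint has to be an existing regular file
mkdir -p /run/qemu-storage-daemon
touch /run/qemu-storage-daemon/vm-100-tpmstate.raw
qemu-storage-daemon \
  --blockdev driver=rbd,node-name=tpm0,pool=rbd,image=vm-100-disk-1 \
  --export type=fuse,id=tpm0-export,node-name=tpm0,mountpoint=/run/qemu-storage-daemon/vm-100-tpmstate.raw,writable=on
swtpm would then use the exported file instead of a /dev/rbd mapping,
so the librbd (userspace) client features would apply instead of the
kernel ones.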
Maybe it would be wise to ask the ceph developers what they think
about it, as ceph users outside of Proxmox will be affected as well.
For example, the following reef-based feature is documented as a
performance improvement (and "highly recommended"), and the container
world is important nowadays:
https://docs.ceph.com/en/latest/rados/operations/balancer/#modes
"""
upmap-read. This balancer mode combines optimization benefits of both
upmap and read mode. Like in read mode, upmap-read makes use of
pg-upmap-primary. As such, only Reef and later clients are compatible.
For more details about client compatibility, see Operating the Read
(Primary) Balancer.
upmap-read is highly recommended for achieving the upmap mode’s
offering of balanced PG distribution as well as the read mode’s
offering of balanced reads.
"""
As a side note, I had an imbalance in our small Proxmox/ceph cluster
(8 nodes and 57 OSDs), and lowering upmap_max_deviation from 5 to 2 got
rid of it:
ceph config set mgr mgr/balancer/upmap_max_deviation 2
MIN/MAX VAR: 0.85/1.21 STDDEV: 5.38 # before, with the default of 5
MIN/MAX VAR: 0.93/1.06 STDDEV: 1.56 # after setting it to 2
It also got rid of warnings about some OSDs being above 0.8 usage
(while the average usage was at 0.6).
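If I remember correctly, the MIN/MAX VAR and STDDEV values above come
from the summary line of ceph osd df, so the effect can be checked with
something like:
ceph osd df              # per-OSD usage, summary line with MIN/MAX VAR and STDDEV
ceph balancer status     # active balancer mode
ceph config get mgr mgr/balancer/upmap_max_deviation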
So it might be interesting to add Proxmox VE documentation and maybe
tooling for this parameter, as I assume most Proxmox users will have
small-ish clusters and potentially hit imbalance issues like us:
https://docs.ceph.com/en/latest/rados/operations/balancer/#throttling
Let me know if it's worth opening a separate bugzilla entry for this.
Sincerely,
Laurent GUERBY
>
> > PS: ZFS over iSCSI isn't usable for TPM either
> > https://bugzilla.proxmox.com/show_bug.cgi?id=3662
> > TPM disfunctional with ZFS over iSCSI
>
> Yes, I'm planning to add that later; it's hopefully rather easy once the
> infrastructure is in place.
>
> Best Regards,
> Fiona