[pve-devel] [PATCH ceph 3/4 v5] exclude ceph-osd-crimson when running dwz
Fabian Grünbichler
f.gruenbichler at proxmox.com
Mon Jan 26 13:33:21 CET 2026
On January 26, 2026 12:13 pm, Kefu Chai wrote:
> On Sat Jan 24, 2026 at 12:54 PM CST, Kefu Chai wrote:
>> On Fri Jan 23, 2026 at 9:03 PM CST, Fabian Grünbichler wrote:
>>> On January 23, 2026 8:56 am, Kefu Chai wrote:
>>>> The dwz tool tries to deduplicate debug information across binaries,
>>>> but it has limits on the number of DWARF DIEs (Debug Information Entries)
>>>> it can handle. Large C++ binaries, especially those using templates
>>>> heavily (like Ceph's crimson components), often exceed these limits.
>>>>
>>>> When building packages with DWZ enabled, the Debian packaging fails with:
>>>>
>>>> ```
>>>> dh_dwz: error: Aborting due to earlier error
>>>> ```
>>>>
>>>> So let's make ceph-crimson-osd an exception when running dwz. This
>>>> change will not be backported to tentacle, as tentacle does not build
>>>> crimson by default.
>>>
>>> FWIW, dh_dwz will be dropped from the default sequence in forky/DH-14,
>>> so we could also consider nop-ing it entirely (unless that blows up
>>> binary sizes too much?).
>>>
>>
>> Thanks for sharing! This information is critical for this change. Will
>> we release tentacle along with forky as the base system?
>>
>> I will compare the package sizes with and without dwz enabled, and get
>> back to you on the mailing list. If the size difference is small,
>> I will export DWZ=false to disable it.
>>
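[Editor's note: an empty override target in debian/rules is the usual debhelper way to skip a step for a whole package; a minimal sketch, assuming that is roughly what exporting DWZ=false would toggle in the ceph rules:]

```make
# Sketch only, not the actual ceph debian/rules: declaring an empty
# override target makes debhelper skip the dh_dwz step entirely.
override_dh_dwz:
```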
>
> The following table shows the size comparison of debug packages (dbg) compiled
> from ceph 20.2.0-pve1 with and without DWZ compression:
>
> +---------------------------------+--------------+--------------+--------------+------------+
> | Package Name                    | With DWZ (MB)| Without (MB) | Diff (MB)    | Ratio      |
> +=================================+==============+==============+==============+============+
> | ceph-base-dbg | 151.0 | 181.7 | +30.7 | 16.89% |
> | ceph-common-dbg | 949.5 | 1091.1 | +141.7 | 12.99% |
> | ceph-exporter-dbg | 8.3 | 8.6 | +0.3 | 3.94% |
> | ceph-fuse-dbg | 14.1 | 15.2 | +1.1 | 7.13% |
> | ceph-immutable-object-cache-dbg | 4.7 | 5.3 | +0.5 | 9.86% |
> | ceph-mds-dbg | 79.3 | 88.2 | +8.9 | 10.10% |
> | ceph-mgr-dbg | 33.8 | 36.8 | +2.9 | 7.99% |
> | ceph-mon-dbg | 194.2 | 220.4 | +26.2 | 11.91% |
> | ceph-osd-dbg | 3.7 | 3.9 | +0.2 | 4.53% |
> | ceph-test-dbg | 1488.9 | 1827.4 | +338.5 | 18.52% |
> | cephfs-mirror-dbg | 8.0 | 9.3 | +1.3 | 14.19% |
> | libcephfs-daemon-dbg | 0.1 | 0.1 | +0.0 | 2.51% |
> | libcephfs-proxy2-dbg | 0.1 | 0.1 | +0.0 | 1.43% |
> | libcephfs2-dbg | 12.7 | 13.5 | +0.8 | 6.02% |
> | librados2-dbg | 98.4 | 108.4 | +10.0 | 9.22% |
> | libradosstriper1-dbg | 7.1 | 7.3 | +0.2 | 2.83% |
> | librbd1-dbg | 113.7 | 150.9 | +37.3 | 24.70% |
> | librgw2-dbg | 300.4 | 332.9 | +32.5 | 9.75% |
> | libsqlite3-mod-ceph-dbg | 2.2 | 2.3 | +0.0 | 1.78% |
> | python3-cephfs-dbg | 1.3 | 1.3 | -0.0 | -0.03% |
> | python3-rados-dbg | 1.2 | 1.2 | +0.0 | 0.82% |
> | python3-rbd-dbg | 2.2 | 2.2 | -0.0 | -0.01% |
> | python3-rgw-dbg | 0.6 | 0.6 | +0.0 | 0.17% |
> | radosgw-dbg | 584.8 | 646.6 | +61.8 | 9.56% |
> | rbd-fuse-dbg | 2.1 | 2.2 | +0.1 | 6.22% |
> | rbd-mirror-dbg | 117.5 | 153.7 | +36.2 | 23.56% |
> | rbd-nbd-dbg | 3.4 | 3.6 | +0.2 | 4.28% |
> +---------------------------------+--------------+--------------+--------------+------------+
> | TOTAL | 4183.2 | 4914.8 | +731.5 | 14.88% |
> +---------------------------------+--------------+--------------+--------------+------------+
>
> While a 14.88% reduction may seem modest, it translates to over 700 MB saved per
> build. Not sure if we care about this overhead, though. Please let me
> know what you think, and I will update the patch accordingly.
That is quite a bit, so I guess we do want to keep dwz enabled for those
packages/executables where it works.
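[Editor's note: a sketch of how the per-binary exclusion could look in debian/rules; the -X pattern is an assumption based on debhelper's generic exclude option, and the actual patch may differ:]

```make
# Sketch: keep running dwz, but skip any file whose name matches the
# -X substring (here the crimson OSD binary, whose template-heavy debug
# info exceeds dwz's DWARF DIE limits).
override_dh_dwz:
	dh_dwz -Xceph-osd-crimson
```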
(We do not currently keep the ceph dbg packages on our CDN for this
reason, but since we can now split out the debug sections to reduce the
load, we should probably revisit that exclusion at some point.)