[PVE-User] Analysis of free space...
Alwin Antreich
alwin at antreich.com
Wed Oct 1 16:03:40 CEST 2025
September 30, 2025 at 6:55 PM, "Marco Gaiarin" <gaio at lilliput.linux.it> wrote:
> Mandi! Matthieu Dreistadt via pve-user
> In chel di` si favelave...
>
>
> > you can check "zfs list -o space", which will give you a more detailed
> > view of what is using the space:
>
>
> [...]
>
> > Used = overall used
> > Usedsnap = Used by Snapshots
> > Usedds = Used Disk Space (not counting snapshots, only live data)
> > Usedchild = Used by datasets/zvols further down in the same path (in my
> > example, rpool has the same amount of Used and Usedchild space, since
> > there is nothing directly inside of rpool itself)
>
>
> Thanks for the hint. Anyway:
>
> root at lamprologus:~# zfs list -o space | grep ^rpool-data/
> rpool-data/vm-100-disk-0 11.6T 1.07T 0B 1.07T 0B 0B
> rpool-data/vm-100-disk-1 11.6T 1.81T 0B 1.81T 0B 0B
> rpool-data/vm-100-disk-10 11.6T 1.42T 0B 1.42T 0B 0B
> rpool-data/vm-100-disk-11 11.6T 1.86T 0B 1.86T 0B 0B
> rpool-data/vm-100-disk-12 11.6T 1.64T 0B 1.64T 0B 0B
> rpool-data/vm-100-disk-13 11.6T 2.23T 0B 2.23T 0B 0B
> rpool-data/vm-100-disk-14 11.6T 1.96T 0B 1.96T 0B 0B
> rpool-data/vm-100-disk-15 11.6T 1.83T 0B 1.83T 0B 0B
> rpool-data/vm-100-disk-16 11.6T 1.89T 0B 1.89T 0B 0B
> rpool-data/vm-100-disk-17 11.6T 2.05T 0B 2.05T 0B 0B
> rpool-data/vm-100-disk-18 11.6T 3.39T 0B 3.39T 0B 0B
> rpool-data/vm-100-disk-19 11.6T 3.40T 0B 3.40T 0B 0B
> rpool-data/vm-100-disk-2 11.6T 1.31T 0B 1.31T 0B 0B
> rpool-data/vm-100-disk-20 11.6T 3.36T 0B 3.36T 0B 0B
> rpool-data/vm-100-disk-21 11.6T 2.50T 0B 2.50T 0B 0B
> rpool-data/vm-100-disk-22 11.6T 3.22T 0B 3.22T 0B 0B
> rpool-data/vm-100-disk-23 11.6T 2.73T 0B 2.73T 0B 0B
> rpool-data/vm-100-disk-24 11.6T 2.53T 0B 2.53T 0B 0B
> rpool-data/vm-100-disk-3 11.6T 213K 0B 213K 0B 0B
> rpool-data/vm-100-disk-4 11.6T 213K 0B 213K 0B 0B
> rpool-data/vm-100-disk-5 11.6T 1.48T 0B 1.48T 0B 0B
> rpool-data/vm-100-disk-6 11.6T 1.35T 0B 1.35T 0B 0B
> rpool-data/vm-100-disk-7 11.6T 930G 0B 930G 0B 0B
> rpool-data/vm-100-disk-8 11.6T 1.26T 0B 1.26T 0B 0B
> rpool-data/vm-100-disk-9 11.6T 1.30T 0B 1.30T 0B 0B
>
> It seems that I really was able to put 3.40T of real data on a 2T volume...
Hm... I assume you're running a RAIDz1?
See the link: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_raid_size_space_usage_redundancy
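The effect described in that section can be sketched numerically. This is a minimal sketch, not taken from the thread: it assumes ashift=12 (4 KiB sectors), RAIDz1 (one parity sector per row), blocks small enough to fit a single RAIDz row, and the RAIDZ rule that each allocation is padded up to a multiple of parity+1 sectors.

```python
# Hedged sketch: estimate RAIDZ allocation overhead for a zvol.
# Assumptions (not from the thread): ashift=12 (4 KiB sectors),
# RAIDz1 (parity=1), block fits in one RAIDZ row, and allocations
# are padded to a multiple of (parity + 1) sectors.

def raidz_alloc_sectors(volblocksize, ashift=12, parity=1):
    """Sectors physically allocated per logical block on RAIDZ."""
    sector = 1 << ashift
    data = -(-volblocksize // sector)   # ceil division: data sectors
    total = data + parity               # add parity sectors
    mult = parity + 1                   # pad to a multiple of parity+1
    return -(-total // mult) * mult

def overhead(volblocksize, ashift=12, parity=1):
    """Ratio of physically allocated bytes to logical bytes."""
    sector = 1 << ashift
    alloc = raidz_alloc_sectors(volblocksize, ashift, parity) * sector
    return alloc / volblocksize

# 8 KiB volblocksize on RAIDz1: 2 data + 1 parity sectors,
# padded to 4 sectors = 16 KiB allocated per 8 KiB written.
print(overhead(8 * 1024))    # -> 2.0
# 16 KiB volblocksize: 4 data + 1 parity, padded to 6 sectors.
print(overhead(16 * 1024))   # -> 1.5
```

Under these assumptions, a small volblocksize on RAIDz1 can roughly double the reported USEDDS relative to the logical data, which would be broadly consistent with a ~2T zvol showing ~3.4T used; a larger volblocksize reduces the padding overhead.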
Cheers,
Alwin