[PVE-User] Proxmox VE 8.1 released!
Jan Vlach
janus at volny.cz
Fri Nov 24 15:24:24 CET 2023
as per the https://github.com/openzfs/zfs/issues/15526 discussion,
this is the workaround for the silent data corruption:
echo 0 > /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
TL;DR version: this parameter (https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-dmu-offset-next-sync) had its default switched from 0 to 1 in OpenZFS 2.1.5, trading safety for performance.
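To check the current value and make the workaround survive a reboot, something like the following should do (just a sketch: the modprobe.d file name is my own, and update-initramfs -u only matters when the zfs module is loaded from the initramfs, e.g. with root on ZFS):

# print the current value (1 is the default since OpenZFS 2.1.5)
cat /sys/module/zfs/parameters/zfs_dmu_offset_next_sync
# persist the workaround as a module option
echo "options zfs zfs_dmu_offset_next_sync=0" > /etc/modprobe.d/zfs-workaround.conf
update-initramfs -u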
reproducer script is here:
https://github.com/openzfs/zfs/issues/15526#issuecomment-1824966856
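For a rough idea of what it does (this is only a sketch of the principle, not the actual script from that comment): copy a file in a chain and checksum every link; a mismatch means a copy silently picked up holes full of zeros.

# sketch: chain-copy a random file and verify each copy's checksum
prefix="reproducer_$$"
dd if=/dev/urandom of=${prefix}_0 bs=1M count=1
expected=$(sha256sum ${prefix}_0 | cut -d' ' -f1)
for i in $(seq 1 1000); do
    cp ${prefix}_$((i-1)) ${prefix}_$i
    actual=$(sha256sum ${prefix}_$i | cut -d' ' -f1)
    [ "$actual" = "$expected" ] || echo "CORRUPTION in ${prefix}_$i"
done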
I've managed to hit the data corruption with 8 simultaneous instances of the script. Allegedly one needs a beefy computer to hit the bug, but I've also managed to hit it on a 2-core AMD Fujitsu Futro thin client running OPNsense (after a couple of tries).
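To get several instances going at once (assuming the script is saved as reproducer.sh), something like:

for n in $(seq 8); do ./reproducer.sh & done; wait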
I didn't hit the behavior on Debian 11 (PVE 7), but managed to reproduce it on Debian 12 (PVE 8), FreeBSD 13, and FreeBSD 14.
Illumos' ZFS is fine.
It seems that the bug gets exposed more now, because both FreeBSD's cp and coreutils' cp (versions after 8.32) use copy_file_range.
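A quick way to see that on Linux (assuming strace is installed; src and dst are placeholder file names):

cp --version | head -n1   # 9.x means the newer copy path
strace -e trace=copy_file_range,lseek cp src dst 2>&1 | grep -E 'copy_file_range|SEEK_(DATA|HOLE)'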
JV