This is happening on every reboot, which is most concerning.

I asked over on the zfsonlinux list and got the following answer:

> The details would vary depending on which system components they use, but as I recall:
> - if you create a pool with by-id, it will store the by-id devices in zpool.cache
> - during boot, zfs will try to use zpool.cache to import devices
> - if for whatever reason zpool.cache is not available during the import phase (e.g. a stale initrd), or the by-id links changed (I've seen this on some versions of Ubuntu, forgot which, where the links as seen by initramfs differ from the ones seen by userland tools after boot completes), some versions of initramfs will try to use whatever is in /dev directly
> - the problem above might happen if you load the zfs module from initramfs (i.e. using zfs root), but should not happen when you use it normally (i.e. you do NOT have zfs-initramfs installed)
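
Based on that answer, one quick sanity check (assuming initramfs-tools, as shipped on Ubuntu/Debian) is to see whether the current initrd actually carries a copy of zpool.cache:

  # list the initrd contents and look for a bundled zpool.cache
  lsinitramfs /boot/initrd.img-$(uname -r) | grep zpool.cache

If nothing shows up there, the initramfs import would have to fall back to scanning /dev directly, which matches the failure mode described above.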
Since I'm not using zfs for boot, can I safely uninstall zfs-initramfs? (I've sketched the removal commands I'd try at the end of this message.)

I set up a six-disk pool (three mirror vdevs) plus a SLOG and L2ARC on an SSD, using the /dev/disk/by-id names:

zpool status
  pool: zfs_vm
 state: ONLINE
  scan: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        zfs_vm                                              ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901      ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX81E81AFWJ4      ONLINE       0     0     0
          mirror-1                                          ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV240      ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV027      ONLINE       0     0     0
          mirror-2                                          ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU903      ONLINE       0     0     0
            ata-WDC_WD6000HLHX-01JJPV0_WD-WXB1E81EFFT2      ONLINE       0     0     0
        logs
          ata-INTEL_SSDSC2BW120H6_CVTR5146027K120AGN-part1  ONLINE       0     0     0
        cache
          ata-INTEL_SSDSC2BW120H6_CVTR5146027K120AGN-part2  ONLINE       0     0     0

After a couple of reboots I see that all the devices have changed to the /dev/sd* names!

zpool status
  pool: zfs_vm
 state: ONLINE
  scan: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        zfs_vm                                              ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            sda                                             ONLINE       0     0     0
            sdb                                             ONLINE       0     0     0
          mirror-1                                          ONLINE       0     0     0
            sdc                                             ONLINE       0     0     0
            sdd                                             ONLINE       0     0     0
          mirror-2                                          ONLINE       0     0     0
            sde                                             ONLINE       0     0     0
            sdf                                             ONLINE       0     0     0
        logs
          sdh1                                              ONLINE       0     0     0
        cache
          ata-INTEL_SSDSC2BW120H6_CVTR5146027K120AGN-part2  ONLINE       0     0     0

I changed them back with:

  zpool export zfs_vm
  zpool import -d /dev/disk/by-id zfs_vm

But I have no idea how it happened in the first place.
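
For completeness, the pool was created along these lines (reconstructed from the layout above; the exact options may have differed):

  zpool create zfs_vm \
    mirror /dev/disk/by-id/ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU901 \
           /dev/disk/by-id/ata-WDC_WD6000HLHX-01JJPV0_WD-WX81E81AFWJ4 \
    mirror /dev/disk/by-id/ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV240 \
           /dev/disk/by-id/ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZV027 \
    mirror /dev/disk/by-id/ata-WDC_WD6000HLHX-01JJPV0_WD-WX41E81ZU903 \
           /dev/disk/by-id/ata-WDC_WD6000HLHX-01JJPV0_WD-WXB1E81EFFT2 \
    log   /dev/disk/by-id/ata-INTEL_SSDSC2BW120H6_CVTR5146027K120AGN-part1 \
    cache /dev/disk/by-id/ata-INTEL_SSDSC2BW120H6_CVTR5146027K120AGN-part2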
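
In the meantime, to keep the by-id names across reboots, I'm planning to pin the import search path. As I understand the Debian/Ubuntu zfsonlinux packaging, the boot scripts read /etc/default/zfs, so something like this should work (variable name per that packaging; treat it as an assumption on other distros):

  # /etc/default/zfs -- have the boot-time import look in by-id
  ZPOOL_IMPORT_PATH="/dev/disk/by-id"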
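
And if dropping zfs-initramfs is indeed safe here (no zfs root), I assume removal is just the usual (Ubuntu/Debian; the update-initramfs run rebuilds the initrd without the zfs hooks):

  sudo apt-get remove zfs-initramfs
  sudo update-initramfs -u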