[pve-devel] proxmox install script
Fabian Grünbichler
f.gruenbichler at proxmox.com
Wed Aug 16 12:13:32 CEST 2017
On Wed, Aug 16, 2017 at 11:35:22AM +0200, Nils Privat wrote:
> I've got some questions about the proxmox install script:
> https://git.proxmox.com/?p=pve-installer.git;a=blob_plain;f=proxinstall;hb=HEAD
>
> 1) line 21: shouldn't that be 5.0?
true, but that var is only used for testmode during development (where I
switched to simply testing the full ISO in a VM). fixed anyway ;)
> 2) line 781: small typo 'install'
fixed ;)
>
> 3) A question regarding the function "zfs_create_rpool" (line 761):
> Wouldn't it be better to define/check "compress", "checksum", and
> "copies" first and then create the pool with all the properties,
> similar to chapter 2.3 of
> https://github.com/zfsonlinux/zfs/wiki/Debian-Stretch-Root-on-ZFS,
> rather than setting each property step by step after creation?
> So what I mean is something like this, after you specify the ashift in line 766:
>
> ashift ... line 766 ...
> # disable atime during install
> $cmd .= " -O atime=off";
>
> # compression
> $cmd .= " -O compression=$config_options->{compress}";
> ... etc.
>
> The same for checksum and copies, and after collecting everything into $cmd
> we run at the end: zpool create ... $cmd ...
> So the last entries/lines in that function with "my $value" are obsolete.
the end result is the same, but you get more meaningful debug logs in
case one of the options fails.
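for illustration, the two variants boil down to something like this (just a
sketch with placeholder values and devices, not the installer's actual
commands):

    # a) everything at pool creation time, as in the howto:
    zpool create -f -o ashift=12 -O atime=off -O compression=on \
        -O checksum=on -O copies=1 rpool /dev/sdX

    # b) what the installer effectively does: create first, then set each
    #    property with its own command, so a failing property shows up as a
    #    separate entry in the debug log:
    zpool create -f -o ashift=12 rpool /dev/sdX
    zfs set compression=on rpool
    zfs set checksum=on rpool
    zfs set copies=1 rpool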
>
>
> 4) The link above on GitHub (Debian Stretch Root on ZFS) also suggests
> always using '/dev/disk/by-id/*' and not '/dev/sd*'.
> But when I run the installer/boot from the ISO to install proxmox I get a
> list of disks like '/dev/sd*', and when I run zpool status the rpool is
> on /dev/sda ... Normally, on non-root pools, we can change that via
> export/import, but on an rpool?! So did I do something wrong, or does the
> install script use /dev/sd*?
the installer uses /dev/* because ZFS does not like the /dev/disk/by-id
paths. they are there in our installer environment, and you can use them
with all kinds of programs, but zpool create fails somewhere along the
line, and I haven't been able to figure out the root cause. I suspect it
has something to do with the way our ISO sets up the device nodes before
chrooting into the actual installer environment, but that is a rather
complicated beast to untangle.
you can re-import the rpool once, using a different device search path
directory, just like you would for a regular pool (e.g., using a
live CD - don't forget to regenerate the initramfs if you are using a
zpool.cache file!), but all of this is mainly cosmetic (ZFS does not
care about this at all - it just gives the admin easier to
read/identify/locate/... device names).
if you want to investigate this further, you are more than welcome :)
TL;DR: I would like to get this fixed, but it is not very high up on my
list of TODOs. the workaround is easy enough if you really want it (see the
sketch below).
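for reference, the one-time re-import from a live environment looks roughly
like this (just a sketch - pool name and paths are the usual defaults, adapt
as needed):

    # boot a live system with ZFS support, then import the root pool via the
    # by-id device directory (the altroot is only needed if you want to
    # chroot in afterwards, e.g. to regenerate the initramfs):
    zpool import -d /dev/disk/by-id -R /mnt rpool
    zpool export rpool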
>
> 5) Regarding the function 'partition_bootable_zfs_disk' on line 913:
> As far as I can see, the partition schema isn't like the one recommended
> in the link above (see chapter 2.2). There I don't see a -part9 on root
> pools. And just for your information: there are currently some
> discussions on the ZoL GitHub about maybe removing the -part9 and adding
> a new pool property.
our installer (and ZFS support in PVE) predates that howto and uses the
classical ZFS partition numbering, but adds a BIOS boot partition. the
numbering is irrelevant for all the tools involved.
the extra reserved partition is not needed, but it also does not hurt.
we might remove it some time in the future, if upstream really drops it.
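for illustration, the resulting layout corresponds roughly to something like
the following sgdisk calls (illustrative only, not the installer's exact
invocation; type codes: EF02 = BIOS boot, BF01 = ZFS, BF07 = reserved):

    sgdisk -a1 -n1:34:2047 -t1:EF02 /dev/sdX   # ~1M BIOS boot partition
    sgdisk -n9:-8M:0 -t9:BF07 /dev/sdX         # 8M reserved partition (-part9)
    sgdisk -n2:0:0 -t2:BF01 /dev/sdX           # rest of the disk for the rpool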
>
> 6) just a dumb question on line 944: my $os_size = $hdsize - 1024 - 1024*8;
> I guess the 8M is the -part9, and what is the 1024? Shouldn't it be
> 2048 (2K), because the legacy/BIOS -part1 is 2K?
the BIOS boot partition is not 2K, it is 1M (the 2048 is referring to
sectors, which are normally 512b). none of the people complaining about
lack of 4Kn support in the installer were able to get back to us with
test feedback, and we don't have such disks in our test lab, so I am not
sure how spectacular the failure will be if you attempt to install on
4Kn disks ;)
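in other words, assuming $hdsize is given in KiB (which the expression itself
suggests), the calculation works out to:

    # os_size = hdsize - 1024 KiB (1M BIOS boot) - 1024*8 KiB (8M reserved -part9)
    # e.g. for a 100 GiB disk: 104857600 - 1024 - 8192 = 104848384 KiB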
if you have any concrete ideas how to improve the installer (or its ZFS
setup), feel free to propose changes here or in our bugzilla. you are of
course also welcome to implement changes and send us patches[1], but I
would recommend talking about your plans first if it is more than a
small bug fix.
1: https://pve.proxmox.com/wiki/Developer_Documentation