[PVE-User] pve-cluster won't start with large ssh_known_hosts
Derek W. Poon
derekp+pve at ece.ubc.ca
Wed Nov 30 08:40:54 CET 2011
On 2011-11-29, at 8:58 PM, Dietmar Maurer wrote:
>> I have tracked this down to /usr/bin/pvecm, which calls
>> PVE::Cluster::ssh_merge_known_hosts(...),
>> which calls PVE::Tools::file_get_contents($sshglobalknownhosts, 128*1024),
>> which calls PVE::Tools::safe_read_from(...), which dies because the maximum
>> length is exceeded.
>
> Why is that file that large?
Dietmar,
In our department, we use cfengine to distribute an ssh_known_hosts file to all Linux machines. Each host takes two lines, one for the RSA key and one for the DSA key. Our file is currently 295172 bytes, or 566 lines, corresponding to roughly 283 hosts, which is not unreasonable, in my opinion. Our computers can process that amount of data in a negligible amount of time.
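For reference, the arithmetic above can be checked directly against the 128 KiB cap (a minimal sketch; the figures are simply the ones quoted in this message):

```python
# Figures quoted above: 283 hosts, two key lines each, 295172 bytes total.
hosts = 283
lines = hosts * 2          # one RSA line + one DSA line per host
file_size = 295172         # bytes, as reported
limit = 128 * 1024         # the cap passed to file_get_contents

print(lines)               # 566 lines
print(file_size > limit)   # True: 295172 > 131072, so the read is rejected
```

So the file exceeds the cap by roughly 160 KiB, well beyond any rounding.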
Is there a technical reason for the 128 KiB limit, or is it an arbitrary restriction? If it is the latter, then I suggest removing it, as there is no reason for Proxmox to introduce this failure mode.
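To make the failure mode concrete, here is a minimal Python sketch of a size-capped read in the style of the call chain above; the function name and error message are illustrative, not the actual Perl implementation in PVE::Tools:

```python
# Hypothetical sketch of a size-capped read, modeled loosely on
# PVE::Tools::safe_read_from; not the real Perl code.
def safe_read(path, max_len=128 * 1024):
    with open(path, "rb") as f:
        data = f.read(max_len + 1)  # read one byte past the cap to detect overflow
    if len(data) > max_len:
        # This is the point at which pvecm would die on our known_hosts file.
        raise RuntimeError(f"{path}: maximum read size ({max_len} bytes) exceeded")
    return data
```

With a 295172-byte ssh_known_hosts, any read capped at 131072 bytes fails before the contents are ever parsed, which matches the startup failure described above.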
Derek