[pve-devel] [RFC] towards automated integration testing
s.hanreich at proxmox.com
Mon Oct 16 13:20:18 CEST 2023
On 10/13/23 15:33, Lukas Wagner wrote:
> - Additionally, it should be easy to run these integration tests locally
> on a developer's workstation in order to write new test cases, as well
> as troubleshooting and debugging existing test cases. The local
> test environment should match the one being used for automated testing
> as closely as possible
This would also include sharing those fixture templates somewhere. Do
you already have an idea of how to accomplish this? PBS sounds like a
good option for this, unless I'm missing something.
> As a main mode of operation, the Systems under Test (SUTs)
> will be virtualized on top of a Proxmox VE node.
> This has the following benefits:
> - it is easy to create various test setups (fixtures), including but not
> limited to single Proxmox VE nodes, clusters, Backup servers and
> auxiliary services (e.g. an LDAP server for testing LDAP
I can imagine having to set up VMs inside the test setup as well for
various tests. Doing this manually every time would be quite
cumbersome and hard to automate. Do you have a mechanism in mind for
deploying VMs inside the test system as well? Again, PBS could be an
interesting option for this imo.
> In theory, the test runner would also be able to drive tests on real
> hardware, but of course with some limitations (harder to have a
> predictable, reproducible environment, etc.)
Maybe utilizing Aaron's installer for setting up those test systems
could at least produce somewhat identical setups? Although it is really
hard to manage systems with different storage types, network cards, etc.
I've seen GitLab using tags for runners that specify certain
capabilities of a system. Maybe we could introduce something like
that here for different bare-metal systems? E.g. a test case specifies
that it needs a system with the tag `ZFS`, and the runner then executes
or skips the test case on that system accordingly. Managing those tags
can introduce quite a lot of churn though, so I'm not sure whether this
would be a good idea.
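To illustrate what I mean: the matching logic on the runner side would stay trivial, essentially a subset check of required tags against the tags a system advertises. A rough sketch (the field names like `tags` are made up here, not taken from the RFC's TOML schema):

```python
# Hypothetical sketch: run only those test cases whose required
# capability tags are all offered by the target system; the rest
# would be skipped. The "tags" key is an invented example field.

def runnable_tests(test_cases, system_tags):
    """Return the test cases whose required tags are all present
    on the given system."""
    available = set(system_tags)
    return [t for t in test_cases if set(t.get("tags", [])) <= available]

tests = [
    {"name": "zfs-replication", "tags": ["ZFS"]},
    {"name": "basic-backup", "tags": []},
    {"name": "ceph-osd", "tags": ["Ceph"]},
]

# A bare-metal runner advertising only ZFS support:
print([t["name"] for t in runnable_tests(tests, ["ZFS"])])
# → ['zfs-replication', 'basic-backup']
```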
> The test script is executed by the test runner; the test outcome is
> determined by the exit code of the script. Test scripts could be written
Are you considering capturing output as well? That would make sense at
least when using assertions, so that in case of a failure developers
have a starting point for debugging.
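Something along these lines is what I have in mind (just a sketch of the idea, not tied to whatever the actual runner implementation will look like): capture stdout/stderr of the test script and only surface it when the script fails.

```python
# Sketch: run a test script, capture its output, and print the
# captured output only on failure as a debugging aid. The function
# name is made up for illustration.
import subprocess
import sys

def run_test_script(cmd):
    """Run a test command; return True iff it exited with code 0.
    On failure, dump the captured stdout/stderr to stderr."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"test {cmd} failed (exit code {result.returncode})",
              file=sys.stderr)
        print(result.stdout, file=sys.stderr)
        print(result.stderr, file=sys.stderr)
    return result.returncode == 0
```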
Would it make sense to allow specifying an expected exit code for tests
that are actually supposed to fail, or do you consider this something
that should be handled by the test script itself?
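Either way the check in the runner would stay cheap; something like the following, assuming a hypothetical `expected_exit_code` key in the test definition that defaults to 0:

```python
# Sketch: compare the test script's exit code against an expected
# value, so tests that are supposed to fail can declare a non-zero
# expected_exit_code (a made-up config key, not from the RFC).
import subprocess
import sys

def evaluate_test(cmd, expected_exit_code=0):
    """Run the test command and report success iff its exit code
    matches the expected one."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == expected_exit_code
```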
I've refrained from commenting on the TOML files too much, since it's
probably too early to say anything definitive about them, but they look
good so far from my pov.
In general this sounds like quite an exciting feature, and the RFC
already looks very promising.