[pve-devel] [RFC] towards automated integration testing

Thomas Lamprecht t.lamprecht at proxmox.com
Tue Oct 17 18:28:22 CEST 2023


Am 17/10/2023 um 14:33 schrieb Lukas Wagner:
> On 10/17/23 08:35, Thomas Lamprecht wrote:
>> Off the top of my head I'd rather do some attribute-based dependency
>> annotation, so that single tests or a whole fixture can depend on
>> another's single tests or whole fixture.
>>
> 
> The more thought I spend on it, the more I believe that inter-testcase
> deps should be avoided as much as possible. In unit testing, (hidden)

We don't plan unit testing here though, and the dependencies I proposed
are the opposite of hidden; rather, they are explicitly annotated ones.

> dependencies between tests are in my experience the no. 1 cause of
> flaky tests, and I see no reason why this would not also apply to
> end-to-end integration testing.

Any source for that being the no. 1 cause of flaky tests? IMO it should
not make any difference; in the end you just allow better reuse through
composition (e.g., migration builds upon the clustering *setup*, not its
tests; if I only want to run migration I can do the clustering setup
without executing its tests).

Not providing that could also mean that one has to move all logic into
the test script, resulting in a single test per "fixture" and reducing
the granularity and parallelism of the tests being run.

I also think that 

> I'd suggest to only allow test cases to depend on fixtures. The fixtures
> themselves could have setup/teardown hooks that allow setting up and
> cleaning up a test scenario. If needed, we could also have something
> like 'fixture inheritance', where a fixture can 'extend' another,
> supplying additional setup/teardown.
> Example: the 'outermost' or 'parent' fixture might define that we
> want a 'basic PVE installation' with the latest .debs deployed,
> while another fixture that inherits from that one might set up a
> storage of a certain type, useful for all tests that require that
> specific type of storage.

Maybe our disagreement stems mostly from different design pictures in
our heads; I am probably a bit less fixed (heh) on the fixtures, or at
least on the naming of that term, and might say "test system" or
"intra-test system" where, in your design plan, fixture would be the
better word.
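
To make that a bit more tangible, a purely illustrative sketch of the
quoted fixture-extension idea, with the setup/teardown hooks reduced to
prints and all names invented:

    // Illustrative only: a fixture can wrap a parent fixture and runs the
    // parent's hooks around its own, so a storage fixture can extend the
    // basic-installation one.
    struct Fixture {
        name: &'static str,
        parent: Option<Box<Fixture>>,
        setup: fn(),
        teardown: fn(),
    }

    impl Fixture {
        fn run_setup(&self) {
            if let Some(parent) = &self.parent {
                parent.run_setup(); // parent first: base install before storage
            }
            println!("setting up fixture '{}'", self.name);
            (self.setup)();
        }

        fn run_teardown(&self) {
            println!("tearing down fixture '{}'", self.name);
            (self.teardown)();
            if let Some(parent) = &self.parent {
                parent.run_teardown(); // unwind in reverse order
            }
        }
    }

    fn main() {
        let base = Fixture {
            name: "pve-base",
            parent: None,
            setup: || println!("  deploy basic PVE installation, latest .debs"),
            teardown: || println!("  remove base installation"),
        };
        let storage = Fixture {
            name: "storage-zfs",
            parent: Some(Box::new(base)),
            setup: || println!("  configure the specific storage type"),
            teardown: || println!("  remove that storage again"),
        };

        storage.run_setup();
        // ... tests needing that storage type would run here ...
        storage.run_teardown();
    }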

> On the other hand, instead of inheritance, a 'role/trait'-based system
> might also work (composition >>> inheritance, after all) - and
> maybe that also aligns better with the 'properties' mentioned in
> your other mail (I mean this here:  "ostype=win*", "memory>=10G").
> 
> This is essentially a pattern very similar to the one used in numerous
> other testing frameworks (xUnit, pytest, etc.); I think it makes sense
> to build upon this battle-proven approach.

Those are all unit-testing tools though, which we already use in the
sources, and IIRC they do not really provide what we need here.
While starting out simple(r) and avoiding too much complexity certainly
has its merits, I don't think we should try to draw too many parallels
with those tools here.
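
That said, the property matching from my other mail would not need much
machinery either way; a made-up example, with both the property names
and the matching rules invented:

    // Illustrative only: a test states requirements like "ostype=win*" or
    // "memory>=10G" and the runner checks them against what a test system
    // (or fixture) provides.
    fn satisfies(ostype: &str, memory_gib: u64) -> bool {
        let os_ok = ostype.starts_with("win"); // "ostype=win*"
        let mem_ok = memory_gib >= 10;         // "memory>=10G"
        os_ok && mem_ok
    }

    fn main() {
        println!("{}", satisfies("win11", 16)); // true
        println!("{}", satisfies("l26", 16));   // false, not a Windows guest
    }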

> 
> Regarding execution order, I'd now even suggest the polar opposite of my
> prior idea. Instead of enforcing some execution order, we could also
> actively shuffle execution order from run to run, at least for tests
> using the same fixture.
> The seed used for the RNG should be put into the test
> report and could also be provided via a flag to the test runner, in case
> we need to repeat a specific test sequence.

Hmm, this also has a chance of making tests flaky and getting a bit
annoying, like Perl's hash scrambling, but it's not a bad idea. I'd just
not do that by default on the "armed" test system that runs on
package/git/patch updates, but possibly as an addition with reporting
turned off, like the double runs for idempotency-checking I wrote about
in my previous message.
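
For reference, the seeded shuffling itself is cheap to have; a minimal
sketch, assuming the rand crate (0.8) and a runner that prints the seed
into its report:

    // Illustrative only: shuffle the order of tests sharing a fixture with
    // a seeded RNG; the seed ends up in the report and can be passed back
    // in (e.g. via a hypothetical --seed flag) to replay the exact sequence.
    use rand::rngs::StdRng;
    use rand::seq::SliceRandom;
    use rand::SeedableRng;

    fn main() {
        // normally random, or taken from a --seed flag to reproduce a run
        let seed: u64 = 42;
        let mut rng = StdRng::seed_from_u64(seed);

        let mut tests = vec!["backup", "migration", "snapshot", "replication"];
        tests.shuffle(&mut rng);

        // the seed goes into the report so the sequence can be replayed
        println!("test order (seed {seed}): {tests:?}");
    }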

> In that way, the runner would actively help us to hunt down
> hidden inter-TC deps, making our test suite hopefully less brittle and
> more robust in the long term.

Agreed, but as mentioned above I'd not enable it by default on the
dev-facing automated systems, but possibly for manual runs by devs and
a separate "test-test-system" ^^

In summary, the most important point for me is a test system decoupled
from the automation system that manages it, ideally such that I can
decide relatively flexibly on manual runs. IMO that should not be too
much work, and it guarantees clean-cut APIs from which future
development or integration will surely benefit too.
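
To illustrate what I mean by a clean cut: the automation system, and a
dev doing a manual run, would ideally only ever talk to one small,
stable interface, roughly like this made-up sketch:

    // Illustrative only: the automation system and manual runs both drive
    // the test system through one small interface; what happens behind it
    // stays an implementation detail of the test system.
    struct RunReport {
        passed: usize,
        failed: usize,
        seed: Option<u64>,
    }

    trait TestSystem {
        /// Run the selected tests/fixtures and return a report.
        fn run(&self, selection: &[&str], seed: Option<u64>) -> RunReport;
    }

    // Stand-in implementation, just enough to show the shape of the cut.
    struct DummyTestSystem;

    impl TestSystem for DummyTestSystem {
        fn run(&self, selection: &[&str], seed: Option<u64>) -> RunReport {
            println!("would set up fixtures and run: {selection:?}");
            RunReport { passed: selection.len(), failed: 0, seed }
        }
    }

    fn main() {
        let system = DummyTestSystem;
        let report = system.run(&["migration"], Some(42));
        println!("passed {}, failed {}, seed {:?}",
            report.passed, report.failed, report.seed);
    }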

The rest is probably hard to determine clearly at this stage, as it's
easy (at least for me) to get lost in differing understandings of terms
and design perceptions, but hard to convey those precisely while we're
still talking about "pipe dreams". So for now I'll refrain from adding
discussion churn until there's something more concrete that I can grasp
on my terms (through reading/writing code); that should not deter others
from giving input in the meantime, though.

Thanks for your work on this.

- Thomas




