[pve-devel] [PATCH container/docs/ha-manager/manager/qemu-server v3 00/19] HA resource affinity rules
Daniel Kral
d.kral at proxmox.com
Fri Jul 4 20:20:43 CEST 2025
RFC v1: https://lore.proxmox.com/pve-devel/20250325151254.193177-1-d.kral@proxmox.com/
RFC v2: https://lore.proxmox.com/pve-devel/20250620143148.218469-1-d.kral@proxmox.com/
HA rules: https://lore.proxmox.com/pve-devel/20250704181659.465441-1-d.kral@proxmox.com/
This is the other part, where the HA rules are extended with the HA
resource affinity rules. This depends on the HA rules series linked
above.
This is yet another follow-up to the previous RFC patch series for the
HA resource affinity rules feature (formerly known as HA colocation
rules), which allows users to specify affinity/anti-affinity rules for
the HA Manager that keep two or more HA resources either together or
apart with respect to each other.
Changelog since v2
------------------
- split up the patch series (ofc)
- rebased on newest available master
- renamed "HA Colocation Rule" to "HA Resource Affinity Rule"
- renamed "together" and "separate" to "positive" and "negative"
respectively for resource affinity rules
- renamed any reference of a 'HA service' to 'HA resource' (e.g. rules
property 'services' is now 'resources')
- converted the tri-state property 'state' to a binary 'disable' flag on
  HA rules and exposed the 'contradictory' state with an 'errors' hash
- remove the "use-location-rules" feature flag and implement a more
straightforward ha groups migration (not directly relevant to this
feature, but wanted to note it either way)
- added more rules config test cases
- moved PVE::HashTools back to PVE::HA::HashTools because it lacked any
other obvious use cases for now
- added the inference that every resource in a positive affinity rule
  must have a negative affinity with any resource that is in negative
  affinity with one of the other resources in that positive affinity
  rule (I hope someone finds a better wording for this ;)); see the
  sketch below this list
- added a rule checker which makes resource affinity rules with more
  resources than available nodes invalid
- dropped the patch which handled too many resources in a resource
  affinity rule, as that caused more chaos than necessary (replaced it
  with the check mentioned above)
- removed the strictness requirement of node affinity rules in the
inter-plugin type checks for node/resource affinity rules
- refactored the handling of manual migrations of services in resource
affinity relationships and made them external so that these can be
shown in the qemu-server and pve-container migrate preconditions in
the web interface
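
As a small sketch of that inference (the section type and property
names are assumed from the renames above and may differ in detail from
the actual patches):

resource-affinity: keep-together
    resources vm:101,vm:102
    affinity positive

resource-affinity: keep-apart
    resources vm:101,vm:103
    affinity negative

From these two rules, an implicit negative affinity between vm:102 and
vm:103 is inferred: as vm:102 must always be on the same node as
vm:101, it can never share a node with vm:103 either.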
TODO for v3
-----------
There are still a few things that I am currently aware of that should
be fixed in follow-ups or in a next revision.
- Mixed usage of Node Affinity rules and Resource Affinity rules still
  behaves rather awkwardly; the current implementation is still lacking
  the inference that when some of the resources in a resource affinity
  rule are constrained by a node affinity rule, the other resources
  must be constrained by a node affinity rule as well; with the current
  checks, that should be reasonable to implement in a similar way as the
  inference I've written for the other case above (see the sketch below
  this list).
- Otherwise, if we don't want the magical inference from above, one
  should add a checker which disallows resource affinity rules whose
  resources are not all in the same node affinity rule OR do not have
  at least one common node among them (single-priority groups, of
  course).
- Testing, testing, testing
- Cases that were discovered while @Michael reviewed my series (thank
  you very much, Michael!)
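
As a sketch of the first TODO above (again with assumed section type
and property names), a configuration that currently behaves awkwardly
would be:

node-affinity: vm101-on-node1
    resources vm:101
    nodes node1

resource-affinity: keep-together
    resources vm:101,vm:102
    affinity positive

Here vm:102 is implicitly pinned to node1 through the positive resource
affinity, but there is no node affinity rule for vm:102 that reflects
this yet.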
As in the previous revisions, I've run a
git rebase master --exec 'make clean && make deb'
on the series, so the tests should work for every patch.
Changelog from v1 to v2
-----------------------
I've added per-patch changelogs for patches that have been changed, but
here's a better overview of the overall changes since the RFC:
- implemented the API/CLI endpoints and web interface integration for HA
rules
- added user-facing documentation about HA rules
- implemented HA location rules as semantically equivalent replacements
  for HA groups (with the addition that the 'nofailback' flag was moved
  to HA services as an inverted 'failback' flag to remove the double
  negation)
- implemented a "use-location-rules" feature flag in the datacenter
config to allow users to upgrade to the feature on their own
- dropped the 'loose' colocation rules for now, as these can be a
  separate feature and it's unclear how these should act without user
  feedback; I have them in a separate tree with not a lot of changes in
  between these patches, so they are easy to rebase as a follow-up
  patch series
- moved global rule checkers to the base rules plugin and made them
more modular and aware of their own plugin-specific rule set
- fixed a race condition where positively colocated services are split
  and stay on multiple nodes (e.g. when the rule has been newly added
  and the services are on different nodes) -> the manager now selects
  the node where most of the positively colocated services currently
  are
- made the HA manager aware of the positive and negative colocations
  when migrating, i.e., migrating other positively colocated services
  together with the to-be-migrated service and blocking a migration if
  the service is migrated to a node where a negatively colocated
  service already is
- refactored the select_service_node(...) subroutine a bit to have
  fewer arguments
------
Below is the updated initial cover letter of the first RFC.
------
I chose the name "colocation" in favor of affinity/anti-affinity, since
it conveys a bit more concisely that it is about co-locating services
with each other, in contrast to locating services on nodes; but no hard
feelings about changing it (same for any other names in this series).
Many thanks to @Thomas, @Fiona, @Friedrich, @Fabian, @Lukas, @Michael
and @Hannes Duerr for the discussions about this feature off-list!
Recap: HA groups
----------------
The HA Manager currently allows a service to be assigned to one HA
group, which essentially implements an affinity to a set of nodes. This
affinity can either be unrestricted or restricted, where the former
also allows recovery to nodes outside of the HA group's nodes if those
are currently unavailable.
This allows users to constrain the set of nodes that can be selected
as the starting and/or recovery node. Furthermore, each node in a HA
group can have an individual priority. This further constrains the set
of possible recovery nodes to the subset of online nodes in the highest
priority group.
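
For reference, a typical HA group in the config looks roughly like the
following, here an unrestricted group where node1 (priority 2) is
preferred over node2 (priority 1):

group: prefer-node1
    nodes node1:2,node2:1
    restricted 0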
Introduction
------------
Colocation is the concept of an inter-service affinity relationship,
which can either be positive (keep services together) or negative (keep
services apart). This is in contrast to the service-to-node affinity
relationship implemented by HA groups.
Motivation
----------
There are many different use cases to support colocation, but two simple
examples that come to mind are:
- Two or more services need to communicate with each other very
frequently. To reduce the communication path length and therefore
hopefully the latency, keep them together on one node.
- Two or more services need a lot of computational resources and will
  therefore consume much of the assigned node's resource capacity. To
  reduce resource starvation and memory stalls, keep them separate on
  multiple nodes, so that they have enough resources for themselves.
And some more concrete use cases from current HA Manager users:
- "For example: let's say we have three DB VMs (DB nodes in a cluster)
which we want to run on ANY PVE host, but we don't want them to be on
the same host." [0]
- "An example is: When Server from the DMZ zone start on the same host
like the firewall or another example the application servers start on
the same host like the sql server. Services they depend on each other
have short latency to each other." [1]
HA Rules
--------
To implement colocation, this patch series introduces HA rules, which
allow users to specify colocation requirements on services. These are
implemented with the widely used section config, where each type of
rule is an individual plugin (for now 'location' and 'colocation').
This introduces some small initial complexity for testing the
satisfiability of the rules, but allows the constraint interface to be
extensible and hopefully allows easier reasoning about the node
selection process with the added constraint rules in the future.
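
As a sketch of that section config, a rules config mixing both plugin
types could look like the following (the 'location' plugin's property
names are assumed here to mirror HA groups):

location: prefer-fast-nodes
    services vm:101,vm:102
    nodes node1:2,node2

colocation: keep-apart
    services vm:101,vm:102
    affinity separate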
Colocation Rules
----------------
The two properties of colocation rules, as described in the
introduction, are rather straightforward. A typical colocation rule
inside of the config would look like the following:
colocation: some-lonely-services
    services vm:101,vm:103,ct:909
    affinity separate
This means that the three services vm:101, vm:103 and ct:909 must be
kept separate on different nodes. I'm very keen on naming suggestions,
since I think there could be a better word than 'affinity' here. I
played around with 'keep-services', since then it would always read
something like 'keep-services separate', which is very declarative, but
this might suggest that this is a binary option to too many users (I
mean it is, but not with the values 0 and 1).
Feasibility and Inference
-------------------------
Since rules allow more complexity, it is necessary to check whether
rules are (1) feasible and (2) can be simplified, so that as many HA
rules as are feasible can still be applied.
| Feasibility
----------
The feasibility checks are implemented in PVE::HA::Rules::Location,
PVE::HA::Rules::Colocation, and PVE::HA::Rules, where the latter
handles global checks between rule types.
| Canonicalization
----------
Additionally, colocation rules are currently simplified as follows:
- If there are multiple positive colocation rules with common services
  and the same strictness, these are merged into a single positive
  colocation rule (so it is easier to check which services are
  positively colocated with a service).
This is implemented in PVE::HA::Rules::Colocation::plugin_canonicalize.
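
For example, given two positive colocation rules of the same strictness
sharing vm:102 (using the pre-rename 'together' value for positive
affinity):

colocation: keep-together1
    services vm:101,vm:102
    affinity together

colocation: keep-together2
    services vm:102,vm:103
    affinity together

these would be merged into a single rule keeping vm:101, vm:102 and
vm:103 together, since positive colocation is transitive through the
common service vm:102.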
Special negative colocation scenarios
-------------------------------------
Just to be aware of these, there's a distinction between the following
two sets of negative colocation rules:
colocation: separate-vms
    services vm:101,vm:102,vm:103
    affinity separate

and

colocation: separate-vms1
    services vm:101,vm:102
    affinity separate

colocation: separate-vms2
    services vm:102,vm:103
    affinity separate
The first keeps all three services separate from each other, while the
second only keeps each pair of services separate from each other, so
vm:101 and vm:103 might still be migrated to the same node.
Additional and/or future ideas
------------------------------
- Make recomputing the online node usage more granular.
- Add information of overall free node resources to improve decision
heuristic when recovering services to nodes.
- Implementing non-strict colocation rules, e.g., rules which won't
  fail but are ignored (for a timeout? until migrated by the user?),
  only considering the $target node while migrating, etc.
- When deciding the recovery node for positively colocated services,
  account for the needed resources of all to-be-migrated services
  rather than just the first one. This is a non-trivial problem, as we
  currently solve this as an online bin covering problem, i.e.,
  selecting a node for each service alone instead of selecting one for
  all services together.
- Ignore migrations to nodes where the service may not run according
  to its location rules / HA group nodes.
- Dynamic colocation rule health statistics (e.g. warn on the
satisfiability of a colocation rule), e.g. in the WebGUI and/or API.
- Property for mandatory colocation rules to specify whether all
services should be stopped if the rule cannot be satisfied.
[0] https://bugzilla.proxmox.com/show_bug.cgi?id=5260
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5332
ha-manager:
Daniel Kral (13):
rules: introduce plugin-specific canonicalize routines
rules: add haenv node list to the rules' canonicalization stage
rules: introduce resource affinity rule plugin
rules: add global checks between node and resource affinity rules
usage: add information about a service's assigned nodes
manager: apply resource affinity rules when selecting service nodes
manager: handle resource affinity rules in manual migrations
sim: resources: add option to limit start and migrate tries to node
test: ha tester: add test cases for negative resource affinity rules
test: ha tester: add test cases for positive resource affinity rules
test: ha tester: add test cases for static scheduler resource affinity
test: rules: add test cases for resource affinity rules
api: resources: add check for resource affinity in resource migrations
debian/pve-ha-manager.install | 1 +
src/PVE/API2/HA/Resources.pm | 131 +++-
src/PVE/API2/HA/Rules.pm | 5 +-
src/PVE/API2/HA/Status.pm | 4 +-
src/PVE/CLI/ha_manager.pm | 52 +-
src/PVE/HA/Config.pm | 56 ++
src/PVE/HA/Env/PVE2.pm | 2 +
src/PVE/HA/Manager.pm | 73 +-
src/PVE/HA/Resources.pm | 3 +-
src/PVE/HA/Rules.pm | 232 ++++++-
src/PVE/HA/Rules/Makefile | 2 +-
src/PVE/HA/Rules/ResourceAffinity.pm | 642 ++++++++++++++++++
src/PVE/HA/Sim/Env.pm | 2 +
src/PVE/HA/Sim/Resources/VirtFail.pm | 29 +-
src/PVE/HA/Usage.pm | 18 +
src/PVE/HA/Usage/Basic.pm | 19 +
src/PVE/HA/Usage/Static.pm | 19 +
.../defaults-for-resource-affinity-rules.cfg | 16 +
...lts-for-resource-affinity-rules.cfg.expect | 38 ++
...onsistent-node-resource-affinity-rules.cfg | 54 ++
...nt-node-resource-affinity-rules.cfg.expect | 121 ++++
.../inconsistent-resource-affinity-rules.cfg | 11 +
...sistent-resource-affinity-rules.cfg.expect | 11 +
...ctive-negative-resource-affinity-rules.cfg | 17 +
...egative-resource-affinity-rules.cfg.expect | 30 +
.../ineffective-resource-affinity-rules.cfg | 8 +
...fective-resource-affinity-rules.cfg.expect | 9 +
...licit-negative-resource-affinity-rules.cfg | 40 ++
...egative-resource-affinity-rules.cfg.expect | 131 ++++
...licit-negative-resource-affinity-rules.cfg | 16 +
...egative-resource-affinity-rules.cfg.expect | 73 ++
...ected-positive-resource-affinity-rules.cfg | 42 ++
...ositive-resource-affinity-rules.cfg.expect | 70 ++
...-affinity-with-resource-affinity-rules.cfg | 19 +
...ty-with-resource-affinity-rules.cfg.expect | 45 ++
.../README | 26 +
.../cmdlist | 4 +
.../datacenter.cfg | 6 +
.../hardware_status | 5 +
.../log.expect | 120 ++++
.../manager_status | 1 +
.../rules_config | 19 +
.../service_config | 10 +
.../static_service_stats | 10 +
.../README | 20 +
.../cmdlist | 4 +
.../datacenter.cfg | 6 +
.../hardware_status | 5 +
.../log.expect | 174 +++++
.../manager_status | 1 +
.../rules_config | 11 +
.../service_config | 14 +
.../static_service_stats | 14 +
.../README | 22 +
.../cmdlist | 22 +
.../datacenter.cfg | 6 +
.../hardware_status | 7 +
.../log.expect | 272 ++++++++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 9 +
.../static_service_stats | 9 +
.../README | 13 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 60 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 6 +
.../README | 15 +
.../cmdlist | 4 +
.../hardware_status | 7 +
.../log.expect | 90 +++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 10 +
.../README | 16 +
.../cmdlist | 4 +
.../hardware_status | 7 +
.../log.expect | 110 +++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 10 +
.../README | 18 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 69 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 6 +
.../README | 11 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 56 ++
.../manager_status | 1 +
.../rules_config | 7 +
.../service_config | 5 +
.../README | 18 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 69 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 6 +
.../README | 15 +
.../cmdlist | 5 +
.../hardware_status | 5 +
.../log.expect | 52 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 4 +
.../README | 12 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 38 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 5 +
.../README | 12 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 66 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 6 +
.../README | 11 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 80 +++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 8 +
.../README | 17 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 89 +++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 8 +
.../README | 11 +
.../cmdlist | 4 +
.../hardware_status | 5 +
.../log.expect | 59 ++
.../manager_status | 1 +
.../rules_config | 3 +
.../service_config | 5 +
.../README | 19 +
.../cmdlist | 8 +
.../hardware_status | 5 +
.../log.expect | 281 ++++++++
.../manager_status | 1 +
.../rules_config | 15 +
.../service_config | 11 +
src/test/test_rules_config.pl | 6 +-
154 files changed, 4400 insertions(+), 39 deletions(-)
create mode 100644 src/PVE/HA/Rules/ResourceAffinity.pm
create mode 100644 src/test/rules_cfgs/defaults-for-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/defaults-for-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/inconsistent-node-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/inconsistent-node-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/inconsistent-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/inconsistent-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/ineffective-negative-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/ineffective-negative-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/ineffective-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/ineffective-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/infer-implicit-negative-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/infer-implicit-negative-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/merge-and-infer-implicit-negative-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/merge-and-infer-implicit-negative-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/merge-connected-positive-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/merge-connected-positive-resource-affinity-rules.cfg.expect
create mode 100644 src/test/rules_cfgs/multi-priority-node-affinity-with-resource-affinity-rules.cfg
create mode 100644 src/test/rules_cfgs/multi-priority-node-affinity-with-resource-affinity-rules.cfg.expect
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/README
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/cmdlist
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/datacenter.cfg
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/hardware_status
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/log.expect
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/manager_status
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/rules_config
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/service_config
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity1/static_service_stats
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/README
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/cmdlist
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/datacenter.cfg
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/hardware_status
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/log.expect
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/manager_status
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/rules_config
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/service_config
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity2/static_service_stats
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/README
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/cmdlist
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/datacenter.cfg
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/hardware_status
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/log.expect
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/manager_status
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/rules_config
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/service_config
create mode 100644 src/test/test-crs-static-rebalance-resource-affinity3/static_service_stats
create mode 100644 src/test/test-resource-affinity-strict-negative1/README
create mode 100644 src/test/test-resource-affinity-strict-negative1/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative1/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative1/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative1/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative1/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative1/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative2/README
create mode 100644 src/test/test-resource-affinity-strict-negative2/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative2/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative2/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative2/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative2/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative2/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative3/README
create mode 100644 src/test/test-resource-affinity-strict-negative3/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative3/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative3/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative3/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative3/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative3/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative4/README
create mode 100644 src/test/test-resource-affinity-strict-negative4/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative4/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative4/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative4/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative4/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative4/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative5/README
create mode 100644 src/test/test-resource-affinity-strict-negative5/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative5/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative5/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative5/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative5/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative5/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative6/README
create mode 100644 src/test/test-resource-affinity-strict-negative6/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative6/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative6/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative6/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative6/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative6/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative7/README
create mode 100644 src/test/test-resource-affinity-strict-negative7/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative7/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative7/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative7/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative7/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative7/service_config
create mode 100644 src/test/test-resource-affinity-strict-negative8/README
create mode 100644 src/test/test-resource-affinity-strict-negative8/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-negative8/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-negative8/log.expect
create mode 100644 src/test/test-resource-affinity-strict-negative8/manager_status
create mode 100644 src/test/test-resource-affinity-strict-negative8/rules_config
create mode 100644 src/test/test-resource-affinity-strict-negative8/service_config
create mode 100644 src/test/test-resource-affinity-strict-positive1/README
create mode 100644 src/test/test-resource-affinity-strict-positive1/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-positive1/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-positive1/log.expect
create mode 100644 src/test/test-resource-affinity-strict-positive1/manager_status
create mode 100644 src/test/test-resource-affinity-strict-positive1/rules_config
create mode 100644 src/test/test-resource-affinity-strict-positive1/service_config
create mode 100644 src/test/test-resource-affinity-strict-positive2/README
create mode 100644 src/test/test-resource-affinity-strict-positive2/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-positive2/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-positive2/log.expect
create mode 100644 src/test/test-resource-affinity-strict-positive2/manager_status
create mode 100644 src/test/test-resource-affinity-strict-positive2/rules_config
create mode 100644 src/test/test-resource-affinity-strict-positive2/service_config
create mode 100644 src/test/test-resource-affinity-strict-positive3/README
create mode 100644 src/test/test-resource-affinity-strict-positive3/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-positive3/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-positive3/log.expect
create mode 100644 src/test/test-resource-affinity-strict-positive3/manager_status
create mode 100644 src/test/test-resource-affinity-strict-positive3/rules_config
create mode 100644 src/test/test-resource-affinity-strict-positive3/service_config
create mode 100644 src/test/test-resource-affinity-strict-positive4/README
create mode 100644 src/test/test-resource-affinity-strict-positive4/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-positive4/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-positive4/log.expect
create mode 100644 src/test/test-resource-affinity-strict-positive4/manager_status
create mode 100644 src/test/test-resource-affinity-strict-positive4/rules_config
create mode 100644 src/test/test-resource-affinity-strict-positive4/service_config
create mode 100644 src/test/test-resource-affinity-strict-positive5/README
create mode 100644 src/test/test-resource-affinity-strict-positive5/cmdlist
create mode 100644 src/test/test-resource-affinity-strict-positive5/hardware_status
create mode 100644 src/test/test-resource-affinity-strict-positive5/log.expect
create mode 100644 src/test/test-resource-affinity-strict-positive5/manager_status
create mode 100644 src/test/test-resource-affinity-strict-positive5/rules_config
create mode 100644 src/test/test-resource-affinity-strict-positive5/service_config
base-commit: 264dc2c58d145394219f82f25d41f4fc438c4dc4
prerequisite-patch-id: 530b875c25a6bded1cc2294960cf465d5c2bcbca
prerequisite-patch-id: be76b977780d57e5fbf352bd978bdae5c940550d
prerequisite-patch-id: f7e9aa60a2062358ce66bc7ff1b1a9040e5326c6
prerequisite-patch-id: 0b58a4d7f2e46025edbe3570f75c205cacce7420
prerequisite-patch-id: 4b19363e458e614a6df1956ac5a217bfc62610d7
prerequisite-patch-id: 9b6ebaa0969b63f30f33c761eff0f8df7fd5f8d0
prerequisite-patch-id: 878a1f4702c9783218c5d8b0187a3862b85ee44b
prerequisite-patch-id: d81d430bb9a5ae9cd30067f2f4afa4dec5c085fc
prerequisite-patch-id: dad7bbb8de320efda08f7e660af2fce04490adb3
prerequisite-patch-id: f3f25c27f6a165617011ae641d581dda2c05b82e
prerequisite-patch-id: ea6202f21814509cf877d68506f37fe80059371d
prerequisite-patch-id: ec46e7ad626365020fdd6a07b99335c56cb024d0
prerequisite-patch-id: 8a19f490ae3dadeeb71da8888ac3ad1e0036407f
prerequisite-patch-id: afd04a8513a3bbfd5943a4bc2975b723c92348ad
prerequisite-patch-id: 9eec8be1085114a9acb33b90ca73616c611ccf65
prerequisite-patch-id: d1e039fd3f200201641a43f7e1cb423e526a27c9
prerequisite-patch-id: e86fb011c1574c112a8e9a30ab4401eb6fa25eb9
docs:
Daniel Kral (1):
ha: add documentation about ha resource affinity rules
Makefile | 1 +
gen-ha-rules-resource-affinity-opts.pl | 20 ++++
ha-manager.adoc | 133 +++++++++++++++++++++++++
ha-rules-resource-affinity-opts.adoc | 8 ++
4 files changed, 162 insertions(+)
create mode 100755 gen-ha-rules-resource-affinity-opts.pl
create mode 100644 ha-rules-resource-affinity-opts.adoc
base-commit: 7cc17ee5950a53bbd5b5ad81270352ccdb1c541c
prerequisite-patch-id: 92556cd6c1edfb88b397ae244d7dcd56876cd8fb
prerequisite-patch-id: f4f3b5d3ab4765a96b473a24446cf81964c12042
prerequisite-patch-id: 7ac868e0d7f8b1c08e54143c37dda9475bf14d96
manager:
Daniel Kral (3):
ui: ha: rules: add ha resource affinity rules
ui: migrate: lxc: display precondition messages for ha resource
affinity
ui: migrate: vm: display precondition messages for ha resource
affinity
www/manager6/Makefile | 2 +
www/manager6/ha/Rules.js | 12 ++
.../ha/rules/ResourceAffinityRuleEdit.js | 24 ++++
.../ha/rules/ResourceAffinityRules.js | 31 +++++
www/manager6/window/Migrate.js | 131 +++++++++++++++++-
5 files changed, 197 insertions(+), 3 deletions(-)
create mode 100644 www/manager6/ha/rules/ResourceAffinityRuleEdit.js
create mode 100644 www/manager6/ha/rules/ResourceAffinityRules.js
base-commit: c0cbe76ee90e7110934c50414bc22371cf13c01a
prerequisite-patch-id: ec6a39936719cfe38787fccb1a80af6378980723
prerequisite-patch-id: 9415da9186d58d8b31377c1f25ff18f8c2ffc5a2
prerequisite-patch-id: e22720f6d06927514b80cc496331c13fd080fd8d
prerequisite-patch-id: d1f267d8039d9bb04b1a0f9375970230a00755cb
prerequisite-patch-id: 5752652afa1754cb13a18b469137e7a04446d764
pve-container:
Daniel Kral (1):
api: introduce migration preconditions api endpoint
src/PVE/API2/LXC.pm | 141 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 141 insertions(+)
base-commit: 7b8077037b310f881e0422f3aabb7e9cf057cb72
prerequisite-patch-id: 6e1b48c8279bba02a04aecb550b19a7f5b5a86d0
prerequisite-patch-id: 13bd7463605c2fb86dea8ce2b4d11d3b57e726ad
prerequisite-patch-id: 66f15a96f8cdd9a21f57d1ee4b71dbb75b183716
prerequisite-patch-id: 0d73ae35bfd4edfd33dd09f9be3f23839df7d798
qemu-server:
Daniel Kral (1):
api: migration preconditions: add checks for ha resource affinity
rules
src/PVE/API2/Qemu.pm | 49 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
base-commit: 4bd4a6378d83be5c5d6494bea953067b762fa0bb
prerequisite-patch-id: 92fa07c7b5efc5c61274a3fcbef3fc50403c4395
prerequisite-patch-id: 45412364886b697957234a9907ff0780a03c8fe0
prerequisite-patch-id: dbd1bed03695935811133e4b47fee7503085d79c
prerequisite-patch-id: 75b3ae68437fc1a1c35ca65f6c5d7bfb7d0ae761
prerequisite-patch-id: 638838e65f6737f7b542e5e9862cf123fdaaa7c8
prerequisite-patch-id: 20f9cc9362238af5edd124cc664b83e4d254047e
prerequisite-patch-id: a192d04047d2cafe4e5acc93cc3353ae5e3a0ca9
prerequisite-patch-id: ac31a90492b4deab7aa9261e7b3297ed006f8aab
prerequisite-patch-id: b5593d700fc9395dacd9728cf321da2dbb43c953
prerequisite-patch-id: c4ecf88d9c7dbbdf3d45a4f47d8409daf711aa3a
prerequisite-patch-id: 56f032b886c0a630e55f6c7e93f054d6a413de39
prerequisite-patch-id: cb87f6db69df0462c2f75bea160427509c242f2a
prerequisite-patch-id: c798d97cd2de42ce5f5f1a8992eb63a8ff56ba3b
prerequisite-patch-id: 2342211b84632fa9c058f8f0cb30fa414413dbb2
prerequisite-patch-id: 06803397314bbb318be3136fbe378018e29cb5f9
prerequisite-patch-id: 591f2129b044240f4f73841b0c8c23fe5ecd1e25
prerequisite-patch-id: f7fc7bbcc43ae266b5a64ba749eae1462d0e8809
prerequisite-patch-id: eecdec45a3457706cf6b07a648f24e4d0f2fd463
prerequisite-patch-id: 735c77be6142fcb4509257523be5f893f982b8da
prerequisite-patch-id: 7458a9e7d30d92ed13738cde39845838901ed96e
prerequisite-patch-id: 32abe240d401f3fb55035d07cbf84d7aa51d0909
prerequisite-patch-id: c89d3c5bf26057d0b5536f1b90b2a53b4f7e4fd7
prerequisite-patch-id: c71d2f276776353aa80977d4fd2bb8ee67f0996f
prerequisite-patch-id: 7469ff6963df33ed50fc42a215a4fe26f6624e85
prerequisite-patch-id: b10f50d61f9f57d6316c93da7b4e14544f32e37f
prerequisite-patch-id: e3dd34aa01b5c4b01a80f3557ca8ee170ba951a2
prerequisite-patch-id: fb01eef662838ee474801670f65fe2450e539db8
prerequisite-patch-id: 637c768e2217f7e230dd74898ad8d5521906079c
prerequisite-patch-id: fb3ddaa3b3a1719e5288235e897b03c8a2b6e0d1
prerequisite-patch-id: cdf7cc5a3731cf28ded49d17fbc937f0529d7c4f
prerequisite-patch-id: 33ccd4f5266722525d47b3a36e82d7ad8e81ed5c
prerequisite-patch-id: 30cfcfcc835d71a7fe53b075d2ce43af20bc53b4
prerequisite-patch-id: ce5b02664f3715b95d71101d2c107ffcd910b8e3
prerequisite-patch-id: 56ac3292381ebf1cb28d13039ff8dc59eadcdec2
prerequisite-patch-id: 88c19bd062a4c446bd25d07dc3d79f8c393557f4
prerequisite-patch-id: 1f94299f0e3d203d7d6ad539b85ff166915b8102
prerequisite-patch-id: a7595017f9383a37d616bf08d7eb6e79f3f02684
prerequisite-patch-id: d32966a023905395a1904d3ecf4fb979e4a12c50
prerequisite-patch-id: ea83303358b0c89d7d44ff779333957bbd7bb6e3
prerequisite-patch-id: 4f2dab83befa91ad6ae7d3de3bffee8f633e26a4
prerequisite-patch-id: 3e581da8a18525e7c00099d4423dfd23a6aa28aa
prerequisite-patch-id: 59fb4bace486be94096bcd2291850428f6fd4281
prerequisite-patch-id: 8ab490bfe9d826dd6951e12138e844d44594918c
prerequisite-patch-id: efe99ebbf56d0ee6f65c0d4d0ef81bb1c45653e2
prerequisite-patch-id: c7189bcd0a12d489f55fceb6a15169b6d7c4c6ab
prerequisite-patch-id: c2202da4c4668425f5305528b2815ca6e18b3f2d
prerequisite-patch-id: 972023350c88368f7f8ea5c38a0db4203269b74e
prerequisite-patch-id: 979f735b7277fa8c053e346c8c5cbb7e9cca175f
prerequisite-patch-id: 397d98543aea0795a538b5f2134cf3d536864d1e
prerequisite-patch-id: b48cf1dbee9bbfaca0975a542d6c4eeb9c3a73fe
prerequisite-patch-id: 0545d14a7dade8ef5576ad29c03dd25f4e44f29c
prerequisite-patch-id: d148ba8f442aba20cb52a41c8f6282c6da7432c5
prerequisite-patch-id: 93bb7d94d66dc9f21856aa1d36e6611969637f7e
prerequisite-patch-id: 43ca3a9f6d56fd4a430a2ef206c733598321e00a
prerequisite-patch-id: 11f58168f21c7d6f63e1981660caeb1c55a67b7e
prerequisite-patch-id: 60d22700d35dc8db4e36eee1398627f8ef81ec90
prerequisite-patch-id: 53a3bef801b7dc854d547300b242ecc2086a9649
prerequisite-patch-id: 8bed06668bc4547cc6ebf6bd38684c7cfdaa2999
prerequisite-patch-id: 637814dceb301beaa41dbf8f1ab87532238c66e6
prerequisite-patch-id: 1b5c5f3c3158debb889127e26b5f695f17b56d16
prerequisite-patch-id: 1a7d8eda8e08b4017a9aff6428ad9a4fb9f3894b
Summary over all repositories:
165 files changed, 4949 insertions(+), 42 deletions(-)
--
Generated by git-murpp 0.8.0