[pve-devel] [PATCH ha-manager v2 09/26] test: ha tester: add test cases for future location rules

Daniel Kral d.kral at proxmox.com
Fri Jun 20 16:31:21 CEST 2025


Add test cases to verify that the location rules, which will be added in
a following patch, are functionally equivalent to HA groups.

These test cases verify the following scenarios for (a) unrestricted and
(b) restricted groups (i.e. loose and strict location rules):

1. If a service is manually migrated to a non-member node and failback
   is enabled, then (a)(b) migrate the service back to a member node.
2. If a service is manually migrated to a non-member node and failback
   is disabled, then (a) leave the service on the non-member node, or
   (b) migrate the service back to a member node.
3. If a service's node fails, where the failed node is the only
   available group member left, (a) migrate the service to a non-member
   node, or (b) keep the service in recovery.
4. If a service's node fails, but there is another available group
   member left, (a)(b) migrate the service to the other member node.
5. If a service's group has failback enabled and the service's node,
   which is the node with the highest priority in the group, fails and
   comes back later, (a)(b) migrate it to the second-highest prioritized
   node and automatically migrate it back to the highest priority node
   as soon as it is available again.
6. If a service's group has failback disabled and the service's node,
   which is the node with the highest priority in the group, fails and
   comes back later, (a)(b) migrate it to the second-highest prioritized
   node, but do not migrate it back to the highest priority node if it
   becomes available again.
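
For reference, the decision matrix above can be condensed into a tiny
model. This is a sketch only: `select_node` and its signature are purely
illustrative and do not exist in the ha-manager code base (the actual
logic is implemented in Perl); it merely mirrors the outcomes the test
cases below expect.

```python
# Toy model of the node-selection behavior exercised by the six scenarios
# above. All names here are hypothetical; the real logic lives in the
# Perl HA manager, not in this sketch.

def select_node(current, members, online, strict, failback):
    """Return the node a service should end up on, or None for recovery.

    current  -- node the service currently runs on (may be offline)
    members  -- dict mapping member node -> priority (higher is preferred)
    online   -- set of currently online nodes
    strict   -- True for restricted groups / strict location rules
    failback -- False when the group has nofailback set
    """
    online_members = {n: p for n, p in members.items() if n in online}

    if online_members:
        best = max(online_members, key=online_members.get)
        if current not in online:
            return best  # scenarios 4/5: recover or migrate to a member
        if strict and current not in members:
            return best  # scenario 2(b): strict rules forbid non-members
        if current in online_members and online_members[current] == online_members[best]:
            return current  # already on a highest-priority member
        if failback:
            return best  # scenarios 1/5: migrate back to a member
        return current  # scenarios 2(a)/6: nofailback keeps it in place
    # no group member is online (scenario 3)
    if current in online:
        return current
    if strict:
        return None  # scenario 3(b): stay in recovery
    # scenario 3(a): any online node will do; a real scheduler weighs
    # utilization, here we simply take the alphabetically first one
    return sorted(online)[0] if online else None
```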

Signed-off-by: Daniel Kral <d.kral at proxmox.com>
---
changes since v1:
    - NEW!

 src/test/test-location-loose1/README          | 10 +++
 src/test/test-location-loose1/cmdlist         |  4 +
 src/test/test-location-loose1/groups          |  2 +
 src/test/test-location-loose1/hardware_status |  5 ++
 src/test/test-location-loose1/log.expect      | 40 ++++++++++
 src/test/test-location-loose1/manager_status  |  1 +
 src/test/test-location-loose1/service_config  |  3 +
 src/test/test-location-loose2/README          | 12 +++
 src/test/test-location-loose2/cmdlist         |  4 +
 src/test/test-location-loose2/groups          |  3 +
 src/test/test-location-loose2/hardware_status |  5 ++
 src/test/test-location-loose2/log.expect      | 35 +++++++++
 src/test/test-location-loose2/manager_status  |  1 +
 src/test/test-location-loose2/service_config  |  3 +
 src/test/test-location-loose3/README          | 10 +++
 src/test/test-location-loose3/cmdlist         |  4 +
 src/test/test-location-loose3/groups          |  2 +
 src/test/test-location-loose3/hardware_status |  5 ++
 src/test/test-location-loose3/log.expect      | 56 ++++++++++++++
 src/test/test-location-loose3/manager_status  |  1 +
 src/test/test-location-loose3/service_config  |  5 ++
 src/test/test-location-loose4/README          | 14 ++++
 src/test/test-location-loose4/cmdlist         |  4 +
 src/test/test-location-loose4/groups          |  2 +
 src/test/test-location-loose4/hardware_status |  5 ++
 src/test/test-location-loose4/log.expect      | 54 ++++++++++++++
 src/test/test-location-loose4/manager_status  |  1 +
 src/test/test-location-loose4/service_config  |  5 ++
 src/test/test-location-loose5/README          | 16 ++++
 src/test/test-location-loose5/cmdlist         |  5 ++
 src/test/test-location-loose5/groups          |  2 +
 src/test/test-location-loose5/hardware_status |  5 ++
 src/test/test-location-loose5/log.expect      | 66 +++++++++++++++++
 src/test/test-location-loose5/manager_status  |  1 +
 src/test/test-location-loose5/service_config  |  3 +
 src/test/test-location-loose6/README          | 14 ++++
 src/test/test-location-loose6/cmdlist         |  5 ++
 src/test/test-location-loose6/groups          |  3 +
 src/test/test-location-loose6/hardware_status |  5 ++
 src/test/test-location-loose6/log.expect      | 52 +++++++++++++
 src/test/test-location-loose6/manager_status  |  1 +
 src/test/test-location-loose6/service_config  |  3 +
 src/test/test-location-strict1/README         | 10 +++
 src/test/test-location-strict1/cmdlist        |  4 +
 src/test/test-location-strict1/groups         |  3 +
 .../test-location-strict1/hardware_status     |  5 ++
 src/test/test-location-strict1/log.expect     | 40 ++++++++++
 src/test/test-location-strict1/manager_status |  1 +
 src/test/test-location-strict1/service_config |  3 +
 src/test/test-location-strict2/README         | 11 +++
 src/test/test-location-strict2/cmdlist        |  4 +
 src/test/test-location-strict2/groups         |  4 +
 .../test-location-strict2/hardware_status     |  5 ++
 src/test/test-location-strict2/log.expect     | 40 ++++++++++
 src/test/test-location-strict2/manager_status |  1 +
 src/test/test-location-strict2/service_config |  3 +
 src/test/test-location-strict3/README         | 10 +++
 src/test/test-location-strict3/cmdlist        |  4 +
 src/test/test-location-strict3/groups         |  3 +
 .../test-location-strict3/hardware_status     |  5 ++
 src/test/test-location-strict3/log.expect     | 74 +++++++++++++++++++
 src/test/test-location-strict3/manager_status |  1 +
 src/test/test-location-strict3/service_config |  5 ++
 src/test/test-location-strict4/README         | 14 ++++
 src/test/test-location-strict4/cmdlist        |  4 +
 src/test/test-location-strict4/groups         |  3 +
 .../test-location-strict4/hardware_status     |  5 ++
 src/test/test-location-strict4/log.expect     | 54 ++++++++++++++
 src/test/test-location-strict4/manager_status |  1 +
 src/test/test-location-strict4/service_config |  5 ++
 src/test/test-location-strict5/README         | 16 ++++
 src/test/test-location-strict5/cmdlist        |  5 ++
 src/test/test-location-strict5/groups         |  3 +
 .../test-location-strict5/hardware_status     |  5 ++
 src/test/test-location-strict5/log.expect     | 66 +++++++++++++++++
 src/test/test-location-strict5/manager_status |  1 +
 src/test/test-location-strict5/service_config |  3 +
 src/test/test-location-strict6/README         | 14 ++++
 src/test/test-location-strict6/cmdlist        |  5 ++
 src/test/test-location-strict6/groups         |  4 +
 .../test-location-strict6/hardware_status     |  5 ++
 src/test/test-location-strict6/log.expect     | 52 +++++++++++++
 src/test/test-location-strict6/manager_status |  1 +
 src/test/test-location-strict6/service_config |  3 +
 84 files changed, 982 insertions(+)
 create mode 100644 src/test/test-location-loose1/README
 create mode 100644 src/test/test-location-loose1/cmdlist
 create mode 100644 src/test/test-location-loose1/groups
 create mode 100644 src/test/test-location-loose1/hardware_status
 create mode 100644 src/test/test-location-loose1/log.expect
 create mode 100644 src/test/test-location-loose1/manager_status
 create mode 100644 src/test/test-location-loose1/service_config
 create mode 100644 src/test/test-location-loose2/README
 create mode 100644 src/test/test-location-loose2/cmdlist
 create mode 100644 src/test/test-location-loose2/groups
 create mode 100644 src/test/test-location-loose2/hardware_status
 create mode 100644 src/test/test-location-loose2/log.expect
 create mode 100644 src/test/test-location-loose2/manager_status
 create mode 100644 src/test/test-location-loose2/service_config
 create mode 100644 src/test/test-location-loose3/README
 create mode 100644 src/test/test-location-loose3/cmdlist
 create mode 100644 src/test/test-location-loose3/groups
 create mode 100644 src/test/test-location-loose3/hardware_status
 create mode 100644 src/test/test-location-loose3/log.expect
 create mode 100644 src/test/test-location-loose3/manager_status
 create mode 100644 src/test/test-location-loose3/service_config
 create mode 100644 src/test/test-location-loose4/README
 create mode 100644 src/test/test-location-loose4/cmdlist
 create mode 100644 src/test/test-location-loose4/groups
 create mode 100644 src/test/test-location-loose4/hardware_status
 create mode 100644 src/test/test-location-loose4/log.expect
 create mode 100644 src/test/test-location-loose4/manager_status
 create mode 100644 src/test/test-location-loose4/service_config
 create mode 100644 src/test/test-location-loose5/README
 create mode 100644 src/test/test-location-loose5/cmdlist
 create mode 100644 src/test/test-location-loose5/groups
 create mode 100644 src/test/test-location-loose5/hardware_status
 create mode 100644 src/test/test-location-loose5/log.expect
 create mode 100644 src/test/test-location-loose5/manager_status
 create mode 100644 src/test/test-location-loose5/service_config
 create mode 100644 src/test/test-location-loose6/README
 create mode 100644 src/test/test-location-loose6/cmdlist
 create mode 100644 src/test/test-location-loose6/groups
 create mode 100644 src/test/test-location-loose6/hardware_status
 create mode 100644 src/test/test-location-loose6/log.expect
 create mode 100644 src/test/test-location-loose6/manager_status
 create mode 100644 src/test/test-location-loose6/service_config
 create mode 100644 src/test/test-location-strict1/README
 create mode 100644 src/test/test-location-strict1/cmdlist
 create mode 100644 src/test/test-location-strict1/groups
 create mode 100644 src/test/test-location-strict1/hardware_status
 create mode 100644 src/test/test-location-strict1/log.expect
 create mode 100644 src/test/test-location-strict1/manager_status
 create mode 100644 src/test/test-location-strict1/service_config
 create mode 100644 src/test/test-location-strict2/README
 create mode 100644 src/test/test-location-strict2/cmdlist
 create mode 100644 src/test/test-location-strict2/groups
 create mode 100644 src/test/test-location-strict2/hardware_status
 create mode 100644 src/test/test-location-strict2/log.expect
 create mode 100644 src/test/test-location-strict2/manager_status
 create mode 100644 src/test/test-location-strict2/service_config
 create mode 100644 src/test/test-location-strict3/README
 create mode 100644 src/test/test-location-strict3/cmdlist
 create mode 100644 src/test/test-location-strict3/groups
 create mode 100644 src/test/test-location-strict3/hardware_status
 create mode 100644 src/test/test-location-strict3/log.expect
 create mode 100644 src/test/test-location-strict3/manager_status
 create mode 100644 src/test/test-location-strict3/service_config
 create mode 100644 src/test/test-location-strict4/README
 create mode 100644 src/test/test-location-strict4/cmdlist
 create mode 100644 src/test/test-location-strict4/groups
 create mode 100644 src/test/test-location-strict4/hardware_status
 create mode 100644 src/test/test-location-strict4/log.expect
 create mode 100644 src/test/test-location-strict4/manager_status
 create mode 100644 src/test/test-location-strict4/service_config
 create mode 100644 src/test/test-location-strict5/README
 create mode 100644 src/test/test-location-strict5/cmdlist
 create mode 100644 src/test/test-location-strict5/groups
 create mode 100644 src/test/test-location-strict5/hardware_status
 create mode 100644 src/test/test-location-strict5/log.expect
 create mode 100644 src/test/test-location-strict5/manager_status
 create mode 100644 src/test/test-location-strict5/service_config
 create mode 100644 src/test/test-location-strict6/README
 create mode 100644 src/test/test-location-strict6/cmdlist
 create mode 100644 src/test/test-location-strict6/groups
 create mode 100644 src/test/test-location-strict6/hardware_status
 create mode 100644 src/test/test-location-strict6/log.expect
 create mode 100644 src/test/test-location-strict6/manager_status
 create mode 100644 src/test/test-location-strict6/service_config

diff --git a/src/test/test-location-loose1/README b/src/test/test-location-loose1/README
new file mode 100644
index 0000000..8775b6c
--- /dev/null
+++ b/src/test/test-location-loose1/README
@@ -0,0 +1,10 @@
+Test whether a service in an unrestricted group will automatically migrate back
+to a member node after a manual migration to a non-member node.
+
+The test scenario is:
+- vm:101 should be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, it is migrated back to node3, as
+  node3 is a group member and has higher priority than the other nodes
diff --git a/src/test/test-location-loose1/cmdlist b/src/test/test-location-loose1/cmdlist
new file mode 100644
index 0000000..a63e4fd
--- /dev/null
+++ b/src/test/test-location-loose1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-location-loose1/groups b/src/test/test-location-loose1/groups
new file mode 100644
index 0000000..50c9a2d
--- /dev/null
+++ b/src/test/test-location-loose1/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node3
diff --git a/src/test/test-location-loose1/hardware_status b/src/test/test-location-loose1/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-loose1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-loose1/log.expect b/src/test/test-location-loose1/log.expect
new file mode 100644
index 0000000..e0f4d46
--- /dev/null
+++ b/src/test/test-location-loose1/log.expect
@@ -0,0 +1,40 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    143    node2/lrm: got lock 'ha_agent_node2_lock'
+info    143    node2/lrm: status change wait_for_agent_lock => active
+info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    165    node3/lrm: starting service vm:101
+info    165    node3/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-loose1/manager_status b/src/test/test-location-loose1/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-loose1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-loose1/service_config b/src/test/test-location-loose1/service_config
new file mode 100644
index 0000000..5f55843
--- /dev/null
+++ b/src/test/test-location-loose1/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-location-loose2/README b/src/test/test-location-loose2/README
new file mode 100644
index 0000000..f27414b
--- /dev/null
+++ b/src/test/test-location-loose2/README
@@ -0,0 +1,12 @@
+Test whether a service in an unrestricted group with nofailback enabled will
+stay on the manual migration target node, even though the target node is not a
+member of the unrestricted group.
+
+The test scenario is:
+- vm:101 should be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, vm:101 stays on node2; even though
+  node2 is not a group member, the nofailback flag prevents vm:101 from being
+  migrated back to a group member
diff --git a/src/test/test-location-loose2/cmdlist b/src/test/test-location-loose2/cmdlist
new file mode 100644
index 0000000..a63e4fd
--- /dev/null
+++ b/src/test/test-location-loose2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-location-loose2/groups b/src/test/test-location-loose2/groups
new file mode 100644
index 0000000..59192fa
--- /dev/null
+++ b/src/test/test-location-loose2/groups
@@ -0,0 +1,3 @@
+group: should_stay_here
+	nodes node3
+	nofailback 1
diff --git a/src/test/test-location-loose2/hardware_status b/src/test/test-location-loose2/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-loose2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-loose2/log.expect b/src/test/test-location-loose2/log.expect
new file mode 100644
index 0000000..35e2470
--- /dev/null
+++ b/src/test/test-location-loose2/log.expect
@@ -0,0 +1,35 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    143    node2/lrm: got lock 'ha_agent_node2_lock'
+info    143    node2/lrm: status change wait_for_agent_lock => active
+info    143    node2/lrm: starting service vm:101
+info    143    node2/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-loose2/manager_status b/src/test/test-location-loose2/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-loose2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-loose2/service_config b/src/test/test-location-loose2/service_config
new file mode 100644
index 0000000..5f55843
--- /dev/null
+++ b/src/test/test-location-loose2/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-location-loose3/README b/src/test/test-location-loose3/README
new file mode 100644
index 0000000..c4ddfab
--- /dev/null
+++ b/src/test/test-location-loose3/README
@@ -0,0 +1,10 @@
+Test whether a service in an unrestricted group with only one node member will
+be migrated to a non-member node in case of a failover of its previously
+assigned node.
+
+The test scenario is:
+- vm:101 should be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As node3 fails, vm:101 is migrated to node1
diff --git a/src/test/test-location-loose3/cmdlist b/src/test/test-location-loose3/cmdlist
new file mode 100644
index 0000000..eee0e40
--- /dev/null
+++ b/src/test/test-location-loose3/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-location-loose3/groups b/src/test/test-location-loose3/groups
new file mode 100644
index 0000000..50c9a2d
--- /dev/null
+++ b/src/test/test-location-loose3/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node3
diff --git a/src/test/test-location-loose3/hardware_status b/src/test/test-location-loose3/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-loose3/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-loose3/log.expect b/src/test/test-location-loose3/log.expect
new file mode 100644
index 0000000..752300b
--- /dev/null
+++ b/src/test/test-location-loose3/log.expect
@@ -0,0 +1,56 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node1'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node1)
+info    241    node1/lrm: got lock 'ha_agent_node1_lock'
+info    241    node1/lrm: status change wait_for_agent_lock => active
+info    241    node1/lrm: starting service vm:101
+info    241    node1/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-loose3/manager_status b/src/test/test-location-loose3/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-location-loose3/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-location-loose3/service_config b/src/test/test-location-loose3/service_config
new file mode 100644
index 0000000..777b2a7
--- /dev/null
+++ b/src/test/test-location-loose3/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-location-loose4/README b/src/test/test-location-loose4/README
new file mode 100644
index 0000000..a08f0e1
--- /dev/null
+++ b/src/test/test-location-loose4/README
@@ -0,0 +1,14 @@
+Test whether a service in an unrestricted group with two node members will stay
+assigned to one of the member nodes in case of a failover of its previously
+assigned node.
+
+The test scenario is:
+- vm:101 should be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher service count than node1 to test whether the restriction
+  to node2 and node3 is applied even though the scheduler would prefer the less
+  utilized node1
+
+The expected outcome is:
+- As node3 fails, vm:101 is migrated to node2, as it's the only available node
+  left in the unrestricted group
diff --git a/src/test/test-location-loose4/cmdlist b/src/test/test-location-loose4/cmdlist
new file mode 100644
index 0000000..eee0e40
--- /dev/null
+++ b/src/test/test-location-loose4/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-location-loose4/groups b/src/test/test-location-loose4/groups
new file mode 100644
index 0000000..b1584b5
--- /dev/null
+++ b/src/test/test-location-loose4/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node2,node3
diff --git a/src/test/test-location-loose4/hardware_status b/src/test/test-location-loose4/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-loose4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-loose4/log.expect b/src/test/test-location-loose4/log.expect
new file mode 100644
index 0000000..847e157
--- /dev/null
+++ b/src/test/test-location-loose4/log.expect
@@ -0,0 +1,54 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:101
+info    243    node2/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-loose4/manager_status b/src/test/test-location-loose4/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-loose4/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-loose4/service_config b/src/test/test-location-loose4/service_config
new file mode 100644
index 0000000..777b2a7
--- /dev/null
+++ b/src/test/test-location-loose4/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-location-loose5/README b/src/test/test-location-loose5/README
new file mode 100644
index 0000000..0c37044
--- /dev/null
+++ b/src/test/test-location-loose5/README
@@ -0,0 +1,16 @@
+Test whether a service in an unrestricted group with two differently prioritized
+node members will stay on the node with the highest priority in case of a
+failover, or be migrated to it when running on a lower-priority node.
+
+The test scenario is:
+- vm:101 should be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As vm:101 runs on node3, it is automatically migrated to node2, as node2 has
+  a higher priority than node3
+- As node2 fails, vm:101 is migrated to node3 as node3 is the next and only
+  available node member left in the unrestricted group
+- As node2 comes back online, vm:101 is migrated back to node2, as node2 has a
+  higher priority than node3
diff --git a/src/test/test-location-loose5/cmdlist b/src/test/test-location-loose5/cmdlist
new file mode 100644
index 0000000..6932aa7
--- /dev/null
+++ b/src/test/test-location-loose5/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off" ],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-location-loose5/groups b/src/test/test-location-loose5/groups
new file mode 100644
index 0000000..03a0ee9
--- /dev/null
+++ b/src/test/test-location-loose5/groups
@@ -0,0 +1,2 @@
+group: should_stay_here
+	nodes node2:2,node3:1
diff --git a/src/test/test-location-loose5/hardware_status b/src/test/test-location-loose5/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-loose5/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-loose5/log.expect b/src/test/test-location-loose5/log.expect
new file mode 100644
index 0000000..a875e11
--- /dev/null
+++ b/src/test/test-location-loose5/log.expect
@@ -0,0 +1,66 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node2'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node2'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info     43    node2/lrm: got lock 'ha_agent_node2_lock'
+info     43    node2/lrm: status change wait_for_agent_lock => active
+info     43    node2/lrm: starting service vm:101
+info     43    node2/lrm: service status vm:101 started
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    260    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info    260    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    265    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    265    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    280    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    363    node2/lrm: got lock 'ha_agent_node2_lock'
+info    363    node2/lrm: status change wait_for_agent_lock => active
+info    363    node2/lrm: starting service vm:101
+info    363    node2/lrm: service status vm:101 started
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-location-loose5/manager_status b/src/test/test-location-loose5/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-loose5/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-loose5/service_config b/src/test/test-location-loose5/service_config
new file mode 100644
index 0000000..5f55843
--- /dev/null
+++ b/src/test/test-location-loose5/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-location-loose6/README b/src/test/test-location-loose6/README
new file mode 100644
index 0000000..4ab1275
--- /dev/null
+++ b/src/test/test-location-loose6/README
@@ -0,0 +1,14 @@
+Test whether a service in an unrestricted group with nofailback enabled and two
+differently prioritized node members will stay on the current node without
+migrating back to the highest priority node.
+
+The test scenario is:
+- vm:101 should be kept on node2 or node3
+- vm:101 is currently running on node2
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As node2 fails, vm:101 is migrated to node3 as it is the only available node
+  member left in the unrestricted group
+- As node2 comes back online, vm:101 stays on node3; even though node2 has a
+  higher priority, the nofailback flag prevents vm:101 from migrating back to node2
diff --git a/src/test/test-location-loose6/cmdlist b/src/test/test-location-loose6/cmdlist
new file mode 100644
index 0000000..4dd33cc
--- /dev/null
+++ b/src/test/test-location-loose6/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off"],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-location-loose6/groups b/src/test/test-location-loose6/groups
new file mode 100644
index 0000000..a7aed17
--- /dev/null
+++ b/src/test/test-location-loose6/groups
@@ -0,0 +1,3 @@
+group: should_stay_here
+	nodes node2:2,node3:1
+	nofailback 1
diff --git a/src/test/test-location-loose6/hardware_status b/src/test/test-location-loose6/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-loose6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-loose6/log.expect b/src/test/test-location-loose6/log.expect
new file mode 100644
index 0000000..bcb472b
--- /dev/null
+++ b/src/test/test-location-loose6/log.expect
@@ -0,0 +1,52 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:101
+info     23    node2/lrm: service status vm:101 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: got lock 'ha_agent_node3_lock'
+info    245    node3/lrm: status change wait_for_agent_lock => active
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-location-loose6/manager_status b/src/test/test-location-loose6/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-loose6/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-loose6/service_config b/src/test/test-location-loose6/service_config
new file mode 100644
index 0000000..c4ece62
--- /dev/null
+++ b/src/test/test-location-loose6/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node2", "state": "started", "group": "should_stay_here" }
+}
diff --git a/src/test/test-location-strict1/README b/src/test/test-location-strict1/README
new file mode 100644
index 0000000..c717d58
--- /dev/null
+++ b/src/test/test-location-strict1/README
@@ -0,0 +1,10 @@
+Test whether a service in a restricted group will automatically migrate back to
+a restricted node member in case of a manual migration to a non-member node.
+
+The test scenario is:
+- vm:101 must be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, it is migrated back to node3, as
+  node3 is the only available node member left in the restricted group
diff --git a/src/test/test-location-strict1/cmdlist b/src/test/test-location-strict1/cmdlist
new file mode 100644
index 0000000..a63e4fd
--- /dev/null
+++ b/src/test/test-location-strict1/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-location-strict1/groups b/src/test/test-location-strict1/groups
new file mode 100644
index 0000000..370865f
--- /dev/null
+++ b/src/test/test-location-strict1/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node3
+	restricted 1
diff --git a/src/test/test-location-strict1/hardware_status b/src/test/test-location-strict1/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-strict1/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-strict1/log.expect b/src/test/test-location-strict1/log.expect
new file mode 100644
index 0000000..e0f4d46
--- /dev/null
+++ b/src/test/test-location-strict1/log.expect
@@ -0,0 +1,40 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    143    node2/lrm: got lock 'ha_agent_node2_lock'
+info    143    node2/lrm: status change wait_for_agent_lock => active
+info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    165    node3/lrm: starting service vm:101
+info    165    node3/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-strict1/manager_status b/src/test/test-location-strict1/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-strict1/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-strict1/service_config b/src/test/test-location-strict1/service_config
new file mode 100644
index 0000000..36ea15b
--- /dev/null
+++ b/src/test/test-location-strict1/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" }
+}
diff --git a/src/test/test-location-strict2/README b/src/test/test-location-strict2/README
new file mode 100644
index 0000000..f4d06a1
--- /dev/null
+++ b/src/test/test-location-strict2/README
@@ -0,0 +1,11 @@
+Test whether a service in a restricted group with nofailback enabled will
+automatically migrate back to a restricted node member in case of a manual
+migration to a non-member node.
+
+The test scenario is:
+- vm:101 must be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As vm:101 is manually migrated to node2, it is migrated back to node3, as
+  node3 is the only available node member left in the restricted group
diff --git a/src/test/test-location-strict2/cmdlist b/src/test/test-location-strict2/cmdlist
new file mode 100644
index 0000000..a63e4fd
--- /dev/null
+++ b/src/test/test-location-strict2/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "service vm:101 migrate node2" ]
+]
diff --git a/src/test/test-location-strict2/groups b/src/test/test-location-strict2/groups
new file mode 100644
index 0000000..e43eafc
--- /dev/null
+++ b/src/test/test-location-strict2/groups
@@ -0,0 +1,4 @@
+group: must_stay_here
+	nodes node3
+	restricted 1
+	nofailback 1
diff --git a/src/test/test-location-strict2/hardware_status b/src/test/test-location-strict2/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-strict2/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-strict2/log.expect b/src/test/test-location-strict2/log.expect
new file mode 100644
index 0000000..e0f4d46
--- /dev/null
+++ b/src/test/test-location-strict2/log.expect
@@ -0,0 +1,40 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute service vm:101 migrate node2
+info    120    node1/crm: got crm command: migrate vm:101 node2
+info    120    node1/crm: migrate service 'vm:101' to node 'node2'
+info    120    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    125    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    125    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    140    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    140    node1/crm: migrate service 'vm:101' to node 'node3' (running)
+info    140    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node2, target = node3)
+info    143    node2/lrm: got lock 'ha_agent_node2_lock'
+info    143    node2/lrm: status change wait_for_agent_lock => active
+info    143    node2/lrm: service vm:101 - start migrate to node 'node3'
+info    143    node2/lrm: service vm:101 - end migrate to node 'node3'
+info    160    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node3)
+info    165    node3/lrm: starting service vm:101
+info    165    node3/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-strict2/manager_status b/src/test/test-location-strict2/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-strict2/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-strict2/service_config b/src/test/test-location-strict2/service_config
new file mode 100644
index 0000000..36ea15b
--- /dev/null
+++ b/src/test/test-location-strict2/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" }
+}
diff --git a/src/test/test-location-strict3/README b/src/test/test-location-strict3/README
new file mode 100644
index 0000000..5aced39
--- /dev/null
+++ b/src/test/test-location-strict3/README
@@ -0,0 +1,10 @@
+Test whether a service in a restricted group with only one node member will
+stay in recovery in case of a failover of its previously assigned node.
+
+The test scenario is:
+- vm:101 must be kept on node3
+- vm:101 is currently running on node3
+
+The expected outcome is:
+- As node3 fails, vm:101 stays in recovery since there's no available node
+  member left in the restricted group
diff --git a/src/test/test-location-strict3/cmdlist b/src/test/test-location-strict3/cmdlist
new file mode 100644
index 0000000..eee0e40
--- /dev/null
+++ b/src/test/test-location-strict3/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-location-strict3/groups b/src/test/test-location-strict3/groups
new file mode 100644
index 0000000..370865f
--- /dev/null
+++ b/src/test/test-location-strict3/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node3
+	restricted 1
diff --git a/src/test/test-location-strict3/hardware_status b/src/test/test-location-strict3/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-strict3/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-strict3/log.expect b/src/test/test-location-strict3/log.expect
new file mode 100644
index 0000000..47f9776
--- /dev/null
+++ b/src/test/test-location-strict3/log.expect
@@ -0,0 +1,74 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+err     240    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     260    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     280    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     300    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     320    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     340    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     360    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     380    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     400    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     420    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     440    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     460    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     480    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     500    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     520    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     540    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     560    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     580    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     600    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     620    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     640    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     660    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     680    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+err     700    node1/crm: recovering service 'vm:101' from fenced node 'node3' failed, no recovery node found
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-strict3/manager_status b/src/test/test-location-strict3/manager_status
new file mode 100644
index 0000000..0967ef4
--- /dev/null
+++ b/src/test/test-location-strict3/manager_status
@@ -0,0 +1 @@
+{}
diff --git a/src/test/test-location-strict3/service_config b/src/test/test-location-strict3/service_config
new file mode 100644
index 0000000..9adf02c
--- /dev/null
+++ b/src/test/test-location-strict3/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-location-strict4/README b/src/test/test-location-strict4/README
new file mode 100644
index 0000000..25ded53
--- /dev/null
+++ b/src/test/test-location-strict4/README
@@ -0,0 +1,14 @@
+Test whether a service in a restricted group with two node members will stay
+assigned to one of the node members in case of a failover of its previously
+assigned node.
+
+The test scenario is:
+- vm:101 must be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher service count than node1 to test whether the restriction
+  to node2 and node3 is applied even though the scheduler would prefer the less
+  utilized node1
+
+The expected outcome is:
+- As node3 fails, vm:101 is migrated to node2, as it's the only available node
+  left in the restricted group
diff --git a/src/test/test-location-strict4/cmdlist b/src/test/test-location-strict4/cmdlist
new file mode 100644
index 0000000..eee0e40
--- /dev/null
+++ b/src/test/test-location-strict4/cmdlist
@@ -0,0 +1,4 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node3 off" ]
+]
diff --git a/src/test/test-location-strict4/groups b/src/test/test-location-strict4/groups
new file mode 100644
index 0000000..0ad2abc
--- /dev/null
+++ b/src/test/test-location-strict4/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node2,node3
+	restricted 1
diff --git a/src/test/test-location-strict4/hardware_status b/src/test/test-location-strict4/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-strict4/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-strict4/log.expect b/src/test/test-location-strict4/log.expect
new file mode 100644
index 0000000..847e157
--- /dev/null
+++ b/src/test/test-location-strict4/log.expect
@@ -0,0 +1,54 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: adding new service 'vm:102' on node 'node2'
+info     20    node1/crm: adding new service 'vm:103' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: service 'vm:102': state changed from 'request_start' to 'started'  (node = node2)
+info     20    node1/crm: service 'vm:103': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:102
+info     23    node2/lrm: service status vm:102 started
+info     23    node2/lrm: starting service vm:103
+info     23    node2/lrm: service status vm:103 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: starting service vm:101
+info     25    node3/lrm: service status vm:101 started
+info    120      cmdlist: execute network node3 off
+info    120    node1/crm: node 'node3': state changed from 'online' => 'unknown'
+info    124    node3/crm: status change slave => wait_for_quorum
+info    125    node3/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node3': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node3'
+info    166     watchdog: execute power node3 off
+info    165    node3/crm: killed by poweroff
+info    166    node3/lrm: killed by poweroff
+info    166     hardware: server 'node3' stopped by poweroff (watchdog)
+info    240    node1/crm: got lock 'ha_agent_node3_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: node 'node3': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node3' to node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node2)
+info    243    node2/lrm: starting service vm:101
+info    243    node2/lrm: service status vm:101 started
+info    720     hardware: exit simulation - done
diff --git a/src/test/test-location-strict4/manager_status b/src/test/test-location-strict4/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-strict4/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-strict4/service_config b/src/test/test-location-strict4/service_config
new file mode 100644
index 0000000..9adf02c
--- /dev/null
+++ b/src/test/test-location-strict4/service_config
@@ -0,0 +1,5 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" },
+    "vm:102": { "node": "node2", "state": "started" },
+    "vm:103": { "node": "node2", "state": "started" }
+}
diff --git a/src/test/test-location-strict5/README b/src/test/test-location-strict5/README
new file mode 100644
index 0000000..a4e67f4
--- /dev/null
+++ b/src/test/test-location-strict5/README
@@ -0,0 +1,16 @@
+Test whether a service in a restricted group with two differently prioritized
+node members is kept on the highest-priority node, both in case of a failover
+and when the service is currently running on a lower-priority node.
+
+The test scenario is:
+- vm:101 must be kept on node2 or node3
+- vm:101 is currently running on node3
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As vm:101 runs on node3, it is automatically migrated to node2, as node2 has
+  a higher priority than node3
+- As node2 fails, vm:101 is migrated to node3 as node3 is the next and only
+  available node member left in the restricted group
+- As node2 comes back online, vm:101 is migrated back to node2, as node2 has a
+  higher priority than node3
diff --git a/src/test/test-location-strict5/cmdlist b/src/test/test-location-strict5/cmdlist
new file mode 100644
index 0000000..6932aa7
--- /dev/null
+++ b/src/test/test-location-strict5/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off" ],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-location-strict5/groups b/src/test/test-location-strict5/groups
new file mode 100644
index 0000000..ec3cd79
--- /dev/null
+++ b/src/test/test-location-strict5/groups
@@ -0,0 +1,3 @@
+group: must_stay_here
+	nodes node2:2,node3:1
+	restricted 1
diff --git a/src/test/test-location-strict5/hardware_status b/src/test/test-location-strict5/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-strict5/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-strict5/log.expect b/src/test/test-location-strict5/log.expect
new file mode 100644
index 0000000..a875e11
--- /dev/null
+++ b/src/test/test-location-strict5/log.expect
@@ -0,0 +1,66 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node3'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node3)
+info     20    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info     20    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     24    node3/crm: status change wait_for_quorum => slave
+info     25    node3/lrm: got lock 'ha_agent_node3_lock'
+info     25    node3/lrm: status change wait_for_agent_lock => active
+info     25    node3/lrm: service vm:101 - start migrate to node 'node2'
+info     25    node3/lrm: service vm:101 - end migrate to node 'node2'
+info     40    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info     43    node2/lrm: got lock 'ha_agent_node2_lock'
+info     43    node2/lrm: status change wait_for_agent_lock => active
+info     43    node2/lrm: starting service vm:101
+info     43    node2/lrm: service status vm:101 started
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    260    node1/crm: migrate service 'vm:101' to node 'node2' (running)
+info    260    node1/crm: service 'vm:101': state changed from 'started' to 'migrate'  (node = node3, target = node2)
+info    265    node3/lrm: service vm:101 - start migrate to node 'node2'
+info    265    node3/lrm: service vm:101 - end migrate to node 'node2'
+info    280    node1/crm: service 'vm:101': state changed from 'migrate' to 'started'  (node = node2)
+info    363    node2/lrm: got lock 'ha_agent_node2_lock'
+info    363    node2/lrm: status change wait_for_agent_lock => active
+info    363    node2/lrm: starting service vm:101
+info    363    node2/lrm: service status vm:101 started
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-location-strict5/manager_status b/src/test/test-location-strict5/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-strict5/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-strict5/service_config b/src/test/test-location-strict5/service_config
new file mode 100644
index 0000000..36ea15b
--- /dev/null
+++ b/src/test/test-location-strict5/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node3", "state": "started", "group": "must_stay_here" }
+}
diff --git a/src/test/test-location-strict6/README b/src/test/test-location-strict6/README
new file mode 100644
index 0000000..c558afd
--- /dev/null
+++ b/src/test/test-location-strict6/README
@@ -0,0 +1,14 @@
+Test whether a service in a restricted group with nofailback enabled and two
+differently prioritized node members will stay on the current node without
+migrating back to the highest-priority node.
+
+The test scenario is:
+- vm:101 must be kept on node2 or node3
+- vm:101 is currently running on node2
+- node2 has a higher priority than node3
+
+The expected outcome is:
+- As node2 fails, vm:101 is migrated to node3 as it is the only available node
+  member left in the restricted group
+- As node2 comes back online, vm:101 stays on node3; even though node2 has a
+  higher priority, the nofailback flag keeps vm:101 from migrating back to node2
diff --git a/src/test/test-location-strict6/cmdlist b/src/test/test-location-strict6/cmdlist
new file mode 100644
index 0000000..4dd33cc
--- /dev/null
+++ b/src/test/test-location-strict6/cmdlist
@@ -0,0 +1,5 @@
+[
+    [ "power node1 on", "power node2 on", "power node3 on"],
+    [ "network node2 off"],
+    [ "power node2 on", "network node2 on" ]
+]
diff --git a/src/test/test-location-strict6/groups b/src/test/test-location-strict6/groups
new file mode 100644
index 0000000..cdd0e50
--- /dev/null
+++ b/src/test/test-location-strict6/groups
@@ -0,0 +1,4 @@
+group: must_stay_here
+	nodes node2:2,node3:1
+	restricted 1
+	nofailback 1
diff --git a/src/test/test-location-strict6/hardware_status b/src/test/test-location-strict6/hardware_status
new file mode 100644
index 0000000..451beb1
--- /dev/null
+++ b/src/test/test-location-strict6/hardware_status
@@ -0,0 +1,5 @@
+{
+  "node1": { "power": "off", "network": "off" },
+  "node2": { "power": "off", "network": "off" },
+  "node3": { "power": "off", "network": "off" }
+}
diff --git a/src/test/test-location-strict6/log.expect b/src/test/test-location-strict6/log.expect
new file mode 100644
index 0000000..bcb472b
--- /dev/null
+++ b/src/test/test-location-strict6/log.expect
@@ -0,0 +1,52 @@
+info      0     hardware: starting simulation
+info     20      cmdlist: execute power node1 on
+info     20    node1/crm: status change startup => wait_for_quorum
+info     20    node1/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node2 on
+info     20    node2/crm: status change startup => wait_for_quorum
+info     20    node2/lrm: status change startup => wait_for_agent_lock
+info     20      cmdlist: execute power node3 on
+info     20    node3/crm: status change startup => wait_for_quorum
+info     20    node3/lrm: status change startup => wait_for_agent_lock
+info     20    node1/crm: got lock 'ha_manager_lock'
+info     20    node1/crm: status change wait_for_quorum => master
+info     20    node1/crm: node 'node1': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info     20    node1/crm: node 'node3': state changed from 'unknown' => 'online'
+info     20    node1/crm: adding new service 'vm:101' on node 'node2'
+info     20    node1/crm: service 'vm:101': state changed from 'request_start' to 'started'  (node = node2)
+info     22    node2/crm: status change wait_for_quorum => slave
+info     23    node2/lrm: got lock 'ha_agent_node2_lock'
+info     23    node2/lrm: status change wait_for_agent_lock => active
+info     23    node2/lrm: starting service vm:101
+info     23    node2/lrm: service status vm:101 started
+info     24    node3/crm: status change wait_for_quorum => slave
+info    120      cmdlist: execute network node2 off
+info    120    node1/crm: node 'node2': state changed from 'online' => 'unknown'
+info    122    node2/crm: status change slave => wait_for_quorum
+info    123    node2/lrm: status change active => lost_agent_lock
+info    160    node1/crm: service 'vm:101': state changed from 'started' to 'fence'
+info    160    node1/crm: node 'node2': state changed from 'unknown' => 'fence'
+emai    160    node1/crm: FENCE: Try to fence node 'node2'
+info    164     watchdog: execute power node2 off
+info    163    node2/crm: killed by poweroff
+info    164    node2/lrm: killed by poweroff
+info    164     hardware: server 'node2' stopped by poweroff (watchdog)
+info    220      cmdlist: execute power node2 on
+info    220    node2/crm: status change startup => wait_for_quorum
+info    220    node2/lrm: status change startup => wait_for_agent_lock
+info    220      cmdlist: execute network node2 on
+info    240    node1/crm: got lock 'ha_agent_node2_lock'
+info    240    node1/crm: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: node 'node2': state changed from 'fence' => 'unknown'
+emai    240    node1/crm: SUCCEED: fencing: acknowledged - got agent lock for node 'node2'
+info    240    node1/crm: service 'vm:101': state changed from 'fence' to 'recovery'
+info    240    node1/crm: recover service 'vm:101' from fenced node 'node2' to node 'node3'
+info    240    node1/crm: service 'vm:101': state changed from 'recovery' to 'started'  (node = node3)
+info    242    node2/crm: status change wait_for_quorum => slave
+info    245    node3/lrm: got lock 'ha_agent_node3_lock'
+info    245    node3/lrm: status change wait_for_agent_lock => active
+info    245    node3/lrm: starting service vm:101
+info    245    node3/lrm: service status vm:101 started
+info    260    node1/crm: node 'node2': state changed from 'unknown' => 'online'
+info    820     hardware: exit simulation - done
diff --git a/src/test/test-location-strict6/manager_status b/src/test/test-location-strict6/manager_status
new file mode 100644
index 0000000..9e26dfe
--- /dev/null
+++ b/src/test/test-location-strict6/manager_status
@@ -0,0 +1 @@
+{}
\ No newline at end of file
diff --git a/src/test/test-location-strict6/service_config b/src/test/test-location-strict6/service_config
new file mode 100644
index 0000000..1d371e1
--- /dev/null
+++ b/src/test/test-location-strict6/service_config
@@ -0,0 +1,3 @@
+{
+    "vm:101": { "node": "node2", "state": "started", "group": "must_stay_here" }
+}
-- 
2.39.5