nova/releasenotes/notes/ironic-shards-5641e4b1ab5bb7aa.yaml
John Garbutt 9068db09e4 Add nova-manage ironic-compute-node-move
When operators transition from three ironic nova-compute processes down
to one process, we need a way to move the ironic nodes, and any
associated instances, between nova-compute processes.

For safety, a nova-compute process must first be forced_down via
the API, similar to when using evacuate, before moving the associated
ironic nodes to another nova-compute process. The destination
nova-compute process should ideally not be running, but it must not
be marked as forced down.

blueprint ironic-shards

Change-Id: I7ef25e27bf8c47f994e28c59858cf3df30975b05
2023-08-31 18:19:49 +01:00
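
The workflow described in the commit message, as a minimal shell sketch.
The host names, node UUID, and service unit name below are placeholders,
and the argument names for the new nova-manage command are assumptions;
confirm them with ``nova-manage db ironic_compute_node_move --help`` on
your deployment.

    SRC_HOST=ironic-compute-1      # nova-compute service being retired
    DST_HOST=ironic-compute-0      # surviving nova-compute service
    NODE_UUID=<ironic-node-uuid>   # placeholder for the node to move

    # 1. Stop the source process, then mark it forced_down via the API,
    #    just as you would before an evacuate.
    openstack compute service set --down "$SRC_HOST" nova-compute

    # 2. With the destination process stopped (but NOT forced down),
    #    move the ironic node and its instances between the services.
    #    Argument names are assumed; check --help first.
    nova-manage db ironic_compute_node_move \
        --ironic-node-id "$NODE_UUID" \
        --destination-host "$DST_HOST"

    # 3. Start the destination service so it picks up the moved node
    #    (unit name varies by distro).
    systemctl start nova-compute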

---
features:
  - |
    Ironic nova-compute services can now target a specific shard of ironic
    nodes by setting the config ``[ironic]shard``.
    This is particularly useful when using active-passive methods to choose
    on which physical host your ironic nova-compute process is running,
    while ensuring ``[DEFAULT]host`` stays the same for each shard.
    You can use this alongside ``[ironic]conductor_group`` to further limit
    which ironic nodes are managed by each nova-compute service.
    Note that when you use ``[ironic]shard`` the ``[ironic]peer_list``
    is hard coded to a single nova-compute service.
  - |
    There is a new nova-manage command ``db ironic_compute_node_move`` that
    can be used to move ironic nodes, and the associated instances, between
    nova-compute services. This is useful when migrating from the legacy
    hash ring based HA towards the new sharding approach.
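
As a sketch of the sharding configuration the release note describes,
written as a shell heredoc; the shard key, host name, and conductor
group values are placeholders for your deployment.

    # Append illustrative sharding options to /etc/nova/nova.conf.
    cat >> /etc/nova/nova.conf <<'EOF'
    [DEFAULT]
    # Keep the service host name stable for this shard, even when the
    # process fails over to another physical machine.
    host = ironic-shard-a

    [ironic]
    # Only manage ironic nodes assigned to this shard key.
    shard = shard-a
    # Optionally narrow further to one ironic conductor group.
    conductor_group = rack-1
    EOF

With this in place, each active-passive nova-compute instance for a
shard presents the same ``[DEFAULT]host`` regardless of where it runs,
so failover does not appear to nova as a new compute service.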