
With the addition of Octavia Load Balancer quotas to the `openstack.cloud.quota`
module [1] we have a chicken-and-egg situation:
* If the Octavia endpoint exists, but the Octavia service is not yet running
  or is disabled on HAProxy, openstack_resources will fail on quotas
* We need service_setup to run before openstack_resources, as otherwise
  there is no user to create/upload the SSH key for

Quota management inside the Octavia role also creates confusion, as
Octavia uses the default `service` project, which is shared with other
services such as Trove; this may result in conflicts between these
services.

As a solution it is proposed to deprecate quota management in favor
of the ``openstack.osa.openstack_resources`` playbook and to guide
operators on how to set quotas in their user_variables instead.
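
For illustration only, quotas for the shared `service` project could be
applied with the `openstack.cloud.quota` module mentioned above; the
playbook layout, the `cloud: default` entry and the numeric values below
are assumptions made for this sketch, not values shipped by the roles:

  ---
  # Minimal sketch: apply Octavia-related quotas to the shared "service"
  # project. The "default" cloud entry and the numbers are placeholders.
  - hosts: localhost
    connection: local
    gather_facts: false
    tasks:
      - name: Apply quotas to the service project
        openstack.cloud.quota:
          cloud: default              # assumed clouds.yaml entry
          name: service               # project shared by Octavia, Trove, etc.
          instances: 10000
          cores: 10000
          ram: 51200000
          server_groups: 10000
          server_group_members: 10000
          security_group: 10000
          security_group_rule: 10000
          port: 10000
          volumes: 10000
          gigabytes: 10000
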
[1] 57c63e7918 (diff-ca4fad21675b7d9b029b213a9629606546fe7009)
Depends-On: https://review.opendev.org/c/openstack/ansible-role-pki/+/958661
Change-Id: Ib6a5c074ef99459f8707ab0874edb55d39d495ad
Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com>
---
deprecations:
  - |
    Quota management for the Octavia service has been deprecated in favor of
    a centralized approach through the ``openstack.osa.openstack_resources``
    playbook.
    As the default project name was ``service``, defining quotas inside of the
    Octavia role was causing conflicts with other services (like Trove).

    Respective variables were deprecated and have no effect:

    * octavia_num_instances
    * octavia_ram
    * octavia_gigabytes
    * octavia_num_server_groups
    * octavia_num_server_group_members
    * octavia_num_cores
    * octavia_num_secgroups
    * octavia_num_ports
    * octavia_num_security_group_rules
    * octavia_num_volumes

    Please refer to the `Octavia documentation <https://docs.openstack.org/openstack-ansible-os_octavia/latest/configure-octavia.html>`_
    for more information on how to manage service quotas.