Deprecate quota management with Octavia
With the addition of Octavia Load Balancer support to the
`openstack.cloud.quota` module [1] we have a chicken-and-egg situation:
* If the Octavia endpoint exists, but the Octavia service is not yet
  running or is disabled on HAProxy, openstack_resources will fail on
  quotas
* We need service_setup to run before openstack_resources, as
  otherwise there will be no user to create/upload an SSH key for
Quota management inside the Octavia role also creates confusion:
Octavia uses the `service` project by default, which is shared with
other services, such as Trove, and this may result in conflicts
between those services.
As a solution, it is proposed to deprecate quota management in
favor of the ``openstack.osa.openstack_resources`` playbook and to
guide operators on how to set quotas in their user_variables instead.
[1] 57c63e7918 (diff-ca4fad21675b7d9b029b213a9629606546fe7009)
Depends-On: https://review.opendev.org/c/openstack/ansible-role-pki/+/958661
Change-Id: Ib6a5c074ef99459f8707ab0874edb55d39d495ad
Signed-off-by: Dmitriy Rabotyagov <dmitriy.rabotyagov@cleura.com>
@@ -703,18 +703,6 @@ octavia_ca_private_key_passphrase: "{{ octavia_cert_client_password }}"
 # octavia_ovnsb_user_ssl_cert: <path to cert on ansible deployment host>
 # octavia_ovnsb_user_ssl_key: <path to cert on ansible deployment host>
 
-# Quotas for the Octavia user - assuming active/passive topology
-octavia_num_instances: 10000 # 5000 LB in active/passive
-octavia_ram: "{{ (octavia_num_instances | int) * 1024 }}"
-octavia_gigabytes: "{{ (octavia_num_volumes | int) * (octavia_cinder_volume_size | int) }}"
-octavia_num_server_groups: "{{ ((octavia_num_instances | int) * 0.5) | int | abs }}"
-octavia_num_server_group_members: 50
-octavia_num_cores: "{{ octavia_num_instances }}"
-octavia_num_secgroups: "{{ (octavia_num_instances | int) * 1.5 | int | abs }}" # average 3 listener per lb
-octavia_num_ports: "{{ (octavia_num_instances | int) * 10 }}" # at least instances * 10
-octavia_num_security_group_rules: "{{ (octavia_num_secgroups | int) * 100 }}"
-octavia_num_volumes: "{{ octavia_num_instances }}"
-
 ## Tunable overrides
 octavia_octavia_conf_overrides: {}
 octavia_api_paste_ini_overrides: {}
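The removed defaults tied every quota to ``octavia_num_instances`` through fixed multipliers. As a non-authoritative sketch, the same arithmetic can be expressed in Python (the function name and defaults are illustrative; the values mirror the deleted Jinja expressions):

```python
def octavia_quotas(num_instances=10000, volume_size=20, server_group_members=50):
    """Reproduce the arithmetic of the removed role defaults (illustrative)."""
    num_volumes = num_instances
    num_secgroups = int(num_instances * 1.5)       # average 3 listeners per LB
    return {
        "instances": num_instances,                # 5000 LBs in active/passive
        "ram": num_instances * 1024,               # default amphora flavor: 1024 MB
        "gigabytes": num_volumes * volume_size,    # only relevant with Cinder
        "server_groups": int(num_instances * 0.5), # one group per amphora pair
        "server_group_members": server_group_members,
        "cores": num_instances,                    # default amphora flavor: 1 core
        "security_group": num_secgroups,
        "port": num_instances * 10,                # at least instances * 10
        "security_group_rule": num_secgroups * 100,
        "volumes": num_volumes,
    }
```

Operators carrying these variables today can feed the same numbers into the quota definition described in the documentation change below.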
@@ -34,6 +34,50 @@ OpenStack-Ansible deployment
 #. Run the haproxy-install.yml playbook to add the new octavia API endpoints
    to the load balancer.
 
+Define project quota for Amphora driver
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Amphora driver is the default option for spawning Load Balancers
+with Octavia. The driver relies on the OpenStack Nova/Neutron services to spawn
+VMs from a specialized image which serve as Load Balancers.
+
+These VMs are created in a ``service`` project by default, and tenants have
+no direct access to them.
+
+With that, the operator must ensure that the ``service`` project
+has sufficient quotas defined to handle all tenant Load Balancers in it.
+
+The suggested way of doing that is through leveraging the
+``openstack.osa.openstack_resources`` playbook and defining the following
+variables in ``user_variables.yml`` or ``group_vars/utility_all``:
+
+.. code-block:: yaml
+
+   # In case of `octavia_loadbalancer_topology` set to ACTIVE_STANDBY (default)
+   # each Load Balancer will create 2 VMs
+   _max_amphora_instances: 10000
+   openstack_user_identity:
+     quotas:
+       - name: "service"
+         # Default Amphora flavor is 1 Core, 1024MB RAM
+         cores: "{{ _max_amphora_instances }}"
+         ram: "{{ (_max_amphora_instances | int) * 1024 }}"
+         instances: "{{ _max_amphora_instances }}"
+         port: "{{ (_max_amphora_instances | int) * 10 }}"
+         server_groups: "{{ ((_max_amphora_instances | int) * 0.5) | int | abs }}"
+         server_group_members: 50
+         # A security group is created per Load Balancer listener
+         security_group: "{{ (_max_amphora_instances | int) * 1.5 | int | abs }}"
+         security_group_rule: "{{ ((_max_amphora_instances | int) * 1.5 | int | abs) * 100 }}"
+         # If `octavia_cinder_enabled: true` also define these
+         volumes: "{{ _max_amphora_instances }}"
+         # Volume size is defined with `octavia_cinder_volume_size` with default of 20
+         gigabytes: "{{ (_max_amphora_instances | int) * 20 }}"
+
+These values will be applied on running ``openstack-ansible openstack.osa.openstack_resources``,
+or as part of the ``openstack.osa.setup_openstack`` playbook.
+
+
 Setup a neutron network for use by octavia
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
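The documentation's example sizes quotas from an amphora VM count, and its comment notes that the default ACTIVE_STANDBY topology creates two VMs per load balancer. A small illustrative Python helper (names are hypothetical, not part of the role) for deriving ``_max_amphora_instances`` from a target load balancer count:

```python
def amphora_instances(num_load_balancers: int, topology: str = "ACTIVE_STANDBY") -> int:
    """Amphora VMs needed for a given number of load balancers.

    ACTIVE_STANDBY (the default topology) spawns a pair of amphorae
    per load balancer; SINGLE topology spawns one.
    """
    per_lb = 2 if topology == "ACTIVE_STANDBY" else 1
    return num_load_balancers * per_lb
```

For example, planning for 5000 load balancers in ACTIVE_STANDBY yields the 10000 instances used in the documentation's sample values.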
@@ -0,0 +1,24 @@
+---
+deprecations:
+  - |
+    Quota management for the Octavia service has been deprecated in favor of
+    a centralized approach through the ``openstack.osa.openstack_resources``
+    playbook.
+    As the default project name was ``service``, defining quotas inside of the
+    Octavia role was causing conflicts with other services (like Trove).
+
+    The respective variables were deprecated and have no effect:
+
+    * octavia_num_instances
+    * octavia_ram
+    * octavia_gigabytes
+    * octavia_num_server_groups
+    * octavia_num_server_group_members
+    * octavia_num_cores
+    * octavia_num_secgroups
+    * octavia_num_ports
+    * octavia_num_security_group_rules
+    * octavia_num_volumes
+
+    Please refer to the `Octavia documentation <https://docs.openstack.org/openstack-ansible-os_octavia/latest/configure-octavia.html>`_
+    for more information on how to manage service quotas.
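As a migration aid, each deprecated variable in the release note corresponds one-to-one to a quota key consumed by the new playbook (the pairing follows the removed task wiring in this change); a sketch of that mapping as a plain Python dict:

```python
# Mapping of deprecated os_octavia role variables to the quota keys
# used under `quotas:` in the openstack_resources identity structure.
DEPRECATED_VAR_TO_QUOTA_KEY = {
    "octavia_num_instances": "instances",
    "octavia_ram": "ram",
    "octavia_gigabytes": "gigabytes",
    "octavia_num_server_groups": "server_groups",
    "octavia_num_server_group_members": "server_group_members",
    "octavia_num_cores": "cores",
    "octavia_num_secgroups": "security_group",
    "octavia_num_ports": "port",
    "octavia_num_security_group_rules": "security_group_rule",
    "octavia_num_volumes": "volumes",
}
```

An operator with overrides for any of the old variables can carry the value over to the matching key in their user_variables quota definition.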
@@ -32,19 +32,6 @@
 openstack_resources_deploy_host: "{{ octavia_resources_deploy_host }}"
 openstack_resources_deploy_python_interpreter: "{{ octavia_resources_deploy_python_interpreter }}"
 openstack_resources_image: "{{ (octavia_download_artefact | bool) | ternary({'images': octavia_amp_image_resource}, {}) }}"
-openstack_resources_identity:
-  quotas:
-    - name: "{{ octavia_service_project_name }}"
-      cores: "{{ octavia_num_cores }}"
-      gigabytes: "{{ octavia_cinder_enabled | ternary(octavia_gigabytes, omit) }}"
-      instances: "{{ octavia_num_instances }}"
-      ram: "{{ octavia_ram }}"
-      server_groups: "{{ octavia_num_server_groups }}"
-      server_group_members: "{{ octavia_num_server_group_members }}"
-      security_group: "{{ octavia_num_secgroups }}"
-      security_group_rule: "{{ octavia_num_security_group_rules }}"
-      port: "{{ octavia_num_ports }}"
-      volumes: "{{ octavia_cinder_enabled | ternary(octavia_num_volumes, omit) }}"
 # Network Resources
 _octavia_networks:
   networks:
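The removed task wiring used Ansible's ``ternary(..., omit)`` filter to include the volume-related quotas only when Cinder is enabled. An equivalent conditional expressed as plain Python (function and argument names are illustrative only):

```python
def identity_quotas(cinder_enabled: bool, num_volumes: int, gigabytes: int, **base) -> dict:
    """Build a quota dict, adding volume-related keys only when Cinder
    is enabled -- mirroring the ternary(..., omit) pattern."""
    quotas = dict(base)
    if cinder_enabled:
        quotas["volumes"] = num_volumes
        quotas["gigabytes"] = gigabytes
    return quotas
```

In Ansible, ``omit`` drops the key from the task arguments entirely, which is what the ``if`` branch reproduces here.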