Remove installation guide for openSUSE/SLES
openSUSE stopped providing OpenStack packages some time ago.

Co-authored-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
Change-Id: I0912954a2c374a8fbd1b01a2823475a557742af5
@@ -1,143 +0,0 @@
Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compute node handles connectivity and security groups for instances.

Install the components
----------------------

.. code-block:: console

   # zypper install --no-recommends \
     openstack-neutron-openvswitch-agent bridge-utils

.. end

Configure the common component
------------------------------

The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, comment out any ``connection`` options
    because compute nodes do not directly access the database.

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
    account in RabbitMQ.

  * In the ``[oslo_concurrency]`` section, configure the lock path:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [oslo_concurrency]
       # ...
       lock_path = /var/lib/neutron/tmp

    .. end

Configure networking options
----------------------------

Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-obs`.

.. toctree::
   :maxdepth: 1

   compute-install-option1-obs.rst
   compute-install-option2-obs.rst

.. _neutron-compute-compute-obs:

Configure the Compute service to use the Networking service
-----------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       auth_url = http://controller:5000
       auth_type = password
       project_domain_name = Default
       user_domain_name = Default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    See the :nova-doc:`compute service configuration guide <configuration/config.html#neutron>`
    for the full set of options including overriding the service catalog
    endpoint URL if necessary.

Finalize installation
---------------------

#. The Networking service initialization scripts expect the variable
   ``NEUTRON_PLUGIN_CONF`` in the ``/etc/sysconfig/neutron`` file to
   reference the ML2 plug-in configuration file. Ensure that the
   ``/etc/sysconfig/neutron`` file contains the following:

   .. path /etc/sysconfig/neutron
   .. code-block:: ini

      NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"

   .. end

#. Restart the Compute service:

   .. code-block:: console

      # systemctl restart openstack-nova-compute.service

   .. end

#. Start the Open vSwitch agent and configure it to start when the
   system boots:

   .. code-block:: console

      # systemctl enable openstack-neutron-openvswitch-agent.service
      # systemctl start openstack-neutron-openvswitch-agent.service

   .. end
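
   Once the agent is running, you can confirm that it registered with the
   Networking service. Assuming the ``admin`` credentials are sourced on the
   controller node, the new compute node's Open vSwitch agent should appear
   in the agent list:

   .. code-block:: console

      $ openstack network agent list

   .. end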
@@ -1,70 +0,0 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configure the Networking components on a *compute* node.

Configure the Open vSwitch agent
--------------------------------

The Open vSwitch agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` file and
  complete the following actions:

  * In the ``[ovs]`` section, map the provider virtual network to the
    provider physical bridge:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [ovs]
       bridge_mappings = provider:PROVIDER_BRIDGE_NAME

    .. end

    Replace ``PROVIDER_BRIDGE_NAME`` with the name of the bridge connected to
    the underlying provider physical network.
    See :doc:`environment-networking-obs`
    and :doc:`../admin/deploy-ovs-provider` for more information.

  * Ensure that the ``PROVIDER_BRIDGE_NAME`` external bridge is created and
    that ``PROVIDER_INTERFACE_NAME`` is added to it:

    .. code-block:: bash

       # ovs-vsctl add-br $PROVIDER_BRIDGE_NAME
       # ovs-vsctl add-port $PROVIDER_BRIDGE_NAME $PROVIDER_INTERFACE_NAME

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure either the Open vSwitch native firewall driver or the hybrid
    iptables firewall driver:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = openvswitch
       #firewall_driver = iptables_hybrid

    .. end

  * If you use the hybrid iptables firewall driver, ensure that your
    Linux kernel supports network bridge filters by verifying that all of
    the following ``sysctl`` values are set to ``1``:

    .. code-block:: ini

       net.bridge.bridge-nf-call-iptables
       net.bridge.bridge-nf-call-ip6tables

    .. end

    To enable networking bridge support, the ``br_netfilter`` kernel
    module typically needs to be loaded. Check your operating system's
    documentation for additional details on enabling this module.
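
    As a sketch, on most distributions you can load the module and then
    check the current values (the ``net.bridge`` keys only appear after
    ``br_netfilter`` is loaded):

    .. code-block:: console

       # modprobe br_netfilter
       # sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

    .. end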

Return to *Networking compute node configuration*.
@@ -1,91 +0,0 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configure the Networking components on a *compute* node.

Configure the Open vSwitch agent
--------------------------------

The Open vSwitch agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` file and
  complete the following actions:

  * In the ``[ovs]`` section, map the provider virtual network to the
    provider physical bridge and configure the IP address of
    the physical network interface that handles overlay networks:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [ovs]
       bridge_mappings = provider:PROVIDER_BRIDGE_NAME
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS

    .. end

    Replace ``PROVIDER_BRIDGE_NAME`` with the name of the bridge connected to
    the underlying provider physical network.
    See :doc:`environment-networking-obs`
    and :doc:`../admin/deploy-ovs-provider` for more information.

    Also replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface to tunnel traffic to
    the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
    the management IP address of the compute node. See
    :doc:`environment-networking-obs` for more information.

  * Ensure that the ``PROVIDER_BRIDGE_NAME`` external bridge is created and
    that ``PROVIDER_INTERFACE_NAME`` is added to it:

    .. code-block:: bash

       # ovs-vsctl add-br $PROVIDER_BRIDGE_NAME
       # ovs-vsctl add-port $PROVIDER_BRIDGE_NAME $PROVIDER_INTERFACE_NAME

    .. end

  * In the ``[agent]`` section, enable VXLAN overlay networks and enable
    layer-2 population:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [agent]
       tunnel_types = vxlan
       l2_population = true

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure either the Open vSwitch native firewall driver or the hybrid
    iptables firewall driver:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = openvswitch
       #firewall_driver = iptables_hybrid

    .. end

  * If you use the hybrid iptables firewall driver, ensure that your
    Linux kernel supports network bridge filters by verifying that all of
    the following ``sysctl`` values are set to ``1``:

    .. code-block:: ini

       net.bridge.bridge-nf-call-iptables
       net.bridge.bridge-nf-call-ip6tables

    .. end

    To enable networking bridge support, the ``br_netfilter`` kernel
    module typically needs to be loaded. Check your operating system's
    documentation for additional details on enabling this module.

Return to *Networking compute node configuration*.
@@ -1,326 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prerequisites
-------------

Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``neutron`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE neutron;

     .. end

   * Grant proper access to the ``neutron`` database, replacing
     ``NEUTRON_DBPASS`` with a suitable password:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
          IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
          IDENTIFIED BY 'NEUTRON_DBPASS';

     .. end
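
     You can optionally confirm the grants before leaving the client:

     .. code-block:: console

        MariaDB [(none)]> SHOW GRANTS FOR 'neutron'@'localhost';

     .. end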

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only CLI
   commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``neutron`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt neutron

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | fdb0f541e28141719b6a43c8944bf1fb |
        | name                | neutron                          |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``neutron`` user:

     .. code-block:: console

        $ openstack role add --project service --user neutron admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``neutron`` service entity:

     .. code-block:: console

        $ openstack service create --name neutron \
          --description "OpenStack Networking" network

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Networking             |
        | enabled     | True                             |
        | id          | f71529314dab4a4d8eca427e701d209e |
        | name        | neutron                          |
        | type        | network                          |
        +-------------+----------------------------------+

     .. end

#. Create the Networking service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        network public http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network internal http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 09753b537ac74422a68d2d791cf3714f |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network admin http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 1ee14289c9374dffb5db92a5c112fc4e |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

   .. end
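
   Assuming the ``admin`` credentials are still sourced, you can list the
   endpoints you just created to confirm that all three interfaces exist:

   .. code-block:: console

      $ openstack endpoint list --service network

   .. end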

Configure networking options
----------------------------

You can deploy the Networking service using one of two architectures
represented by options 1 and 2.

Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks. There are no
self-service (private) networks, routers, or floating IP addresses. Only the
``admin`` or other privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks, including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease the space available for the payload or user data. Without
knowledge of the virtual network infrastructure, instances attempt to send
packets using the default Ethernet maximum transmission unit (MTU) of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
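
For example, VXLAN over IPv4 adds roughly 50 bytes of encapsulation
headers, so an instance on a 1500-byte physical network should use an MTU
of about 1450 bytes. In an image that ignores the DHCP MTU option, a boot
script could set this manually (the interface name ``eth0`` here is only
an example):

.. code-block:: console

   # ip link set dev eth0 mtu 1450

.. end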

.. note::

   Option 2 also supports attaching instances to provider networks.

Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-obs`.

.. toctree::
   :maxdepth: 1

   controller-install-option1-obs.rst
   controller-install-option2-obs.rst

.. _neutron-controller-metadata-agent-obs:

Configure the metadata agent
----------------------------

The metadata agent provides configuration information
such as credentials to instances.

* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the metadata host and shared
    secret:

    .. path /etc/neutron/metadata_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       nova_metadata_host = controller
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
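
    One way to generate a suitably random secret, assuming ``openssl`` is
    available, is:

    .. code-block:: console

       $ openssl rand -hex 10

    .. end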

Configure the Compute service to use the Networking service
-----------------------------------------------------------

.. note::

   The Nova compute service must be installed to complete this step.
   For more details see the compute install guide found under the
   `Installation Guides` section of the
   `docs website <https://docs.openstack.org>`_.

* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:

  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       auth_url = http://controller:5000
       auth_type = password
       project_domain_name = Default
       user_domain_name = Default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS
       service_metadata_proxy = true
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.

    See the :nova-doc:`compute service configuration guide <configuration/config.html#neutron>`
    for the full set of options including overriding the service catalog
    endpoint URL if necessary.

Finalize installation
---------------------

.. note::

   SLES enables AppArmor by default and restricts dnsmasq. You need to
   either completely disable AppArmor or disable only the dnsmasq
   profile:

   .. code-block:: console

      # ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/
      # systemctl restart apparmor

   .. end
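
   You can confirm that the profile is no longer loaded, assuming the
   ``aa-status`` utility is installed:

   .. code-block:: console

      # aa-status | grep dnsmasq

   .. end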

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Networking services and configure them to start when the system
   boots.

   For both networking options:

   .. code-block:: console

      # systemctl enable openstack-neutron.service \
        openstack-neutron-openvswitch-agent.service \
        openstack-neutron-dhcp-agent.service \
        openstack-neutron-metadata-agent.service
      # systemctl start openstack-neutron.service \
        openstack-neutron-openvswitch-agent.service \
        openstack-neutron-dhcp-agent.service \
        openstack-neutron-metadata-agent.service

   .. end

   For networking option 2, also enable and start the layer-3 service:

   .. code-block:: console

      # systemctl enable openstack-neutron-l3-agent.service
      # systemctl start openstack-neutron-l3-agent.service

   .. end
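
#. Once the services are running, you can check that each agent registered
   successfully. With the ``admin`` credentials sourced, each started agent
   should report ``:-)`` (alive) in the agent list:

   .. code-block:: console

      $ openstack network agent list

   .. end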
@@ -1,311 +0,0 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # zypper install --no-recommends openstack-neutron \
     openstack-neutron-server openstack-neutron-openvswitch-agent \
     openstack-neutron-dhcp-agent openstack-neutron-metadata-agent \
     bridge-utils

.. end

Configure the server component
------------------------------

The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and disable additional plug-ins:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins =

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       www_authenticate_uri = http://controller:5000
       auth_url = http://controller:5000
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = Default
       user_domain_name = Default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:5000
       auth_type = password
       project_domain_name = Default
       user_domain_name = Default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

  * In the ``[oslo_concurrency]`` section, configure the lock path:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [oslo_concurrency]
       # ...
       lock_path = /var/lib/neutron/tmp

    .. end

Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Open vSwitch mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan

    .. end

  * In the ``[ml2]`` section, disable self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types =

    .. end

  * In the ``[ml2]`` section, enable the Open vSwitch mechanism:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = openvswitch

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

Configure the Open vSwitch agent
--------------------------------

The Open vSwitch agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` file and
  complete the following actions:

  * In the ``[ovs]`` section, map the provider virtual network to the
    provider physical bridge:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [ovs]
       bridge_mappings = provider:PROVIDER_BRIDGE_NAME

    .. end

    Replace ``PROVIDER_BRIDGE_NAME`` with the name of the bridge connected to
    the underlying provider physical network.
    See :doc:`environment-networking-obs`
    and :doc:`../admin/deploy-ovs-provider` for more information.

  * Ensure that the ``PROVIDER_BRIDGE_NAME`` external bridge is created and
    that ``PROVIDER_INTERFACE_NAME`` is added to it:

    .. code-block:: bash

       # ovs-vsctl add-br $PROVIDER_BRIDGE_NAME
       # ovs-vsctl add-port $PROVIDER_BRIDGE_NAME $PROVIDER_INTERFACE_NAME

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure either the Open vSwitch native firewall driver or the hybrid
    iptables firewall driver:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = openvswitch
       #firewall_driver = iptables_hybrid

    .. end

  * If you use the hybrid iptables firewall driver, ensure that your
    Linux kernel supports network bridge filters by verifying that all of
    the following ``sysctl`` values are set to ``1``:

    .. code-block:: ini

       net.bridge.bridge-nf-call-iptables
       net.bridge.bridge-nf-call-ip6tables

    .. end

    To enable networking bridge support, the ``br_netfilter`` kernel
    module typically needs to be loaded. Check your operating system's
    documentation for additional details on enabling this module.

Configure the DHCP agent
------------------------

The DHCP agent provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Dnsmasq DHCP driver, and
    enable isolated metadata so instances on provider networks can access
    metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Create the provider network
---------------------------

Follow `this provider network document <https://docs.openstack.org/install-guide/launch-instance-networks-provider.html>`_
from the General Installation Guide.

Return to *Networking controller node configuration*.
@@ -1,347 +0,0 @@
|
||||
Networking Option 2: Self-service networks
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Install and configure the Networking components on the *controller* node.
|
||||
|
||||
Install the components
|
||||
----------------------
|
||||
|
||||
|
||||
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# zypper install --no-recommends openstack-neutron \
|
||||
openstack-neutron-server openstack-neutron-openvswitch-agent \
|
||||
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
|
||||
openstack-neutron-metadata-agent bridge-utils dnsmasq
|
||||
|
||||
.. end
|
||||
|
||||
|
||||
|
||||
Configure the server component
|
||||
------------------------------
|
||||

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and router service:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins = router

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       www_authenticate_uri = http://controller:5000
       auth_url = http://controller:5000
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = Default
       user_domain_name = Default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:5000
       auth_type = password
       project_domain_name = Default
       user_domain_name = Default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

  * In the ``[oslo_concurrency]`` section, configure the lock path:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [oslo_concurrency]
       # ...
       lock_path = /var/lib/neutron/tmp

    .. end
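Both URL-valued options configured above follow the same ``scheme://user:password@host[/path]`` shape. The sketch below shows how the strings are assembled from their parts; ``NEUTRON_DBPASS`` and ``RABBIT_PASS`` are the same placeholders used in the text, not real credentials:

```shell
# Sketch: assemble the connection and transport URLs from their components.
# NEUTRON_DBPASS and RABBIT_PASS are placeholders to be substituted.
db_pass=NEUTRON_DBPASS
rabbit_pass=RABBIT_PASS
connection="mysql+pymysql://neutron:${db_pass}@controller/neutron"
transport_url="rabbit://openstack:${rabbit_pass}@controller"
echo "$connection"
echo "$transport_url"
```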

Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Open vSwitch mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan,vxlan

    .. end

  * In the ``[ml2]`` section, enable VXLAN self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types = vxlan

    .. end

  * In the ``[ml2]`` section, enable the Open vSwitch and layer-2 population
    mechanisms:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = openvswitch,l2population

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
    range for self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_vxlan]
       # ...
       vni_ranges = 1:1000

    .. end
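The ``vni_ranges`` value is a ``MIN:MAX`` pair. A quick sketch of how the value splits into the first and last VXLAN network identifier reserved for self-service networks:

```shell
# Sketch: vni_ranges is MIN:MAX; shell parameter expansion splits it into the
# first and last VXLAN network identifier available to self-service networks.
range="1:1000"
vni_min=${range%%:*}
vni_max=${range##*:}
echo "VXLAN IDs ${vni_min} through ${vni_max}"
```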

Configure the Open vSwitch agent
--------------------------------

The Open vSwitch agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` file and
  complete the following actions:

  * In the ``[ovs]`` section, map the provider virtual network to the
    provider physical bridge and configure the IP address of
    the physical network interface that handles overlay networks:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [ovs]
       bridge_mappings = provider:PROVIDER_BRIDGE_NAME
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS

    .. end

    Replace ``PROVIDER_BRIDGE_NAME`` with the name of the bridge connected to
    the underlying provider physical network.
    See :doc:`environment-networking-obs`
    and :doc:`../admin/deploy-ovs-provider` for more information.

    Also replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface to tunnel traffic to
    the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
    the management IP address of the controller node. See
    :doc:`environment-networking-obs` for more information.

  * Ensure that the ``PROVIDER_BRIDGE_NAME`` external bridge is created and
    that ``PROVIDER_INTERFACE_NAME`` is added to that bridge:

    .. code-block:: bash

       # ovs-vsctl add-br $PROVIDER_BRIDGE_NAME
       # ovs-vsctl add-port $PROVIDER_BRIDGE_NAME $PROVIDER_INTERFACE_NAME

    .. end

  * In the ``[agent]`` section, enable VXLAN overlay networks and enable
    layer-2 population:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [agent]
       tunnel_types = vxlan
       l2_population = true

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure either the Open vSwitch native or the hybrid iptables firewall
    driver:

    .. path /etc/neutron/plugins/ml2/openvswitch_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = openvswitch
       #firewall_driver = iptables_hybrid

    .. end

  * If you use the hybrid iptables firewall driver, ensure that your
    Linux operating system kernel supports network bridge filters by verifying
    that all of the following ``sysctl`` values are set to ``1``:

    .. code-block:: ini

       net.bridge.bridge-nf-call-iptables
       net.bridge.bridge-nf-call-ip6tables

    .. end

    To enable networking bridge support, typically the ``br_netfilter`` kernel
    module needs to be loaded. Check your operating system's documentation for
    additional details on enabling this module.
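On a live host, the check above amounts to loading the module and reading both sysctls (for example, ``modprobe br_netfilter`` followed by ``sysctl net.bridge.bridge-nf-call-iptables``). A self-contained sketch of scripting that verification, with sample ``sysctl`` output inlined so the logic can be seen in isolation:

```shell
# Sketch: verify both bridge-netfilter sysctls read 1. The sample output
# below stands in for what `sysctl` would print on a live host.
sample='net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1'
bad=$(printf '%s\n' "$sample" | awk '$3 != 1 {print $1}')
if [ -z "$bad" ]; then echo "bridge filters enabled"; else echo "missing: $bad"; fi
```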

Configure the layer-3 agent
---------------------------

The Layer-3 (L3) agent provides routing and NAT services for
self-service virtual networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file if additional customization
  is needed.

Configure the DHCP agent
------------------------

The DHCP agent provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Dnsmasq DHCP driver and enable
    isolated metadata so instances on provider networks can access metadata
    over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.
Compute node
~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.31

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

   .. note::

      Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
     contain the following:

     .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
     .. code-block:: bash

        STARTMODE='auto'
        BOOTPROTO='static'

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``compute1``.

#. .. include:: shared/edit_hosts_file.txt
Controller node
~~~~~~~~~~~~~~~

Configure network interfaces
----------------------------

#. Configure the first interface as the management interface:

   IP address: 10.0.0.11

   Network mask: 255.255.255.0 (or /24)

   Default gateway: 10.0.0.1

#. The provider interface uses a special configuration without an IP
   address assigned to it. Configure the second interface as the provider
   interface:

   Replace ``INTERFACE_NAME`` with the actual interface name. For example,
   *eth1* or *ens224*.

   * Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
     contain the following:

     .. path /etc/sysconfig/network/ifcfg-INTERFACE_NAME
     .. code-block:: ini

        STARTMODE='auto'
        BOOTPROTO='static'

     .. end

#. Reboot the system to activate the changes.

Configure name resolution
-------------------------

#. Set the hostname of the node to ``controller``.

#. .. include:: shared/edit_hosts_file.txt
Host networking
~~~~~~~~~~~~~~~

After installing the operating system on each node for the architecture
that you choose to deploy, you must configure the network interfaces. We
recommend that you disable any automated network management tools and
manually edit the appropriate configuration files for your distribution.
For more information on how to configure networking on your
distribution, see the `SLES 12
<https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_basicnet_manconf.html>`__
or `openSUSE
<https://doc.opensuse.org/documentation/leap/reference/html/book-reference/cha-network.html>`__
documentation.

All nodes require Internet access for administrative purposes such as package
installation, security updates, Domain Name System (DNS), and
Network Time Protocol (NTP). In most cases, nodes should obtain
Internet access through the management network interface.
To highlight the importance of network separation, the example architectures
use `private address space <https://tools.ietf.org/html/rfc1918>`__ for the
management network and assume that the physical network infrastructure
provides Internet access via Network Address Translation (NAT)
or other methods. The example architectures use routable IP address space for
the provider (external) network and assume that the physical network
infrastructure provides direct Internet access.

In the provider networks architecture, all instances attach directly
to the provider network. In the self-service (private) networks architecture,
instances can attach to a self-service or provider network. Self-service
networks can reside entirely within OpenStack or provide some level of external
network access using Network Address Translation (NAT) through
the provider network.

.. _figure-networklayout:

.. figure:: figures/networklayout.png
   :alt: Network layout

The example architectures assume use of the following networks:

* Management on 10.0.0.0/24 with gateway 10.0.0.1

  This network requires a gateway to provide Internet access to all
  nodes for administrative purposes such as package installation,
  security updates, Domain Name System (DNS), and
  Network Time Protocol (NTP).

* Provider on 203.0.113.0/24 with gateway 203.0.113.1

  This network requires a gateway to provide Internet access to
  instances in your OpenStack environment.

You can modify these ranges and gateways to work with your particular
network infrastructure.

Network interface names vary by distribution. Traditionally,
interfaces use ``eth`` followed by a sequential number. To cover all
variations, this guide refers to the first interface as the
interface with the lowest number and the second interface as the
interface with the highest number.

Unless you intend to use the exact configuration provided in this
example architecture, you must modify the networks in this procedure to
match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the ``controller`` name must
resolve to ``10.0.0.11``, the IP address of the management interface on
the controller node.
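The required name-to-address mappings can be sketched with a scratch hosts file; on real nodes these entries belong in ``/etc/hosts``, as covered by the per-node name resolution steps:

```shell
# Sketch: the management-network name/address pairs from the example
# architecture, written to a scratch file (real nodes use /etc/hosts).
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
10.0.0.11 controller
10.0.0.31 compute1
EOF
controller_ip=$(awk '$2 == "controller" {print $1}' "$hosts")
echo "controller resolves to $controller_ip"
```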

.. warning::

   Reconfiguring network interfaces will interrupt network
   connectivity. We recommend using a local terminal session for these
   procedures.

.. note::

   Your distribution enables a restrictive firewall by
   default. During the installation process, certain steps will fail
   unless you alter or disable the firewall. For more information
   about securing your environment, refer to the `OpenStack Security
   Guide <https://docs.openstack.org/security-guide/>`_.

.. toctree::
   :maxdepth: 1

   environment-networking-controller-obs.rst
   environment-networking-compute-obs.rst
   environment-networking-storage-cinder.rst
   environment-networking-verify-obs.rst
Verify connectivity
-------------------

We recommend that you verify network connectivity to the Internet and
among the nodes before proceeding further.

#. From the *controller* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *controller* node, test access to the management interface on the
   *compute* node:

   .. code-block:: console

      # ping -c 4 compute1

      PING compute1 (10.0.0.31) 56(84) bytes of data.
      64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

      --- compute1 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

#. From the *compute* node, test access to the Internet:

   .. code-block:: console

      # ping -c 4 openstack.org

      PING openstack.org (174.143.194.225) 56(84) bytes of data.
      64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
      64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
      64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

      --- openstack.org ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3022ms
      rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms

   .. end

#. From the *compute* node, test access to the management interface on the
   *controller* node:

   .. code-block:: console

      # ping -c 4 controller

      PING controller (10.0.0.11) 56(84) bytes of data.
      64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
      64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
      64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
      64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

      --- controller ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3000ms
      rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms

   .. end

.. note::

   Your distribution enables a restrictive firewall by
   default. During the installation process, certain steps will fail
   unless you alter or disable the firewall. For more information
   about securing your environment, refer to the `OpenStack Security
   Guide <https://docs.openstack.org/security-guide/>`_.
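With several nodes, it can help to script this check rather than reading each ping summary by eye. A sketch that extracts the packet-loss percentage from a summary line; the sample line mirrors the output shown above, and a real script would instead capture the output of ``ping -c 4 <host>``:

```shell
# Sketch: pull the packet-loss percentage out of a ping summary line so the
# connectivity check can be scripted across all nodes.
summary='4 packets transmitted, 4 received, 0% packet loss, time 3000ms'
loss=$(printf '%s\n' "$summary" | sed -E 's/.* ([0-9]+)% packet loss.*/\1/')
[ "$loss" -eq 0 ] && echo "no packet loss"
```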

=====================================
Networking service Installation Guide
=====================================

.. toctree::

   overview.rst
   common/get-started-networking.rst
   concepts.rst
   install-obs.rst
   install-rdo.rst
   install-ubuntu.rst
   ovn/index.rst
.. _networking-obs:

============================================================
Install and configure for openSUSE and SUSE Linux Enterprise
============================================================

.. toctree::
   :maxdepth: 2

   environment-networking-obs.rst
   controller-install-obs.rst
   compute-install-obs.rst
   verify.rst