Install and configure a compute node for openSUSE and SUSE Linux Enterprise
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.
Note
This section assumes that you are following the instructions in this
guide step-by-step to configure the first compute node. If you want to
configure additional compute nodes, prepare them in a similar fashion to
the first compute node in the example architectures
<overview-example-architectures> section. Each additional
compute node requires a unique IP address.
Install and configure components
Install the packages:
# zypper install openstack-nova-compute genisoimage qemu-kvm libvirt
Edit the /etc/nova/nova.conf file and complete the following actions:
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, set the compute_driver option:
[DEFAULT]
# ...
compute_driver = libvirt.LibvirtDriver
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ. (An optional reachability check for the message queue appears after these configuration steps.)
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
Note
Comment out or remove any other options in the [keystone_authtoken] section.
In the [service_user] section, configure service user tokens <service_user_token>:
[service_user]
send_service_user_token = true
auth_url = https://controller/identity
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture <overview-example-architectures>. (The optional checks after these configuration steps include a command to list the node's addresses.)
Configure the [neutron] section of /etc/nova/nova.conf. Refer to the Networking service install guide <install/compute-install-obs.html> for more details.
In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
Note
If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
(An optional reachability check for this endpoint appears after these configuration steps.)
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/run/nova
In the [placement] section, configure the Placement API:
[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service. Comment out any other options in the [placement] section.
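The following optional checks, run on the compute node, help verify some of the settings above. This is a minimal sketch: it assumes the nc (netcat) and curl utilities are installed and that the management interface is named eth0, so adjust the names for your environment.
Confirm the node can reach the RabbitMQ listener on the controller:
$ nc -zv controller 5672
List the IPv4 addresses on the management interface to double-check the my_ip value:
$ ip -4 addr show eth0
Confirm the Image service API endpoint is reachable; the root URL returns a JSON list of supported API versions:
$ curl http://controller:9292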
Ensure the kernel module nbd is loaded:
# modprobe nbd
Ensure the module loads on every boot by adding nbd to the /etc/modules-load.d/nbd.conf file.
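For example, the file can be created and the currently loaded module verified as follows; this is one approach, and any text editor works equally well:
# echo nbd > /etc/modules-load.d/nbd.conf
# lsmod | grep nbd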
Finalize installation
Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM. Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:
[libvirt]
# ...
virt_type = qemu
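As an optional cross-check of the egrep result, lscpu also reports the virtualization extension, if any; the exact output varies between CPU vendors and distributions:
$ lscpu | grep -i virtualization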
Start the Compute service including its dependencies and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
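You can then confirm that both services are running; systemctl prints active for each unit that started successfully:
# systemctl is-active libvirtd.service openstack-nova-compute.service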
Note
If the nova-compute service fails to start, check
/var/log/nova/nova-compute.log. The error message
AMQP server on controller:5672 is unreachable likely
indicates that the firewall on the controller node is preventing access
to port 5672. Configure the firewall to open port 5672 on the controller
node and restart the nova-compute service on the compute
node.
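For example, if the controller node runs firewalld (an assumption; your deployment may use a different firewall), the port could be opened as follows:
# firewall-cmd --permanent --add-port=5672/tcp
# firewall-cmd --reload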
Add the compute node to the cell database
Important
Run the following commands on the controller node.
Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+-------+--------------+------+-------+---------+----------------------------+
| ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
+----+-------+--------------+------+-------+---------+----------------------------+
| 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
+----+-------+--------------+------+-------+---------+----------------------------+
Discover compute hosts:
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Note
When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:
[scheduler]
discover_hosts_in_cells_interval = 300
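To confirm that a newly discovered compute node was mapped, you can list the hosts known to each cell on the controller node:
# su -s /bin/sh -c "nova-manage cell_v2 list_hosts" nova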