Clean-up misc eventlet references

Also does some minor doc revisions to provide improved clarity,
and some minor follow-up fixes related to prior changes.

Change-Id: I0409da2ad45df06f2dbd1c5cd3c2afd83ec10c32
Signed-off-by: Julia Kreger <juliaashleykreger@gmail.com>
commit 0c70327a65
parent 9cb25d3e3a
Author: Julia Kreger
Date: 2025-07-22 15:26:20 -07:00

11 changed files with 19 additions and 56 deletions

@@ -53,9 +53,8 @@ trade-offs.
 as prior requests complete. In environments with long running synchronous
 calls, such as use of the vendor passthru interface, this can be very
 problematic.
-* As a combined ``ironic`` process. In this case, green threads_ are used,
-  which allows for a smaller memory footprint at the expense of only using
-  one CPU core.
+* As a combined ``ironic`` process. In this case, a single primary process
+  with two worker sub-processes are used.

 When the webserver is launched by the API process directly, the default is
 based upon the number of CPU sockets in your machine.
@@ -63,8 +62,7 @@ based upon the number of CPU sockets in your machine.
 When launching using uwsgi, this will entirely vary upon your configuration,
 but balancing workers/threads based upon your load and needs is highly
 advisable. Each worker process is unique and consumes far more memory than
-a comparable number of worker threads. At the same time, the scheduler will
-focus on worker processes as the threads are greenthreads.
+a comparable number of worker threads.

 .. note::
    Host operating systems featuring in-memory de-duplication should see
@@ -121,12 +119,12 @@ structure and layout, and what deploy interface is being used.
 Threads
 -------

-The conductor uses green threads based on Eventlet_ project to allow a very
-high concurrency while keeping the memory footprint low. When a request comes
-from the API to the conductor over the RPC, the conductor verifies it, acquires
-a node-level lock (if needed) and launches a processing thread for further
-handling. The maximum number of such threads is limited to the value of
-:oslo.config:option:`conductor.workers_pool_size` configuration option.
+The conductor uses python threads to enable concurrency. When a
+request comes from the API to the conductor over the RPC, the conductor
+verifies it, acquires a node-level lock (if needed) and launches a processing
+thread for further handling. The maximum number of such threads is limited to
+the value of :oslo.config:option:`conductor.workers_pool_size`
+configuration option.

 .. note::
    Some workers are always or regularly occupied by internal processes, e.g.
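
As a rough illustration of the model the updated paragraph describes (a bounded pool of plain Python threads servicing RPC requests), a minimal sketch might look like the following. It is not Ironic's actual conductor code, and ``WORKERS_POOL_SIZE``, ``handle_rpc_request`` and ``do_work`` are hypothetical names:

    from concurrent.futures import ThreadPoolExecutor

    # Fixed-size pool of native threads; the size plays the role of the
    # conductor.workers_pool_size option (300 is an arbitrary example value).
    WORKERS_POOL_SIZE = 300
    _pool = ThreadPoolExecutor(max_workers=WORKERS_POOL_SIZE)

    def handle_rpc_request(node_uuid, do_work):
        # Verify the request and take the node-level lock here (omitted),
        # then hand the long-running part to a worker thread and return.
        return _pool.submit(do_work, node_uuid)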
@@ -156,8 +154,6 @@ reserved for API requests and other critical tasks. Periodic tasks and agent
 heartbeats cannot use them. This ensures that the API stays responsive even
 under extreme internal load.

-.. _eventlet: https://eventlet.net/
-
 Database
 ========

@@ -14,15 +14,6 @@
 import os
 import sys

-import eventlet
-
-# NOTE(dims): monkey patch subprocess to prevent failures in latest eventlet
-# See https://github.com/eventlet/eventlet/issues/398
-try:
-    eventlet.monkey_patch(subprocess=True)
-except TypeError:
-    pass

 # -- General configuration ----------------------------------------------------

 # If extensions (or modules to document with autodoc) are in another directory,

@@ -21,9 +21,9 @@ backend.init_backend(backend.BackendType.THREADING)
 from ironic.common import i18n  # noqa

-# NOTE(TheJulia): We are setting a default thread stack size to for all
+# NOTE(TheJulia): We are setting a default thread stack size for all the
 # following thread invocations. Ultimately, while the python minimum is
-# any positive number with a minimum of 32760 Bytes, in 4096 Byte
+# any positive number with a minimum of 32768 Bytes, in 4096 Byte
 # increments, this appears to work well in basic benchmarking.
 threading.stack_size(
     os.environ.get('IRONIC_THREAD_STACK_SIZE', 65536))
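
A standalone sketch of the same stack-size setup, outside of the Ironic entry point. Note that ``threading.stack_size()`` takes an int, so an environment override needs an explicit conversion; the ``int()`` call below is part of this sketch, not of the change above:

    import os
    import threading

    # Set the default stack size (in bytes) for threads created after this
    # call; 65536 mirrors the default shown in the hunk above.
    threading.stack_size(int(os.environ.get('IRONIC_THREAD_STACK_SIZE', 65536)))

    worker = threading.Thread(target=lambda: None)
    worker.start()
    worker.join()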

@@ -392,8 +392,8 @@ class BaseDriverFactory(object):
     @classmethod
     def _init_extension_manager(cls):
-        # NOTE(tenbrae): Use lockutils to avoid a potential race in eventlet
+        # NOTE(tenbrae): Use lockutils to avoid a potential race
         # that might try to create two driver factories.
         with lockutils.lock(cls._entrypoint_name, do_log=False):
             # NOTE(tenbrae): In case multiple greenthreads queue up on this
             # lock before _extension_manager is initialized, prevent
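
The comment retained above refers to a common double-checked initialization pattern: take a named lock, then re-check state inside it so that threads which queued on the lock do not build a second object. A minimal sketch, using a hypothetical lock name and a placeholder object rather than Ironic's real extension manager:

    from oslo_concurrency import lockutils

    _extension_manager = None

    def _init_extension_manager():
        global _extension_manager
        if _extension_manager is not None:
            return _extension_manager
        with lockutils.lock('example-driver-factory', do_log=False):
            # Another thread may have completed initialization while we
            # were waiting on the lock; only create the object once.
            if _extension_manager is None:
                _extension_manager = object()  # placeholder
        return _extension_manager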

@@ -638,11 +638,9 @@ class TaskManager(object):
             # - background task finished with no errors.
             # - background task has crashed with exception.
             # - callback was added after the background task has
-            # finished or crashed. While eventlet currently doesn't
-            # schedule the new thread until the current thread blocks
-            # for some reason, this is true.
+            # finished or crashed.
             # All of the above are asserted in tests such that we'll
-            # catch if eventlet ever changes this behavior.
+            # catch if the behavior changes.
             fut = None
             try:
                 fut = self._spawn_method(*self._spawn_args,

@@ -655,7 +653,7 @@ class TaskManager(object):
                 # Don't unlock! The unlock will occur when the
                 # thread finishes.
                 # NOTE(yuriyz): A race condition with process_event()
-                # in callback is possible here if eventlet changes behavior.
+                # in callback is possible here.
                 # E.g., if the execution of the new thread (that handles the
                 # event processing) finishes before we get here, that new
                 # thread may emit the "end" notification before we emit the
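
The callback behaviour these comments rely on (the callback runs whether the background task finished cleanly, raised, or had already completed before the callback was attached) is also how standard-library futures behave. A minimal sketch using only the standard library, not the TaskManager's actual spawn machinery:

    from concurrent.futures import ThreadPoolExecutor

    def _on_task_done(fut):
        # Called on clean completion, on failure, and immediately if the
        # future had already finished when the callback was attached.
        error = fut.exception()
        if error is not None:
            print("background task failed: %s" % error)
        else:
            print("background task finished")

    with ThreadPoolExecutor(max_workers=1) as pool:
        fut = pool.submit(lambda: 42)
        fut.add_done_callback(_on_task_done)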

@@ -86,7 +86,6 @@ def update_opt_defaults():
        'oslo.messaging=INFO',
        'oslo_messaging=INFO',
        'stevedore=INFO',
-        'eventlet.wsgi.server=INFO',
        'iso8601=WARNING',
        'requests=WARNING',
        'urllib3.connectionpool=WARNING',

@@ -25,8 +25,6 @@ from urllib import parse as urlparse
 from oslo_log import log
 from oslo_utils import encodeutils
-from oslo_utils import eventletutils
-from oslo_utils import importutils

 import websockify
 from websockify import websockifyserver

@@ -115,19 +113,6 @@ class IronicProxyRequestHandler(websockify.ProxyRequestHandler):
     def new_websocket_client(self):
         """Called after a new WebSocket connection has been established."""
-        # TODO(TheJulia): Once we remove eventlet support overall, we can
-        # remove this code down to the use_hub() invocation. This is because
-        # there really is no intermediate state for this code and we either
-        # need to launch without eventlet, or invoke code along these lines
-        # to invoke eventlet.hubs.use_hub() for the service to operate
-        # properly.
-        eventlet = importutils.try_import('eventlet')
-        if eventlet and eventletutils.is_monkey_patched("thread"):
-            # If eventlet monkey patching has been invoked,
-            # reopen the eventlet hub to make sure we don't share an epoll
-            # fd with parent and/or siblings, which would be bad
-            eventlet.hubs.use_hub()
         # The ironic expected behavior is to have token
         # passed to the method GET of the request
         qs = urlparse.parse_qs(urlparse.urlparse(self.path).query)

@@ -36,7 +36,6 @@ import subprocess
 import tempfile
 import time

-from eventlet.green import subprocess as green_subprocess
 from oslo_concurrency import processutils
 from oslo_log import log as logging
 from oslo_utils import excutils

@@ -752,10 +751,7 @@ def _set_and_wait(task, power_action, driver_info, timeout=None):
         _exec_ipmitool(driver_info, cmd)
     except (exception.PasswordFileFailedToCreate,
             processutils.ProcessExecutionError,
-            subprocess.TimeoutExpired,
-            # NOTE(TheJulia): Remove once we remove the eventlet support.
-            # https://github.com/eventlet/eventlet/issues/624
-            green_subprocess.TimeoutExpired) as e:
+            subprocess.TimeoutExpired) as e:
         LOG.warning("IPMI power action %(cmd)s failed for node %(node_id)s "
                     "with error: %(error)s.",
                     {'node_id': driver_info['uuid'], 'cmd': cmd, 'error': e})
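
With plain Python threading, the standard library's ``subprocess.TimeoutExpired`` is the only timeout exception left to handle, which is what the simplified ``except`` clause above relies on. A tiny standalone sketch; the command and timeout values are arbitrary:

    import subprocess

    try:
        # Deliberately times out: the command sleeps longer than the timeout.
        subprocess.run(['sleep', '5'], timeout=1, check=True)
    except subprocess.TimeoutExpired as exc:
        print('command timed out: %s' % exc)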

@@ -156,7 +156,8 @@ class ServiceSetUpMixin(object):
         """Stand up a service, much like conductor base_manager.

         Ironic is a complex service, and the reality is that threading
-        in a post-eventlet world makes things far more complicated.
+        makes things far more complicated.
         The fun thing is that it is not actually that more complicated,
         but that we need to do things sanely and different for service
         startup than we need to do to predicate test setup. Largely around

@@ -30,8 +30,6 @@ from ironic.tests.unit.objects import utils as obj_utils
 INFO_DICT = db_utils.get_test_redfish_info()


-@mock.patch('oslo_utils.eventletutils.EventletEvent.wait',
-            lambda *args, **kwargs: None)
 class RedfishPowerTestCase(db_base.DbTestCase):

     def setUp(self):

@@ -6,7 +6,6 @@ pbr>=6.0.0 # Apache-2.0
 SQLAlchemy>=1.4.0 # MIT
 alembic>=1.4.2 # MIT
 automaton>=1.9.0 # Apache-2.0
-eventlet>=0.30.1 # MIT
 WebOb>=1.7.1 # MIT
 keystoneauth1>=4.2.0 # Apache-2.0
 stevedore>=1.29.0 # Apache-2.0