Auto-scale Compute based on Consumption of Resources
=====================================================

As a user of OpenStack, I want to define a logical group of compute resources that is increased or decreased automatically based on the consumption of discrete physical resources within my instances, for example CPU or memory utilization, disk IOPS, and so on.

Problem description
-------------------

In many ways, this is the basic use case for Auto-Scaling in OpenStack.

An OpenStack user, whether admin or operator, defines logic for triggering auto-scaling based on the consumption of resources in the cloud. The resources could be CPU cycles, storage, memory, network, or a combination of these. The logic typically defines both a scale-up and a scale-down threshold and includes an upper and a lower bound for the size of the group. An orchestration engine for the cloud is then instructed to perform the scale-up or scale-down operations as specified.
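
A minimal sketch of this decision logic in Python, assuming illustrative threshold and bound values rather than any particular OpenStack API:

.. code-block:: python

   # Illustrative sketch only: the names and values are assumptions,
   # not part of any OpenStack project's API.

   def desired_capacity(current_size, metric_value,
                        scale_up_threshold, scale_down_threshold,
                        min_size, max_size, step=1):
       """Return the new group size after one evaluation of the metric."""
       if metric_value > scale_up_threshold:
           new_size = current_size + step   # scale up
       elif metric_value < scale_down_threshold:
           new_size = current_size - step   # scale down
       else:
           new_size = current_size          # within the dead band: no change
       # Clamp the result to the upper and lower bounds of the group.
       return max(min_size, min(max_size, new_size))

   # Example: 4 instances with average CPU at 85 % scale up to 5,
   # but the group never grows beyond max_size.
   print(desired_capacity(4, 85.0, 80.0, 20.0, min_size=2, max_size=10))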

This use case was called out at the Denver 2019 PTG: https://etherpad.openstack.org/p/DEN-auto-scaling-SIG

OpenStack projects used
-----------------------

* ...
* ...

Inputs and decision-making
--------------------------

The decision to auto-scale can be driven by any monitored metric.

A few classic examples of metrics are:

* CPU utilization
* Memory utilization
* Disk IOPS

In most cases, a high and a low threshold are defined for the metric, corresponding to the scale-up and scale-down actions respectively.
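
The comparison is usually made against an aggregate of the metric over a recent window rather than a single sample, so that a momentary spike does not trigger scaling. The following Python sketch, with made-up sample values and thresholds, illustrates the idea:

.. code-block:: python

   # Illustrative only: the window, samples and thresholds are assumptions.
   from statistics import mean

   def evaluate(samples, high, low):
       """Map a window of metric samples to a scaling action."""
       value = mean(samples)
       if value >= high:
           return "scale_up"
       if value <= low:
           return "scale_down"
       return "no_action"

   print(evaluate([78, 85, 91], high=80, low=20))  # -> scale_up
   print(evaluate([15, 12, 18], high=80, low=20))  # -> scale_down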

Auto-scaling
------------

Existing implementation(s)
~~~~~~~~~~~~~~~~~~~~~~~~~~

* Monasca and Heat

  Monasca can monitor the physical resources and raise an alarm when resource usage crosses a threshold. The alarm notification can then trigger a Heat scaling policy and scale the group up or down appropriately (see the sketch after this list).

* <TODO: record other options>
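
As one possible illustration of the Monasca and Heat wiring above: an ``OS::Heat::ScalingPolicy`` resource exposes a pre-signed signal URL (its ``alarm_url`` attribute), and a Monasca webhook notification can POST to it when the alarm fires. The sketch below only makes that flow explicit; the URL and the notification payload fields are placeholders, not values from a real deployment.

.. code-block:: python

   # Sketch only: SCALE_UP_URL and the payload fields are placeholders.
   import json
   import urllib.request

   # In practice this would be the alarm_url of an OS::Heat::ScalingPolicy.
   SCALE_UP_URL = "https://heat.example.com/v1/signal/scale-up"

   def on_alarm_notification(payload):
       """Forward an ALARM-state notification to the Heat scaling policy."""
       if payload.get("state") != "ALARM":
           return  # only act when the threshold has actually been crossed
       request = urllib.request.Request(
           SCALE_UP_URL,
           data=json.dumps({}).encode(),
           headers={"Content-Type": "application/json"},
           method="POST",
       )
       urllib.request.urlopen(request)

In many deployments no relay code is needed at all: the Monasca notification target can simply be the scaling policy's signal URL. The sketch just makes the flow visible.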

Future work
~~~~~~~~~~~

Dependencies
------------