The pollsters in Ceilometer are grouped in "sources" (a.k.a. "polling tasks"). Normally, people group pollsters into "sources" by interval, for instance, putting together all pollsters that should gather data every 10 minutes, and so on. Other people configure the "sources" to represent a polling context, such as grouping together all pollsters that collect data from instances, all pollsters that collect data from routers, load balancers, RadosGW, and so on.

Each "source" definition is processed in its own thread; therefore, "sources" are processed in parallel. However, the pollsters inside a "source" are processed serially, in that same thread. This can be a problem if a "source" has many pollsters and their data collection and processing take a while to finish. Of course, one can take it to the extreme and configure only one pollster per "source"; however, that is not a very interesting solution and would make the configuration a bit odd.

This patch proposes a configuration option that enables operators to define the number of threads to use when processing the pollsters of a "source". The value one (1) means that the processing is serial (not ordered!). The value zero (0) means that we will use as many threads as there are pollsters configured in the polling task. Any other positive integer sets an upper bound on the number of threads used to process pollsters in parallel.

One must bear in mind that using more than one thread might not take full advantage of the discovery cache and pollster cache processes; it is possible, though, to improve/develop pollsters that synchronize themselves on the cache objects.

Change-Id: I80b2f18a70cea1ab6e31b37e75ac93ee897e6cb4
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
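For illustration, here is a minimal polling.yaml sketch of the two grouping styles described above; the source names are hypothetical, and the meter lists are just examples, not part of this patch:

    ---
    sources:
        # Grouped by interval: all meters polled every 10 minutes,
        # regardless of what they measure.
        - name: every_10_minutes
          interval: 600
          meters:
            - cpu
            - memory.usage
            - network.incoming.bytes
        # Grouped by polling context: all RadosGW-related meters together.
        - name: radosgw_context
          interval: 300
          meters:
            - radosgw.objects
            - radosgw.usage

With the option introduced here set to, say, 3, up to three pollsters of a given source would run in parallel instead of one after the other; with 0, the "every_10_minutes" source above would get one thread per pollster. (The exact configuration section where the option is registered is not shown here and should be taken from the patch itself.)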
---
features:
  - |
    Introduce ``threads_to_process_pollsters`` to enable operators to define
    the number of pollsters that can be executed in parallel inside a
    polling task.