Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that notes where ongoing
work can be found and how to recover the repo if it is needed
at some future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I2210814ef992df70767b0b41355dbcb88c268ef8
Tony Breeds
2017-09-12 15:29:00 -06:00
parent bdd47a1b44
commit 66005e08a9
243 changed files with 14 additions and 24400 deletions


@@ -1,8 +0,0 @@
[run]
branch = True
source = aodh
omit = aodh/tests/*
[report]
ignore_errors = True

.gitignore

@@ -1,22 +0,0 @@
*.egg*
*.mo
*.pyc
*~
.*.swp
.*sw?
.coverage
.testrepository
.tox
AUTHORS
build/*
ChangeLog
cover/*
dist/*
doc/build
doc/source/_static/
etc/aodh/aodh.conf
subunit.log
# Files created by releasenotes build
releasenotes/build
/doc/source/contributor/api/


@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/aodh.git


@@ -1,33 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
Adam Gandelman <adamg@canonical.com> <adamg@ubuntu.com>
Alan Pevec <alan.pevec@redhat.com> <apevec@redhat.com>
Alexei Kornienko <akornienko@mirantis.com> <alexei.kornienko@gmail.com>
ChangBo Guo(gcb) <eric.guo@easystack.cn> Chang Bo Guo <guochbo@cn.ibm.com>
Chinmaya Bharadwaj <chinmaya-bharadwaj.a@hp.com> chinmay <chinmaya-bharadwaj.a@hp.com>
Clark Boylan <cboylan@sapwetik.org> <clark.boylan@gmail.com>
Doug Hellmann <doug@doughellmann.com> <doug.hellmann@dreamhost.com>
Fei Long Wang <flwang@catalyst.net.nz> <flwang@cn.ibm.com>
Fengqian Gao <fengqian.gao@intel.com> Fengqian <fengqian.gao@intel.com>
Fengqian Gao <fengqian.gao@intel.com> Fengqian.Gao <fengqian.gao@intel.com>
Gordon Chung <gord@live.ca> gordon chung <gord@live.ca>
Gordon Chung <gord@live.ca> Gordon Chung <chungg@ca.ibm.com>
Gordon Chung <gord@live.ca> gordon chung <chungg@ca.ibm.com>
Ildiko Vancsa <ildiko.vancsa@ericsson.com> Ildiko <ildiko.vancsa@ericsson.com>
John H. Tran <jhtran@att.com> John Tran <jhtran@att.com>
Julien Danjou <julien.danjou@enovance.com> <julien@danjou.info>
LiuSheng <liusheng@huawei.com> liu-sheng <liusheng@huawei.com>
Mehdi Abaakouk <mehdi.abaakouk@enovance.com> <sileht@sileht.net>
Nejc Saje <nsaje@redhat.com> <nejc@saje.info>
Nejc Saje <nsaje@redhat.com> <nejc.saje@xlab.si>
Nicolas Barcet (nijaba) <nick@enovance.com> <nick.barcet@canonical.com>
Pádraig Brady <pbrady@redhat.com> <P@draigBrady.com>
Rich Bowen <rbowen@redhat.com> <rbowen@rcbowen.com>
Sandy Walsh <sandy.walsh@rackspace.com> <sandy@sandywalsh.com>
Sascha Peilicke <speilicke@suse.com> <saschpe@gmx.de>
Sean Dague <sean.dague@samsung.com> <sean@dague.net>
Shengjie Min <shengjie_min@dell.com> shengjie-min <shengjie_min@dell.com>
Shuangtai Tian <shuangtai.tian@intel.com> shuangtai <shuangtai.tian@intel.com>
Swann Croiset <swann.croiset@bull.net> <swann@oopss.org>
ZhiQiang Fan <zhiqiang.fan@huawei.com> <aji.zqfan@gmail.com>


@@ -1,9 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-600} \
${PYTHON:-python} -m subunit.run discover ${OS_TEST_PATH:-./aodh/tests} -t . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
# NOTE(chdent): Only used/matches on gabbi-related tests.
group_regex=(gabbi\.(suitemaker|driver)\.test_gabbi_([^_]+))_
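
For context, the group_regex above makes testr schedule all tests of one gabbi suite in the same worker. A quick sketch of how the pattern groups test ids (the test id below is hypothetical):

    import re

    GROUP_REGEX = r'(gabbi\.(suitemaker|driver)\.test_gabbi_([^_]+))_'
    test_id = 'gabbi.suitemaker.test_gabbi_alarms_post_alarm'
    # Everything captured by group 1 is the scheduling key, so all
    # 'test_gabbi_alarms_*' tests end up in the same group.
    print(re.match(GROUP_REGEX, test_id).group(1))
    # -> gabbi.suitemaker.test_gabbi_alarms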


@@ -1,16 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps documented at:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/aodh


@@ -1,9 +0,0 @@
Aodh Style Commandments
=======================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Aodh Specific Commandments
--------------------------

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,15 +0,0 @@
= Generalist Code Reviewers =
The current members of aodh-core are listed here:
https://launchpad.net/~aodh-drivers/+members#active
This group can +2 and approve patches in aodh. However, they may
choose to seek feedback from the appropriate specialist maintainer before
approving a patch if it is in any way controversial or risky.
= IRC handles of maintainers =
gordc
jd_
llu
sileht

README

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.


@@ -1,11 +0,0 @@
aodh
====
Release notes can be read online at:
https://docs.openstack.org/aodh/latest/contributor/releasenotes/index.html
Documentation for the project can be found at:
https://docs.openstack.org/aodh/latest/
The project home is at:
https://launchpad.net/aodh


@@ -1,20 +0,0 @@
# Copyright 2014 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
class NotImplementedError(NotImplementedError):
# FIXME(jd) This is used by WSME to return a correct HTTP code. We should
# not expose it here but wrap our methods in the API to convert it to a
# proper HTTP error.
code = 501


@@ -1,30 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from oslo_config import cfg
# Register options for the service
OPTS = [
cfg.StrOpt('paste_config',
default=os.path.abspath(
os.path.join(
os.path.dirname(__file__), "api-paste.ini")),
help="Configuration file for WSGI definition of API."),
cfg.StrOpt(
'auth_mode',
default="keystone",
help="Authentication mode to use. Unset to disable authentication"),
]
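
These options are read elsewhere in this diff as conf.api.paste_config and conf.api.auth_mode, so they are registered under an [api] group. A minimal self-contained sketch of that oslo.config flow (using a one-option subset of the list above):

    from oslo_config import cfg

    OPTS = [cfg.StrOpt('auth_mode', default='keystone',
                       help='Authentication mode to use.')]

    conf = cfg.ConfigOpts()
    conf.register_opts(OPTS, group='api')  # exposes conf.api.*
    conf(args=[])
    print(conf.api.auth_mode)
    # -> keystone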


@@ -1,47 +0,0 @@
[composite:aodh+noauth]
use = egg:Paste#urlmap
/ = aodhversions_pipeline
/v2 = aodhv2_noauth_pipeline
/healthcheck = healthcheck
[composite:aodh+keystone]
use = egg:Paste#urlmap
/ = aodhversions_pipeline
/v2 = aodhv2_keystone_pipeline
/healthcheck = healthcheck
[app:healthcheck]
use = egg:oslo.middleware#healthcheck
oslo_config_project = aodh
[pipeline:aodhversions_pipeline]
pipeline = cors http_proxy_to_wsgi aodhversions
[app:aodhversions]
paste.app_factory = aodh.api.app:app_factory
root = aodh.api.controllers.root.VersionsController
[pipeline:aodhv2_keystone_pipeline]
pipeline = cors http_proxy_to_wsgi request_id authtoken aodhv2
[pipeline:aodhv2_noauth_pipeline]
pipeline = cors http_proxy_to_wsgi request_id aodhv2
[app:aodhv2]
paste.app_factory = aodh.api.app:app_factory
root = aodh.api.controllers.v2.root.V2Controller
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
oslo_config_project = aodh
[filter:request_id]
paste.filter_factory = oslo_middleware:RequestId.factory
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = aodh
[filter:http_proxy_to_wsgi]
paste.filter_factory = oslo_middleware.http_proxy_to_wsgi:HTTPProxyToWSGI.factory
oslo_config_project = aodh


@@ -1,87 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2015-2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import uuid
from oslo_config import cfg
from oslo_log import log
from paste import deploy
import pecan
from aodh.api import hooks
from aodh.api import middleware
from aodh import service
from aodh import storage
LOG = log.getLogger(__name__)
# NOTE(sileht): pastedeploy uses ConfigParser to handle
# global_conf. Since the Python 3 ConfigParser only allows
# strings as config values, objects created before paste loads
# the app cannot be passed through it, so we store them in a
# global var. Each loaded app stores its configuration under a
# unique key to be concurrency safe.
global APPCONFIGS
APPCONFIGS = {}
def setup_app(root, conf):
app_hooks = [hooks.ConfigHook(conf),
hooks.DBHook(
storage.get_connection_from_config(conf)),
hooks.TranslationHook()]
return pecan.make_app(
root,
hooks=app_hooks,
wrap_app=middleware.ParsableErrorMiddleware,
guess_content_type_from_ext=False
)
def load_app(conf):
global APPCONFIGS
# Build the WSGI app
cfg_path = conf.api.paste_config
if not os.path.isabs(cfg_path):
cfg_path = conf.find_file(cfg_path)
if cfg_path is None or not os.path.exists(cfg_path):
raise cfg.ConfigFilesNotFoundError([conf.api.paste_config])
config = dict(conf=conf)
configkey = str(uuid.uuid4())
APPCONFIGS[configkey] = config
LOG.info("WSGI config used: %s", cfg_path)
return deploy.loadapp("config:" + cfg_path,
name="aodh+" + (
conf.api.auth_mode
if conf.api.auth_mode else "noauth"
),
global_conf={'configkey': configkey})
def app_factory(global_config, **local_conf):
global APPCONFIGS
appconfig = APPCONFIGS.get(global_config.get('configkey'))
return setup_app(root=local_conf.get('root'), **appconfig)
def build_wsgi_app(argv=None):
return load_app(service.prepare_service(argv=argv))
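
A minimal sketch of serving this app with the stdlib WSGI server; in practice aodh runs under mod_wsgi or uwsgi (see app.wsgi below), and the port is an assumption:

    from wsgiref.simple_server import make_server

    from aodh.api import app
    from aodh import service

    conf = service.prepare_service(argv=[])
    application = app.load_app(conf)  # picks aodh+keystone or aodh+noauth
    make_server('127.0.0.1', 8042, application).serve_forever()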


@@ -1,23 +0,0 @@
# -*- mode: python -*-
#
# Copyright 2013 New Dream Network, LLC (DreamHost)
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Use this file for deploying the API under mod_wsgi.
See http://pecan.readthedocs.org/en/latest/deployment.html for details.
"""
from aodh.api import app
application = app.build_wsgi_app(argv=[])


@@ -1,51 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
MEDIA_TYPE_JSON = 'application/vnd.openstack.telemetry-%s+json'
MEDIA_TYPE_XML = 'application/vnd.openstack.telemetry-%s+xml'
class VersionsController(object):
@pecan.expose('json')
def index(self):
base_url = pecan.request.host_url
available = [{'tag': 'v2', 'date': '2013-02-13T00:00:00Z', }]
collected = [version_descriptor(base_url, v['tag'], v['date'])
for v in available]
versions = {'versions': {'values': collected}}
return versions
def version_descriptor(base_url, version, released_on):
url = version_url(base_url, version)
return {
'id': version,
'links': [
{'href': url, 'rel': 'self', },
{'href': 'http://docs.openstack.org/',
'rel': 'describedby', 'type': 'text/html', }],
'media-types': [
{'base': 'application/json', 'type': MEDIA_TYPE_JSON % version, },
{'base': 'application/xml', 'type': MEDIA_TYPE_XML % version, }],
'status': 'stable',
'updated': released_on,
}
def version_url(base_url, version_number):
return '%s/%s' % (base_url, version_number)
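
For reference, the index() method above answers GET / with a document of this shape (host and port are illustrative):

    {'versions': {'values': [{
        'id': 'v2',
        'links': [
            {'href': 'http://aodh.example.com:8042/v2', 'rel': 'self'},
            {'href': 'http://docs.openstack.org/',
             'rel': 'describedby', 'type': 'text/html'}],
        'media-types': [
            {'base': 'application/json',
             'type': 'application/vnd.openstack.telemetry-v2+json'},
            {'base': 'application/xml',
             'type': 'application/vnd.openstack.telemetry-v2+xml'}],
        'status': 'stable',
        'updated': '2013-02-13T00:00:00Z'}]}}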


@@ -1,119 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
from stevedore import named
from wsme.rest import json as wjson
from wsme import types as wtypes
from aodh.api.controllers.v2 import base
from aodh.i18n import _
class InvalidCompositeRule(base.ClientSideError):
def __init__(self, error):
err = _('Invalid input composite rule: %s. It should '
'be a dict with "and" or "or" as its key, whose '
'value is a list of basic threshold rules or sub '
'composite rules, which can be nested.') % error
super(InvalidCompositeRule, self).__init__(err)
class CompositeRule(wtypes.UserType):
"""Composite alarm rule.
A simple dict type to preset composite rule.
"""
basetype = wtypes.text
name = 'composite_rule'
threshold_plugins = None
def __init__(self):
threshold_rules = ('threshold',
'gnocchi_resources_threshold',
'gnocchi_aggregation_by_metrics_threshold',
'gnocchi_aggregation_by_resources_threshold')
CompositeRule.threshold_plugins = named.NamedExtensionManager(
"aodh.alarm.rule", threshold_rules)
super(CompositeRule, self).__init__()
@staticmethod
def valid_composite_rule(rules):
if isinstance(rules, dict) and len(rules) == 1:
and_or_key = list(rules)[0]
if and_or_key not in ('and', 'or'):
raise base.ClientSideError(
_('Threshold rules should be combined with "and" or "or"'))
if isinstance(rules[and_or_key], list):
for sub_rule in rules[and_or_key]:
CompositeRule.valid_composite_rule(sub_rule)
else:
raise InvalidCompositeRule(rules)
elif isinstance(rules, dict):
rule_type = rules.pop('type', None)
if not rule_type:
raise base.ClientSideError(_('type must be set in every rule'))
if rule_type not in CompositeRule.threshold_plugins:
plugins = sorted(CompositeRule.threshold_plugins.names())
err = _('Unsupported sub-rule type: %(rule)s in composite '
'rule, should be one of: %(plugins)s') % {
'rule': rule_type,
'plugins': plugins}
raise base.ClientSideError(err)
plugin = CompositeRule.threshold_plugins[rule_type].plugin
wjson.fromjson(plugin, rules)
rule_dict = plugin(**rules).as_dict()
rules.update(rule_dict)
rules.update(type=rule_type)
else:
raise InvalidCompositeRule(rules)
@staticmethod
def validate(value):
try:
json.dumps(value)
except TypeError:
raise base.ClientSideError(_('%s is not JSON serializable')
% value)
else:
CompositeRule.valid_composite_rule(value)
return value
@staticmethod
def frombasetype(value):
return CompositeRule.validate(value)
@staticmethod
def create_hook(alarm):
pass
@staticmethod
def validate_alarm(alarm):
pass
@staticmethod
def update_hook(alarm):
pass
@staticmethod
def as_dict():
pass
@staticmethod
def __call__(**rule):
return rule
composite_rule = CompositeRule()
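
A hypothetical rule that valid_composite_rule() above accepts: a single-key 'and'/'or' dict whose list items are either nested combinations or leaf dicts carrying a 'type'. All field values are illustrative:

    rule = {
        'or': [
            {'type': 'gnocchi_resources_threshold',
             'metric': 'cpu_util',
             'resource_id': '2a4d689b-f0b8-49c1-9eef-87cae58d80db',
             'resource_type': 'instance',
             'aggregation_method': 'mean',
             'threshold': 80.0},
            {'and': [
                {'type': 'gnocchi_resources_threshold',
                 'metric': 'memory.usage',
                 'resource_id': '2a4d689b-f0b8-49c1-9eef-87cae58d80db',
                 'resource_type': 'instance',
                 'aggregation_method': 'mean',
                 'threshold': 512.0},
                {'type': 'gnocchi_resources_threshold',
                 'metric': 'disk.iops',
                 'resource_id': '2a4d689b-f0b8-49c1-9eef-87cae58d80db',
                 'resource_type': 'instance',
                 'aggregation_method': 'mean',
                 'threshold': 1000.0}]}]}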


@@ -1,61 +0,0 @@
#
# Copyright 2015 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import wsme
from wsme import types as wtypes
from aodh.api.controllers.v2 import base
from aodh.i18n import _
class AlarmEventRule(base.AlarmRule):
"""Alarm Event Rule.
Describe when to trigger the alarm based on an event
"""
event_type = wsme.wsattr(wtypes.text)
"The type of event (default is '*')"
query = wsme.wsattr([base.Query])
"The query to find the event (default is [])"
def __init__(self, event_type=None, query=None):
event_type = event_type or '*'
query = [base.Query(**q) for q in query or []]
super(AlarmEventRule, self).__init__(event_type=event_type,
query=query)
@classmethod
def validate_alarm(cls, alarm):
for i in alarm.event_rule.query:
i._get_value_as_type()
@property
def default_description(self):
return _('Alarm when %s event occurred.') % self.event_type
def as_dict(self):
rule = self.as_dict_from_keys(['event_type'])
rule['query'] = [q.as_dict() for q in self.query]
return rule
@classmethod
def sample(cls):
return cls(event_type='compute.instance.update',
query=[{'field': 'traits.instance_id',
'value': '153462d0-a9b8-4b5b-8175-9e4b05e9b856',
'op': 'eq',
'type': 'string'}])


@@ -1,215 +0,0 @@
#
# Copyright 2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import threading
import cachetools
from gnocchiclient import client
from gnocchiclient import exceptions
from keystoneauth1 import exceptions as ka_exceptions
from oslo_config import cfg
from oslo_serialization import jsonutils
import pecan
import wsme
from wsme import types as wtypes
from aodh.api.controllers.v2 import base
from aodh.api.controllers.v2 import utils as v2_utils
from aodh import keystone_client
GNOCCHI_OPTS = [
cfg.StrOpt('gnocchi_external_project_owner',
default="service",
help='Project name of resources creator in Gnocchi. '
'(For example, the Ceilometer project name.)'),
]
class GnocchiUnavailable(Exception):
code = 503
class AlarmGnocchiThresholdRule(base.AlarmRule):
comparison_operator = base.AdvEnum('comparison_operator', str,
'lt', 'le', 'eq', 'ne', 'ge', 'gt',
default='eq')
"The comparison against the alarm threshold"
threshold = wsme.wsattr(float, mandatory=True)
"The threshold of the alarm"
aggregation_method = wsme.wsattr(wtypes.text, mandatory=True)
"The aggregation_method to compare to the threshold"
evaluation_periods = wsme.wsattr(wtypes.IntegerType(minimum=1), default=1)
"The number of historical periods to evaluate the threshold"
granularity = wsme.wsattr(wtypes.IntegerType(minimum=1), default=60)
"The time range in seconds over which query"
cache = cachetools.TTLCache(maxsize=1, ttl=3600)
lock = threading.RLock()
@classmethod
def validate_alarm(cls, alarm):
alarm_rule = getattr(alarm, "%s_rule" % alarm.type)
aggregation_method = alarm_rule.aggregation_method
if aggregation_method not in cls._get_aggregation_methods():
raise base.ClientSideError(
'aggregation_method should be in %s not %s' % (
cls._get_aggregation_methods(), aggregation_method))
@staticmethod
@cachetools.cached(cache, lock=lock)
def _get_aggregation_methods():
conf = pecan.request.cfg
gnocchi_client = client.Client(
'1', keystone_client.get_session(conf),
interface=conf.service_credentials.interface,
region_name=conf.service_credentials.region_name)
try:
return gnocchi_client.capabilities.list().get(
'aggregation_methods', [])
except exceptions.ClientException as e:
raise base.ClientSideError(e.message, status_code=e.code)
except Exception as e:
raise GnocchiUnavailable(e)
class MetricOfResourceRule(AlarmGnocchiThresholdRule):
metric = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the metric"
resource_id = wsme.wsattr(wtypes.text, mandatory=True)
"The id of a resource"
resource_type = wsme.wsattr(wtypes.text, mandatory=True)
"The resource type"
def as_dict(self):
rule = self.as_dict_from_keys(['granularity', 'comparison_operator',
'threshold', 'aggregation_method',
'evaluation_periods',
'metric',
'resource_id',
'resource_type'])
return rule
class AggregationMetricByResourcesLookupRule(AlarmGnocchiThresholdRule):
metric = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the metric"
query = wsme.wsattr(wtypes.text, mandatory=True)
('The query to filter the metric. Don\'t forget to filter out '
'deleted resources (example: {"and": [{"=": {"ended_at": null}}, ...]}); '
'otherwise Gnocchi will try to create the aggregate against obsolete '
'resources.')
resource_type = wsme.wsattr(wtypes.text, mandatory=True)
"The resource type"
def as_dict(self):
rule = self.as_dict_from_keys(['granularity', 'comparison_operator',
'threshold', 'aggregation_method',
'evaluation_periods',
'metric',
'query',
'resource_type'])
return rule
cache = cachetools.TTLCache(maxsize=1, ttl=3600)
lock = threading.RLock()
@staticmethod
@cachetools.cached(cache, lock=lock)
def get_external_project_owner():
kc = keystone_client.get_client(pecan.request.cfg)
project_name = pecan.request.cfg.api.gnocchi_external_project_owner
try:
project = kc.projects.find(name=project_name)
return project.id
except ka_exceptions.NotFound:
return None
@classmethod
def validate_alarm(cls, alarm):
super(AggregationMetricByResourcesLookupRule,
cls).validate_alarm(alarm)
rule = alarm.gnocchi_aggregation_by_resources_threshold_rule
# check the query string is a valid json
try:
query = jsonutils.loads(rule.query)
except ValueError:
raise wsme.exc.InvalidInput('rule/query', rule.query)
conf = pecan.request.cfg
# Scope the alarm to the project id if needed
auth_project = v2_utils.get_auth_project(alarm.project_id)
if auth_project:
perms_filter = {"=": {"created_by_project_id": auth_project}}
external_project_owner = cls.get_external_project_owner()
if external_project_owner:
perms_filter = {"or": [
perms_filter,
{"and": [
{"=": {"created_by_project_id":
external_project_owner}},
{"=": {"project_id": auth_project}}]}
]}
query = {"and": [perms_filter, query]}
rule.query = jsonutils.dumps(query)
gnocchi_client = client.Client(
'1', keystone_client.get_session(conf),
interface=conf.service_credentials.interface,
region_name=conf.service_credentials.region_name)
try:
gnocchi_client.metric.aggregation(
metrics=rule.metric,
query=query,
aggregation=rule.aggregation_method,
needed_overlap=0,
resource_type=rule.resource_type)
except exceptions.ClientException as e:
if e.code == 404:
# NOTE(sileht): We are fine here, we just want to ensure the
# 'query' payload is valid for Gnocchi If the metric
# doesn't exists yet, it doesn't matter
return
raise base.ClientSideError(e.message, status_code=e.code)
except Exception as e:
raise GnocchiUnavailable(e)
class AggregationMetricsByIdLookupRule(AlarmGnocchiThresholdRule):
metrics = wsme.wsattr([wtypes.text], mandatory=True)
"A list of metric Ids"
def as_dict(self):
rule = self.as_dict_from_keys(['granularity', 'comparison_operator',
'threshold', 'aggregation_method',
'evaluation_periods',
'metrics'])
return rule
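
To make the scoping in AggregationMetricByResourcesLookupRule.validate_alarm() concrete: for a non-admin caller with an original rule.query of {"=": {"ended_at": null}}, the rewritten query has this shape (project IDs are placeholders):

    scoped_query = {'and': [
        {'or': [
            {'=': {'created_by_project_id': 'AUTH_PROJECT_ID'}},
            {'and': [
                {'=': {'created_by_project_id': 'EXTERNAL_PROJECT_OWNER_ID'}},
                {'=': {'project_id': 'AUTH_PROJECT_ID'}}]}]},
        {'=': {'ended_at': None}}]}  # JSON null parses to Python None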


@@ -1,161 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from ceilometerclient import client as ceiloclient
from ceilometerclient import exc as ceiloexc
import pecan
import wsme
from wsme import types as wtypes
from aodh.api.controllers.v2 import base
from aodh.api.controllers.v2 import utils as v2_utils
from aodh.i18n import _
from aodh import keystone_client
from aodh import storage
class AlarmThresholdRule(base.AlarmRule):
"""Alarm Threshold Rule
Describe when to trigger the alarm based on computed statistics
"""
meter_name = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the meter"
# FIXME(sileht): default doesn't work
# workaround: default is set in validate method
query = wsme.wsattr([base.Query], default=[])
"""The query to find the data for computing statistics.
Ownership settings are automatically included based on the Alarm owner.
"""
period = wsme.wsattr(wtypes.IntegerType(minimum=1), default=60)
"The time range in seconds over which query"
comparison_operator = base.AdvEnum('comparison_operator', str,
'lt', 'le', 'eq', 'ne', 'ge', 'gt',
default='eq')
"The comparison against the alarm threshold"
threshold = wsme.wsattr(float, mandatory=True)
"The threshold of the alarm"
statistic = base.AdvEnum('statistic', str, 'max', 'min', 'avg', 'sum',
'count', default='avg')
"The statistic to compare to the threshold"
evaluation_periods = wsme.wsattr(wtypes.IntegerType(minimum=1), default=1)
"The number of historical periods to evaluate the threshold"
exclude_outliers = wsme.wsattr(bool, default=False)
"Whether datapoints with anomalously low sample counts are excluded"
ceilometer_sample_api_is_supported = None
def __init__(self, query=None, **kwargs):
query = [base.Query(**q) for q in query] if query else []
super(AlarmThresholdRule, self).__init__(query=query, **kwargs)
@classmethod
def _check_ceilometer_sample_api(cls):
# Check it only once
if cls.ceilometer_sample_api_is_supported is None:
auth_config = pecan.request.cfg.service_credentials
client = ceiloclient.get_client(
version=2,
session=keystone_client.get_session(pecan.request.cfg),
# ceiloclient adapter options
region_name=auth_config.region_name,
interface=auth_config.interface,
)
try:
client.statistics.list(
meter_name="idontthinkthatexistsbutwhatever")
except Exception as e:
if isinstance(e, ceiloexc.HTTPException):
if e.code == 410:
cls.ceilometer_sample_api_is_supported = False
elif e.code < 500:
cls.ceilometer_sample_api_is_supported = True
else:
raise
else:
raise
else:
# The bogus meter shouldn't exist, but if the call succeeded
# anyway, the Sample API is available.
cls.ceilometer_sample_api_is_supported = True
if cls.ceilometer_sample_api_is_supported is False:
raise base.ClientSideError(
"This telemetry installation is not configured to support"
"alarm of type 'threshold")
@staticmethod
def validate(threshold_rule):
# note(sileht): wsme default doesn't work in some case
# workaround for https://bugs.launchpad.net/wsme/+bug/1227039
if not threshold_rule.query:
threshold_rule.query = []
# Timestamp is not allowed for AlarmThresholdRule query, as the alarm
# evaluator will construct timestamp bounds for the sequence of
# statistics queries as the sliding evaluation window advances
# over time.
v2_utils.validate_query(threshold_rule.query,
storage.SampleFilter.__init__,
allow_timestamps=False)
return threshold_rule
@classmethod
def validate_alarm(cls, alarm):
cls._check_ceilometer_sample_api()
# ensure an implicit constraint on project_id is added to
# the query if not already present
alarm.threshold_rule.query = v2_utils.sanitize_query(
alarm.threshold_rule.query,
storage.SampleFilter.__init__,
on_behalf_of=alarm.project_id
)
@property
def default_description(self):
return (_('Alarm when %(meter_name)s is %(comparison_operator)s a '
'%(statistic)s of %(threshold)s over %(period)s seconds') %
dict(comparison_operator=self.comparison_operator,
statistic=self.statistic,
threshold=self.threshold,
meter_name=self.meter_name,
period=self.period))
def as_dict(self):
rule = self.as_dict_from_keys(['period', 'comparison_operator',
'threshold', 'statistic',
'evaluation_periods', 'meter_name',
'exclude_outliers'])
rule['query'] = [q.as_dict() for q in self.query]
return rule
@classmethod
def sample(cls):
return cls(meter_name='cpu_util',
period=60,
evaluation_periods=1,
threshold=300.0,
statistic='avg',
comparison_operator='gt',
query=[{'field': 'resource_id',
'value': '2a4d689b-f0b8-49c1-9eef-87cae58d80db',
'op': 'eq',
'type': 'string'}])


@@ -1,841 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
import itertools
import json
import warnings
import croniter
import debtcollector
from oslo_config import cfg
from oslo_log import log
from oslo_utils import netutils
from oslo_utils import timeutils
from oslo_utils import uuidutils
import pecan
from pecan import rest
import pytz
import six
from six.moves.urllib import parse as urlparse
from stevedore import extension
import wsme
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
import aodh
from aodh.api.controllers.v2 import base
from aodh.api.controllers.v2 import utils as v2_utils
from aodh.api import rbac
from aodh.i18n import _
from aodh import keystone_client
from aodh import messaging
from aodh import notifier
from aodh.storage import models
LOG = log.getLogger(__name__)
ALARM_API_OPTS = [
cfg.IntOpt('user_alarm_quota',
deprecated_group='DEFAULT',
help='Maximum number of alarms defined for a user.'
),
cfg.IntOpt('project_alarm_quota',
deprecated_group='DEFAULT',
help='Maximum number of alarms defined for a project.'
),
cfg.IntOpt('alarm_max_actions',
default=-1,
deprecated_group='DEFAULT',
help='Maximum count of actions for each state of an alarm, '
'non-positive number means no limit.'),
]
state_kind = ["ok", "alarm", "insufficient data"]
state_kind_enum = wtypes.Enum(str, *state_kind)
severity_kind = ["low", "moderate", "critical"]
severity_kind_enum = wtypes.Enum(str, *severity_kind)
ALARM_REASON_DEFAULT = "Not evaluated yet"
ALARM_REASON_MANUAL = "Manually set via API"
class OverQuota(base.ClientSideError):
def __init__(self, data):
d = {
'u': data.user_id,
'p': data.project_id
}
super(OverQuota, self).__init__(
_("Alarm quota exceeded for user %(u)s on project %(p)s") % d,
status_code=403)
def is_over_quota(conn, project_id, user_id):
"""Returns False if an alarm is within the set quotas, True otherwise.
:param conn: a backend connection object
:param project_id: the ID of the project setting the alarm
:param user_id: the ID of the user setting the alarm
"""
over_quota = False
# Start by checking for user quota
user_alarm_quota = pecan.request.cfg.api.user_alarm_quota
if user_alarm_quota is not None:
user_alarms = list(conn.get_alarms(user=user_id))
over_quota = len(user_alarms) >= user_alarm_quota
# If the user quota isn't reached, we check for the project quota
if not over_quota:
project_alarm_quota = pecan.request.cfg.api.project_alarm_quota
if project_alarm_quota is not None:
project_alarms = list(conn.get_alarms(project=project_id))
over_quota = len(project_alarms) >= project_alarm_quota
return over_quota
class CronType(wtypes.UserType):
"""A user type that represents a cron format."""
basetype = six.string_types
name = 'cron'
@staticmethod
def validate(value):
# raises ValueError if invalid
croniter.croniter(value)
return value
class AlarmTimeConstraint(base.Base):
"""Representation of a time constraint on an alarm."""
name = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the constraint"
_description = None # provide a default
def get_description(self):
if not self._description:
return ('Time constraint at %s lasting for %s seconds'
% (self.start, self.duration))
return self._description
def set_description(self, value):
self._description = value
description = wsme.wsproperty(wtypes.text, get_description,
set_description)
"The description of the constraint"
start = wsme.wsattr(CronType(), mandatory=True)
"Start point of the time constraint, in cron format"
duration = wsme.wsattr(wtypes.IntegerType(minimum=0), mandatory=True)
"How long the constraint should last, in seconds"
timezone = wsme.wsattr(wtypes.text, default="")
"Timezone of the constraint"
def as_dict(self):
return self.as_dict_from_keys(['name', 'description', 'start',
'duration', 'timezone'])
@staticmethod
def validate(tc):
if tc.timezone:
try:
pytz.timezone(tc.timezone)
except Exception:
raise base.ClientSideError(_("Timezone %s is not valid")
% tc.timezone)
return tc
@classmethod
def sample(cls):
return cls(name='SampleConstraint',
description='nightly build every night at 23h for 3 hours',
start='0 23 * * *',
duration=10800,
timezone='Europe/Ljubljana')
ALARMS_RULES = extension.ExtensionManager("aodh.alarm.rule")
LOG.debug("alarm rules plugin loaded: %s" % ",".join(ALARMS_RULES.names()))
ACTIONS_SCHEMA = extension.ExtensionManager(
notifier.AlarmNotifierService.NOTIFIER_EXTENSIONS_NAMESPACE).names()
class Alarm(base.Base):
"""Representation of an alarm."""
alarm_id = wtypes.text
"The UUID of the alarm"
name = wsme.wsattr(wtypes.text, mandatory=True)
"The name for the alarm"
_description = None # provide a default
def get_description(self):
rule = getattr(self, '%s_rule' % self.type, None)
if not self._description:
if hasattr(rule, 'default_description'):
return six.text_type(rule.default_description)
return "%s alarm rule" % self.type
return self._description
def set_description(self, value):
self._description = value
description = wsme.wsproperty(wtypes.text, get_description,
set_description)
"The description of the alarm"
enabled = wsme.wsattr(bool, default=True)
"Whether this alarm is enabled"
ok_actions = wsme.wsattr([wtypes.text], default=[])
"The actions to take when the alarm state changes to ok"
alarm_actions = wsme.wsattr([wtypes.text], default=[])
"The actions to take when the alarm state changes to alarm"
insufficient_data_actions = wsme.wsattr([wtypes.text], default=[])
"The actions to take when the alarm state changes to insufficient data"
repeat_actions = wsme.wsattr(bool, default=False)
"Whether the actions should be re-triggered on each evaluation cycle"
type = base.AdvEnum('type', str, *ALARMS_RULES.names(),
mandatory=True)
"Explicit type specifier to select which rule to follow below."
time_constraints = wtypes.wsattr([AlarmTimeConstraint], default=[])
"""Describe time constraints for the alarm"""
# These settings are ignored in the PUT or POST operations, but are
# filled in for GET
project_id = wtypes.text
"The ID of the project or tenant that owns the alarm"
user_id = wtypes.text
"The ID of the user who created the alarm"
timestamp = datetime.datetime
"The date of the last alarm definition update"
state = base.AdvEnum('state', str, *state_kind,
default='insufficient data')
"The state of the alarm"
state_timestamp = datetime.datetime
"The date of the last alarm state change"
state_reason = wsme.wsattr(wtypes.text, default=ALARM_REASON_DEFAULT)
"The reason of the current state"
severity = base.AdvEnum('severity', str, *severity_kind,
default='low')
"The severity of the alarm"
def __init__(self, rule=None, time_constraints=None, **kwargs):
super(Alarm, self).__init__(**kwargs)
if rule:
setattr(self, '%s_rule' % self.type,
ALARMS_RULES[self.type].plugin(**rule))
if time_constraints:
self.time_constraints = [AlarmTimeConstraint(**tc)
for tc in time_constraints]
@staticmethod
def validate(alarm):
if alarm.type == 'threshold':
warnings.simplefilter("always")
debtcollector.deprecate(
"Ceilometer's API is deprecated as of Ocata. Therefore, "
"threshold rule alarms are no longer supported.",
version="5.0.0")
Alarm.check_rule(alarm)
Alarm.check_alarm_actions(alarm)
ALARMS_RULES[alarm.type].plugin.validate_alarm(alarm)
if alarm.time_constraints:
tc_names = [tc.name for tc in alarm.time_constraints]
if len(tc_names) > len(set(tc_names)):
error = _("Time constraint names must be "
"unique for a given alarm.")
raise base.ClientSideError(error)
return alarm
@staticmethod
def check_rule(alarm):
rule = '%s_rule' % alarm.type
if getattr(alarm, rule) in (wtypes.Unset, None):
error = _("%(rule)s must be set for %(type)s"
" type alarm") % {"rule": rule, "type": alarm.type}
raise base.ClientSideError(error)
rule_set = None
for ext in ALARMS_RULES:
name = "%s_rule" % ext.name
if getattr(alarm, name):
if rule_set is None:
rule_set = name
else:
error = _("%(rule1)s and %(rule2)s cannot be set at the "
"same time") % {'rule1': rule_set, 'rule2': name}
raise base.ClientSideError(error)
@staticmethod
def check_alarm_actions(alarm):
max_actions = pecan.request.cfg.api.alarm_max_actions
for state in state_kind:
actions_name = state.replace(" ", "_") + '_actions'
actions = getattr(alarm, actions_name)
if not actions:
continue
action_set = set(actions)
if len(actions) != len(action_set):
LOG.info('duplicate actions are found: %s, '
'remove duplicate ones', actions)
actions = list(action_set)
setattr(alarm, actions_name, actions)
if 0 < max_actions < len(actions):
error = _('%(name)s count exceeds maximum value '
'%(maximum)d') % {"name": actions_name,
"maximum": max_actions}
raise base.ClientSideError(error)
limited = rbac.get_limited_to_project(pecan.request.headers,
pecan.request.enforcer)
for action in actions:
try:
url = netutils.urlsplit(action)
except Exception:
error = _("Unable to parse action %s") % action
raise base.ClientSideError(error)
if url.scheme not in ACTIONS_SCHEMA:
error = _("Unsupported action %s") % action
raise base.ClientSideError(error)
if limited and url.scheme in ('log', 'test'):
error = _('You are not authorized to create '
'action: %s') % action
raise base.ClientSideError(error, status_code=401)
@classmethod
def sample(cls):
return cls(alarm_id=None,
name="SwiftObjectAlarm",
description="An alarm",
type='threshold',
time_constraints=[AlarmTimeConstraint.sample().as_dict()],
user_id="c96c887c216949acbdfbd8b494863567",
project_id="c96c887c216949acbdfbd8b494863567",
enabled=True,
timestamp=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
state="ok",
severity="moderate",
state_reason="threshold over 90%",
state_timestamp=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
ok_actions=["http://site:8000/ok"],
alarm_actions=["http://site:8000/alarm"],
insufficient_data_actions=["http://site:8000/nodata"],
repeat_actions=False,
)
def as_dict(self, db_model):
d = super(Alarm, self).as_dict(db_model)
for k in d:
if k.endswith('_rule'):
del d[k]
rule = getattr(self, "%s_rule" % self.type)
d['rule'] = rule if isinstance(rule, dict) else rule.as_dict()
if self.time_constraints:
d['time_constraints'] = [tc.as_dict()
for tc in self.time_constraints]
return d
@staticmethod
def _is_trust_url(url):
return url.scheme.startswith('trust+')
def _get_existing_trust_ids(self):
for action in itertools.chain(self.ok_actions or [],
self.alarm_actions or [],
self.insufficient_data_actions or []):
url = netutils.urlsplit(action)
if self._is_trust_url(url):
trust_id = url.username
if trust_id and url.password == 'delete':
yield trust_id
def update_actions(self, old_alarm=None):
trustor_user_id = pecan.request.headers.get('X-User-Id')
trustor_project_id = pecan.request.headers.get('X-Project-Id')
roles = pecan.request.headers.get('X-Roles', '')
if roles:
roles = roles.split(',')
else:
roles = []
auth_plugin = pecan.request.environ.get('keystone.token_auth')
if old_alarm:
prev_trust_ids = set(old_alarm._get_existing_trust_ids())
else:
prev_trust_ids = set()
trust_id = prev_trust_ids.pop() if prev_trust_ids else None
trust_id_used = False
for actions in (self.ok_actions, self.alarm_actions,
self.insufficient_data_actions):
if actions is not None:
for index, action in enumerate(actions[:]):
url = netutils.urlsplit(action)
if self._is_trust_url(url):
if '@' in url.netloc:
continue
if trust_id is None:
# We have a trust action without a trust ID,
# create it
trust_id = keystone_client.create_trust_id(
pecan.request.cfg,
trustor_user_id, trustor_project_id, roles,
auth_plugin)
if trust_id_used:
pw = ''
else:
pw = ':delete'
trust_id_used = True
netloc = '%s%s@%s' % (trust_id, pw, url.netloc)
url = urlparse.SplitResult(url.scheme, netloc,
url.path, url.query,
url.fragment)
actions[index] = url.geturl()
if trust_id is not None and not trust_id_used:
prev_trust_ids.add(trust_id)
for old_trust_id in prev_trust_ids:
keystone_client.delete_trust_id(old_trust_id, auth_plugin)
def delete_actions(self):
auth_plugin = pecan.request.environ.get('keystone.token_auth')
for trust_id in self._get_existing_trust_ids():
keystone_client.delete_trust_id(trust_id, auth_plugin)
Alarm.add_attributes(**{"%s_rule" % ext.name: ext.plugin
for ext in ALARMS_RULES})
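
A small sketch of the trust URL convention implemented by update_actions() and _get_existing_trust_ids() above. TRUST_ID is a placeholder; only the first action rewritten gets the ':delete' marker, so each trust is deleted exactly once when the alarm goes away:

    from oslo_utils import netutils

    # 'trust+http://example.com/alarm' is rewritten by update_actions() to:
    action = 'trust+http://TRUST_ID:delete@example.com/alarm'
    url = netutils.urlsplit(action)
    assert url.scheme.startswith('trust+')   # what _is_trust_url() checks
    assert url.username == 'TRUST_ID'        # the trust ID carried in the URL
    assert url.password == 'delete'          # marks it for deletion with the alarm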
class AlarmChange(base.Base):
"""Representation of an event in an alarm's history."""
event_id = wtypes.text
"The UUID of the change event"
alarm_id = wtypes.text
"The UUID of the alarm"
type = wtypes.Enum(str,
'creation',
'rule change',
'state transition',
'deletion')
"The type of change"
detail = wtypes.text
"JSON fragment describing change"
project_id = wtypes.text
"The project ID of the initiating identity"
user_id = wtypes.text
"The user ID of the initiating identity"
on_behalf_of = wtypes.text
"The tenant on behalf of which the change is being made"
timestamp = datetime.datetime
"The time/date of the alarm change"
@classmethod
def sample(cls):
return cls(alarm_id='e8ff32f772a44a478182c3fe1f7cad6a',
type='rule change',
detail='{"threshold": 42.0, "evaluation_periods": 4}',
user_id="3e5d11fda79448ac99ccefb20be187ca",
project_id="b6f16144010811e387e4de429e99ee8c",
on_behalf_of="92159030020611e3b26dde429e99ee8c",
timestamp=datetime.datetime(2015, 1, 1, 12, 0, 0, 0),
)
def _send_notification(event, payload):
notification = event.replace(" ", "_")
notification = "alarm.%s" % notification
transport = messaging.get_transport(pecan.request.cfg)
notifier = messaging.get_notifier(transport, publisher_id="aodh.api")
# FIXME(sileht): perhaps we need to copy some info from the
# pecan request headers like nova does
notifier.info({}, notification, payload)
def stringify_timestamps(data):
"""Stringify any datetimes in given dict."""
return dict((k, v.isoformat()
if isinstance(v, datetime.datetime) else v)
for (k, v) in six.iteritems(data))
class AlarmController(rest.RestController):
"""Manages operations on a single alarm."""
_custom_actions = {
'history': ['GET'],
'state': ['PUT', 'GET'],
}
def __init__(self, alarm_id):
pecan.request.context['alarm_id'] = alarm_id
self._id = alarm_id
def _enforce_rbac(self, rbac_directive):
# TODO(sileht): We should be able to relax this since we
# pass the alarm object to the enforcer.
auth_project = rbac.get_limited_to_project(pecan.request.headers,
pecan.request.enforcer)
alarms = list(pecan.request.storage.get_alarms(alarm_id=self._id,
project=auth_project))
if not alarms:
raise base.AlarmNotFound(alarm=self._id, auth_project=auth_project)
alarm = alarms[0]
target = {'user_id': alarm.user_id,
'project_id': alarm.project_id}
rbac.enforce(rbac_directive, pecan.request.headers,
pecan.request.enforcer, target)
return alarm
def _record_change(self, data, now, on_behalf_of=None, type=None):
if not pecan.request.cfg.record_history:
return
if not data:
return
type = type or models.AlarmChange.RULE_CHANGE
scrubbed_data = stringify_timestamps(data)
detail = json.dumps(scrubbed_data)
user_id = pecan.request.headers.get('X-User-Id')
project_id = pecan.request.headers.get('X-Project-Id')
on_behalf_of = on_behalf_of or project_id
severity = scrubbed_data.get('severity')
payload = dict(event_id=uuidutils.generate_uuid(),
alarm_id=self._id,
type=type,
detail=detail,
user_id=user_id,
project_id=project_id,
on_behalf_of=on_behalf_of,
timestamp=now,
severity=severity)
try:
pecan.request.storage.record_alarm_change(payload)
except aodh.NotImplementedError:
pass
# Revert to the pre-json'ed details ...
payload['detail'] = scrubbed_data
_send_notification(type, payload)
def _record_delete(self, alarm):
if not alarm:
return
type = models.AlarmChange.DELETION
detail = {'state': alarm.state}
user_id = pecan.request.headers.get('X-User-Id')
project_id = pecan.request.headers.get('X-Project-Id')
payload = dict(event_id=uuidutils.generate_uuid(),
alarm_id=self._id,
type=type,
detail=detail,
user_id=user_id,
project_id=project_id,
on_behalf_of=project_id,
timestamp=timeutils.utcnow(),
severity=alarm.severity)
pecan.request.storage.delete_alarm(alarm.alarm_id)
_send_notification(type, payload)
@wsme_pecan.wsexpose(Alarm)
def get(self):
"""Return this alarm."""
return Alarm.from_db_model(self._enforce_rbac('get_alarm'))
@wsme_pecan.wsexpose(Alarm, body=Alarm)
def put(self, data):
"""Modify this alarm.
:param data: an alarm within the request body.
"""
# Ensure alarm exists
alarm_in = self._enforce_rbac('change_alarm')
now = timeutils.utcnow()
data.alarm_id = self._id
user, project = rbac.get_limited_to(pecan.request.headers,
pecan.request.enforcer)
if user:
data.user_id = user
elif data.user_id == wtypes.Unset:
data.user_id = alarm_in.user_id
if project:
data.project_id = project
elif data.project_id == wtypes.Unset:
data.project_id = alarm_in.project_id
data.timestamp = now
if alarm_in.state != data.state:
data.state_timestamp = now
data.state_reason = ALARM_REASON_MANUAL
else:
data.state_timestamp = alarm_in.state_timestamp
data.state_reason = alarm_in.state_reason
ALARMS_RULES[data.type].plugin.update_hook(data)
old_data = Alarm.from_db_model(alarm_in)
old_alarm = old_data.as_dict(models.Alarm)
data.update_actions(old_data)
updated_alarm = data.as_dict(models.Alarm)
try:
alarm_in = models.Alarm(**updated_alarm)
except Exception:
LOG.exception("Error while putting alarm: %s", updated_alarm)
raise base.ClientSideError(_("Alarm incorrect"))
alarm = pecan.request.storage.update_alarm(alarm_in)
change = dict((k, v) for k, v in updated_alarm.items()
if v != old_alarm[k] and k not in
['timestamp', 'state_timestamp'])
self._record_change(change, now, on_behalf_of=alarm.project_id)
return Alarm.from_db_model(alarm)
@wsme_pecan.wsexpose(None, status_code=204)
def delete(self):
"""Delete this alarm."""
# ensure alarm exists before deleting
alarm = self._enforce_rbac('delete_alarm')
self._record_delete(alarm)
alarm_object = Alarm.from_db_model(alarm)
alarm_object.delete_actions()
@wsme_pecan.wsexpose([AlarmChange], [base.Query], [str], int, str)
def history(self, q=None, sort=None, limit=None, marker=None):
"""Assembles the alarm history requested.
:param q: Filter rules for the changes to be described.
:param sort: A list of pairs of sort key and sort dir.
:param limit: The maximum number of items to be returned.
:param marker: The pagination query marker.
"""
# Ensure alarm exists
self._enforce_rbac('alarm_history')
q = q or []
# allow history to be returned for deleted alarms, but scope changes
# returned to those carried out on behalf of the auth'd tenant, to
# avoid inappropriate cross-tenant visibility of alarm history
auth_project = rbac.get_limited_to_project(pecan.request.headers,
pecan.request.enforcer)
conn = pecan.request.storage
kwargs = v2_utils.query_to_kwargs(
q, conn.get_alarm_changes, ['on_behalf_of', 'alarm_id'])
if sort or limit or marker:
kwargs['pagination'] = v2_utils.get_pagination_options(
sort, limit, marker, models.AlarmChange)
return [AlarmChange.from_db_model(ac)
for ac in conn.get_alarm_changes(self._id, auth_project,
**kwargs)]
@wsme.validate(state_kind_enum)
@wsme_pecan.wsexpose(state_kind_enum, body=state_kind_enum)
def put_state(self, state):
"""Set the state of this alarm.
:param state: an alarm state within the request body.
"""
alarm = self._enforce_rbac('change_alarm_state')
# NOTE(sileht): the body is not validated by wsme
# Workaround for https://bugs.launchpad.net/wsme/+bug/1227229
if state not in state_kind:
raise base.ClientSideError(_("state invalid"))
now = timeutils.utcnow()
alarm.state = state
alarm.state_timestamp = now
alarm.state_reason = ALARM_REASON_MANUAL
alarm = pecan.request.storage.update_alarm(alarm)
change = {'state': alarm.state,
'state_reason': alarm.state_reason}
self._record_change(change, now, on_behalf_of=alarm.project_id,
type=models.AlarmChange.STATE_TRANSITION)
return alarm.state
@wsme_pecan.wsexpose(state_kind_enum)
def get_state(self):
"""Get the state of this alarm."""
return self._enforce_rbac('get_alarm_state').state
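# A client-side sketch (not part of the original module) of driving the
# state endpoints above; `endpoint` and `token` are assumed to come from
# Keystone, and `requests` is used purely for illustration:
def _example_set_alarm_state(endpoint, token, alarm_id, state):
    import requests  # illustration only, not an aodh dependency
    # PUT /v2/alarms/<alarm_id>/state takes a JSON-encoded state string
    resp = requests.put('%s/v2/alarms/%s/state' % (endpoint, alarm_id),
                        headers={'X-Auth-Token': token}, json=state)
    resp.raise_for_status()
    return resp.json()  # echoes the new state, e.g. "alarm"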
class AlarmsController(rest.RestController):
"""Manages operations on the alarms collection."""
@pecan.expose()
def _lookup(self, alarm_id, *remainder):
return AlarmController(alarm_id), remainder
@staticmethod
def _record_creation(conn, data, alarm_id, now):
if not pecan.request.cfg.record_history:
return
type = models.AlarmChange.CREATION
scrubbed_data = stringify_timestamps(data)
detail = json.dumps(scrubbed_data)
user_id = pecan.request.headers.get('X-User-Id')
project_id = pecan.request.headers.get('X-Project-Id')
severity = scrubbed_data.get('severity')
payload = dict(event_id=uuidutils.generate_uuid(),
alarm_id=alarm_id,
type=type,
detail=detail,
user_id=user_id,
project_id=project_id,
on_behalf_of=project_id,
timestamp=now,
severity=severity)
try:
conn.record_alarm_change(payload)
except aodh.NotImplementedError:
pass
# Revert to the pre-json'ed details ...
payload['detail'] = scrubbed_data
_send_notification(type, payload)
@wsme_pecan.wsexpose(Alarm, body=Alarm, status_code=201)
def post(self, data):
"""Create a new alarm.
:param data: an alarm within the request body.
"""
rbac.enforce('create_alarm', pecan.request.headers,
pecan.request.enforcer, {})
conn = pecan.request.storage
now = timeutils.utcnow()
data.alarm_id = uuidutils.generate_uuid()
user_limit, project_limit = rbac.get_limited_to(pecan.request.headers,
pecan.request.enforcer)
def _set_ownership(aspect, owner_limitation, header):
attr = '%s_id' % aspect
requested_owner = getattr(data, attr)
explicit_owner = requested_owner != wtypes.Unset
caller = pecan.request.headers.get(header)
if (owner_limitation and explicit_owner
and requested_owner != caller):
raise base.ProjectNotAuthorized(requested_owner, aspect)
actual_owner = (owner_limitation or
requested_owner if explicit_owner else caller)
setattr(data, attr, actual_owner)
_set_ownership('user', user_limit, 'X-User-Id')
_set_ownership('project', project_limit, 'X-Project-Id')
# Check if there's room for one more alarm
if is_over_quota(conn, data.project_id, data.user_id):
raise OverQuota(data)
data.timestamp = now
data.state_timestamp = now
data.state_reason = ALARM_REASON_DEFAULT
ALARMS_RULES[data.type].plugin.create_hook(data)
change = data.as_dict(models.Alarm)
data.update_actions()
try:
alarm_in = models.Alarm(**change)
except Exception:
LOG.exception("Error while posting alarm: %s", change)
raise base.ClientSideError(_("Alarm incorrect"))
alarm = conn.create_alarm(alarm_in)
self._record_creation(conn, change, alarm.alarm_id, now)
v2_utils.set_resp_location_hdr("/alarms/" + alarm.alarm_id)
return Alarm.from_db_model(alarm)
@wsme_pecan.wsexpose([Alarm], [base.Query], [str], int, str)
def get_all(self, q=None, sort=None, limit=None, marker=None):
"""Return all alarms, based on the query provided.
:param q: Filter rules for the alarms to be returned.
:param sort: A list of pairs of sort key and sort dir.
:param limit: The maximum number of items to be returned.
:param marker: The pagination query marker.
"""
target = rbac.target_from_segregation_rule(
pecan.request.headers, pecan.request.enforcer)
rbac.enforce('get_alarms', pecan.request.headers,
pecan.request.enforcer, target)
q = q or []
# Timestamp is not a supported field for simple alarm queries
kwargs = v2_utils.query_to_kwargs(
q, pecan.request.storage.get_alarms,
allow_timestamps=False)
if sort or limit or marker:
kwargs['pagination'] = v2_utils.get_pagination_options(
sort, limit, marker, models.Alarm)
return [Alarm.from_db_model(m)
for m in pecan.request.storage.get_alarms(**kwargs)]
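# A client-side sketch of the listing endpoint above, showing how Query
# fields and pagination options are typically encoded on the wire
# (endpoint/token assumed; `requests` for illustration only):
def _example_list_alarms(endpoint, token):
    import requests  # illustration only, not an aodh dependency
    params = {'q.field': 'state', 'q.op': 'eq', 'q.value': 'alarm',
              'limit': 10, 'sort': 'timestamp:desc'}
    resp = requests.get('%s/v2/alarms' % endpoint,
                        headers={'X-Auth-Token': token}, params=params)
    resp.raise_for_status()
    return resp.json()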

View File

@@ -1,233 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import datetime
import functools
import inspect
from oslo_utils import strutils
from oslo_utils import timeutils
import pecan
import six
import wsme
from wsme import types as wtypes
from aodh.i18n import _
operation_kind = ('lt', 'le', 'eq', 'ne', 'ge', 'gt')
operation_kind_enum = wtypes.Enum(str, *operation_kind)
class ClientSideError(wsme.exc.ClientSideError):
def __init__(self, error, status_code=400):
pecan.response.translatable_error = error
super(ClientSideError, self).__init__(error, status_code)
class ProjectNotAuthorized(ClientSideError):
def __init__(self, id, aspect='project'):
params = dict(aspect=aspect, id=id)
super(ProjectNotAuthorized, self).__init__(
_("Not Authorized to access %(aspect)s %(id)s") % params,
status_code=401)
class AdvEnum(wtypes.wsproperty):
"""Handle default and mandatory for wtypes.Enum."""
def __init__(self, name, *args, **kwargs):
self._name = '_advenum_%s' % name
self._default = kwargs.pop('default', None)
mandatory = kwargs.pop('mandatory', False)
enum = wtypes.Enum(*args, **kwargs)
super(AdvEnum, self).__init__(datatype=enum, fget=self._get,
fset=self._set, mandatory=mandatory)
def _get(self, parent):
if hasattr(parent, self._name):
value = getattr(parent, self._name)
return value or self._default
return self._default
def _set(self, parent, value):
try:
if self.datatype.validate(value):
setattr(parent, self._name, value)
except ValueError as e:
raise wsme.exc.InvalidInput(self._name.replace('_advenum_', '', 1),
value, e)
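# Typical use on a WSME type (this mirrors how the alarm controller
# declares its severity attribute):
#
#     severity = AdvEnum('severity', str, 'low', 'moderate', 'critical',
#                        default='low')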
class Base(wtypes.DynamicBase):
@classmethod
def from_db_model(cls, m):
return cls(**(m.as_dict()))
@classmethod
def from_db_and_links(cls, m, links):
return cls(links=links, **(m.as_dict()))
def as_dict(self, db_model):
valid_keys = inspect.getargspec(db_model.__init__)[0]
if 'self' in valid_keys:
valid_keys.remove('self')
return self.as_dict_from_keys(valid_keys)
def as_dict_from_keys(self, keys):
return dict((k, getattr(self, k))
for k in keys
if hasattr(self, k) and
getattr(self, k) != wsme.Unset)
class Query(Base):
"""Query filter."""
# The data types supported by the query.
_supported_types = ['integer', 'float', 'string', 'boolean', 'datetime']
# Functions to convert the data field to the correct type.
_type_converters = {'integer': int,
'float': float,
'boolean': functools.partial(
strutils.bool_from_string, strict=True),
'string': six.text_type,
'datetime': timeutils.parse_isotime}
_op = None # provide a default
def get_op(self):
return self._op or 'eq'
def set_op(self, value):
self._op = value
field = wsme.wsattr(wtypes.text, mandatory=True)
"The name of the field to test"
# op = wsme.wsattr(operation_kind, default='eq')
# this ^ doesn't seem to work.
op = wsme.wsproperty(operation_kind_enum, get_op, set_op)
"The comparison operator. Defaults to 'eq'."
value = wsme.wsattr(wtypes.text, mandatory=True)
"The value to compare against the stored data"
type = wtypes.text
"The data type of value to compare against the stored data"
def __repr__(self):
# for logging calls
return '<Query %r %s %r %s>' % (self.field,
self.op,
self.value,
self.type)
@classmethod
def sample(cls):
return cls(field='resource_id',
op='eq',
value='bd9431c1-8d69-4ad3-803a-8d4a6b89fd36',
type='string'
)
def as_dict(self):
return self.as_dict_from_keys(['field', 'op', 'type', 'value'])
def _get_value_as_type(self, forced_type=None):
"""Convert metadata value to the specified data type.
This method is called during a metadata query to help convert the
queried metadata to the data type specified by the user. If no data
type is given, the metadata will be parsed by ast.literal_eval to
attempt a smart conversion.
NOTE (flwang) Using "_" as prefix to avoid an InvocationError raised
from wsmeext/sphinxext.py. It is OK to call it outside the Query class,
because the "public" side of that class is actually the outside of the
API, while the "private" side is the API implementation; the method is
only used in the API implementation.
:returns: metadata value converted with the specified data type.
"""
type = forced_type or self.type
try:
converted_value = self.value
if not type:
try:
converted_value = ast.literal_eval(self.value)
except (ValueError, SyntaxError):
# Unable to convert the metadata value automatically
# let it default to self.value
pass
else:
if type not in self._supported_types:
# Types must be explicitly declared so the
# correct type converter may be used. Subclasses
# of Query may define _supported_types and
# _type_converters to define their own types.
raise TypeError()
converted_value = self._type_converters[type](self.value)
if isinstance(converted_value, datetime.datetime):
converted_value = timeutils.normalize_time(converted_value)
except ValueError:
msg = (_('Unable to convert the value %(value)s'
' to the expected data type %(type)s.') %
{'value': self.value, 'type': type})
raise ClientSideError(msg)
except TypeError:
msg = (_('The data type %(type)s is not supported. The supported'
' data type list is: %(supported)s') %
{'type': type, 'supported': self._supported_types})
raise ClientSideError(msg)
except Exception:
msg = (_('Unexpected exception converting %(value)s to'
' the expected data type %(type)s.') %
{'value': self.value, 'type': type})
raise ClientSideError(msg)
return converted_value
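# Illustrative conversions (a sketch of the rules above):
#
#     Query(field='enabled', op='eq', value='true',
#           type='boolean')._get_value_as_type()        -> True
#     Query(field='timestamp', op='gt',
#           value='2015-04-01T12:00:00')._get_value_as_type('datetime')
#                                        -> datetime(2015, 4, 1, 12, 0)
#
# With no type given, ast.literal_eval is tried first, so value='5'
# comes back as the integer 5 while value='five' stays a string.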
class AlarmNotFound(ClientSideError):
def __init__(self, alarm, auth_project):
if not auth_project:
msg = _('Alarm %s not found') % alarm
else:
msg = _('Alarm %(alarm_id)s not found in project '
        '%(project)s') % {
    'alarm_id': alarm, 'project': auth_project}
super(AlarmNotFound, self).__init__(msg, status_code=404)
class AlarmRule(Base):
"""Base class Alarm Rule extension and wsme.types."""
@staticmethod
def validate_alarm(alarm):
pass
@staticmethod
def create_hook(alarm):
pass
@staticmethod
def update_hook(alarm):
pass

View File

@@ -1,111 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from pecan import rest
import six
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from aodh.api.controllers.v2 import base
def _decode_unicode(input):
"""Decode the unicode of the message, and encode it into utf-8."""
if isinstance(input, dict):
temp = {}
# If the input data is a dict, create an equivalent dict with a
# predictable insertion order to avoid inconsistencies in the
# message signature computation for equivalent payloads modulo
# ordering
for key, value in sorted(six.iteritems(input)):
temp[_decode_unicode(key)] = _decode_unicode(value)
return temp
elif isinstance(input, (tuple, list)):
# When doing a pair of JSON encode/decode operations to the tuple,
# the tuple would become list. So we have to generate the value as
# list here.
return [_decode_unicode(element) for element in input]
elif isinstance(input, six.text_type):
return input.encode('utf-8')
else:
return input
def _recursive_keypairs(d, separator=':'):
"""Generator that produces sequence of keypairs for nested dictionaries."""
for name, value in sorted(six.iteritems(d)):
if isinstance(value, dict):
for subname, subvalue in _recursive_keypairs(value, separator):
yield ('%s%s%s' % (name, separator, subname), subvalue)
elif isinstance(value, (tuple, list)):
yield name, _decode_unicode(value)
else:
yield name, value
def _flatten_capabilities(capabilities):
return dict((k, v) for k, v in _recursive_keypairs(capabilities))
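# A sketch of the flattening performed above:
#
#     _flatten_capabilities({'alarms': {'query': {'simple': True}}})
#     -> {'alarms:query:simple': True}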
class Capabilities(base.Base):
"""A representation of the API and storage capabilities.
Usually constrained by restrictions imposed by the storage driver.
"""
api = {wtypes.text: bool}
"A flattened dictionary of API capabilities"
alarm_storage = {wtypes.text: bool}
"A flattened dictionary of alarm storage capabilities"
@classmethod
def sample(cls):
return cls(
api=_flatten_capabilities({
'alarms': {'query': {'simple': True,
'complex': True},
'history': {'query': {'simple': True,
'complex': True}}},
}),
alarm_storage=_flatten_capabilities(
{'storage': {'production_ready': True}}),
)
class CapabilitiesController(rest.RestController):
"""Manages capabilities queries."""
@wsme_pecan.wsexpose(Capabilities)
def get(self):
"""Returns a flattened dictionary of API capabilities.
Capabilities supported by the currently configured storage driver.
"""
# variation in API capabilities is effectively determined by
# the lack of strict feature parity across storage drivers
alarm_conn = pecan.request.storage
driver_capabilities = {
'alarms': alarm_conn.get_capabilities()['alarms'],
}
alarm_driver_perf = alarm_conn.get_storage_capabilities()
return Capabilities(api=_flatten_capabilities(driver_capabilities),
alarm_storage=_flatten_capabilities(
alarm_driver_perf))

View File

@@ -1,395 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import jsonschema
from oslo_log import log
from oslo_utils import timeutils
import pecan
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan
from aodh.api.controllers.v2 import alarms
from aodh.api.controllers.v2 import base
from aodh.api import rbac
from aodh.i18n import _
from aodh.storage import models
LOG = log.getLogger(__name__)
class ComplexQuery(base.Base):
"""Holds a sample query encoded in json."""
filter = wtypes.text
"The filter expression encoded in json."
orderby = wtypes.text
"List of single-element dicts for specifing the ordering of the results."
limit = int
"The maximum number of results to be returned."
@classmethod
def sample(cls):
return cls(filter='{"and": [{"and": [{"=": ' +
'{"counter_name": "cpu_util"}}, ' +
'{">": {"counter_volume": 0.23}}, ' +
'{"<": {"counter_volume": 0.26}}]}, ' +
'{"or": [{"and": [{">": ' +
'{"timestamp": "2013-12-01T18:00:00"}}, ' +
'{"<": ' +
'{"timestamp": "2013-12-01T18:15:00"}}]}, ' +
'{"and": [{">": ' +
'{"timestamp": "2013-12-01T18:30:00"}}, ' +
'{"<": ' +
'{"timestamp": "2013-12-01T18:45:00"}}]}]}]}',
orderby='[{"counter_volume": "ASC"}, ' +
'{"timestamp": "DESC"}]',
limit=42
)
def _list_to_regexp(items, regexp_prefix=""):
regexp = ["^%s$" % item for item in items]
regexp = regexp_prefix + "|".join(regexp)
return regexp
class ValidatedComplexQuery(object):
complex_operators = ["and", "or"]
order_directions = ["asc", "desc"]
simple_ops = ["=", "!=", "<", ">", "<=", "=<", ">=", "=>", "=~"]
regexp_prefix = "(?i)"
complex_ops = _list_to_regexp(complex_operators, regexp_prefix)
simple_ops = _list_to_regexp(simple_ops, regexp_prefix)
order_directions = _list_to_regexp(order_directions, regexp_prefix)
timestamp_fields = ["timestamp", "state_timestamp"]
def __init__(self, query, db_model, additional_name_mapping=None,
metadata_allowed=False):
additional_name_mapping = additional_name_mapping or {}
self.name_mapping = {"user": "user_id",
"project": "project_id"}
self.name_mapping.update(additional_name_mapping)
valid_keys = db_model.get_field_names()
valid_keys = list(valid_keys) + list(self.name_mapping.keys())
valid_fields = _list_to_regexp(valid_keys)
if metadata_allowed:
valid_filter_fields = valid_fields + r"|^metadata\.[\S]+$"
else:
valid_filter_fields = valid_fields
schema_value = {
"oneOf": [{"type": "string"},
{"type": "number"},
{"type": "boolean"}],
"minProperties": 1,
"maxProperties": 1}
schema_value_in = {
"type": "array",
"items": {"oneOf": [{"type": "string"},
{"type": "number"}]},
"minItems": 1}
schema_field = {
"type": "object",
"patternProperties": {valid_filter_fields: schema_value},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_field_in = {
"type": "object",
"patternProperties": {valid_filter_fields: schema_value_in},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_leaf_in = {
"type": "object",
"patternProperties": {"(?i)^in$": schema_field_in},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_leaf_simple_ops = {
"type": "object",
"patternProperties": {self.simple_ops: schema_field},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_and_or_array = {
"type": "array",
"items": {"$ref": "#"},
"minItems": 2}
schema_and_or = {
"type": "object",
"patternProperties": {self.complex_ops: schema_and_or_array},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
schema_not = {
"type": "object",
"patternProperties": {"(?i)^not$": {"$ref": "#"}},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}
self.schema = {
"oneOf": [{"$ref": "#/definitions/leaf_simple_ops"},
{"$ref": "#/definitions/leaf_in"},
{"$ref": "#/definitions/and_or"},
{"$ref": "#/definitions/not"}],
"minProperties": 1,
"maxProperties": 1,
"definitions": {"leaf_simple_ops": schema_leaf_simple_ops,
"leaf_in": schema_leaf_in,
"and_or": schema_and_or,
"not": schema_not}}
self.orderby_schema = {
"type": "array",
"items": {
"type": "object",
"patternProperties":
{valid_fields:
{"type": "string",
"pattern": self.order_directions}},
"additionalProperties": False,
"minProperties": 1,
"maxProperties": 1}}
self.original_query = query
def validate(self, visibility_field):
"""Validates the query content and does the necessary conversions."""
if self.original_query.filter is wtypes.Unset:
self.filter_expr = None
else:
try:
self.filter_expr = json.loads(self.original_query.filter)
self._validate_filter(self.filter_expr)
except (ValueError, jsonschema.exceptions.ValidationError) as e:
raise base.ClientSideError(
_("Filter expression not valid: %s") % str(e))
self._replace_isotime_with_datetime(self.filter_expr)
self._convert_operator_to_lower_case(self.filter_expr)
self._normalize_field_names_for_db_model(self.filter_expr)
self._force_visibility(visibility_field)
if self.original_query.orderby is wtypes.Unset:
self.orderby = None
else:
try:
self.orderby = json.loads(self.original_query.orderby)
self._validate_orderby(self.orderby)
except (ValueError, jsonschema.exceptions.ValidationError) as e:
raise base.ClientSideError(
_("Order-by expression not valid: %s") % e)
self._convert_orderby_to_lower_case(self.orderby)
self._normalize_field_names_in_orderby(self.orderby)
if self.original_query.limit is wtypes.Unset:
self.limit = None
else:
self.limit = self.original_query.limit
if self.limit is not None and self.limit <= 0:
msg = _('Limit should be positive')
raise base.ClientSideError(msg)
@staticmethod
def lowercase_values(mapping):
"""Converts the values in the mapping dict to lowercase."""
items = mapping.items()
for key, value in items:
mapping[key] = value.lower()
def _convert_orderby_to_lower_case(self, orderby):
for orderby_field in orderby:
self.lowercase_values(orderby_field)
def _normalize_field_names_in_orderby(self, orderby):
for orderby_field in orderby:
self._replace_field_names(orderby_field)
def _traverse_postorder(self, tree, visitor):
op = list(tree.keys())[0]
if op.lower() in self.complex_operators:
for i, operand in enumerate(tree[op]):
self._traverse_postorder(operand, visitor)
if op.lower() == "not":
self._traverse_postorder(tree[op], visitor)
visitor(tree)
def _check_cross_project_references(self, own_project_id,
visibility_field):
"""Do not allow other than own_project_id."""
def check_project_id(subfilter):
op, value = list(subfilter.items())[0]
if (op.lower() not in self.complex_operators
and list(value.keys())[0] == visibility_field
and value[visibility_field] != own_project_id):
raise base.ProjectNotAuthorized(value[visibility_field])
self._traverse_postorder(self.filter_expr, check_project_id)
def _force_visibility(self, visibility_field):
"""Force visibility field.
If the tenant is not admin insert an extra
"and <visibility_field>=<tenant's project_id>" clause to the query.
"""
authorized_project = rbac.get_limited_to_project(
pecan.request.headers, pecan.request.enforcer)
is_admin = authorized_project is None
if not is_admin:
self._restrict_to_project(authorized_project, visibility_field)
self._check_cross_project_references(authorized_project,
visibility_field)
def _restrict_to_project(self, project_id, visibility_field):
restriction = {"=": {visibility_field: project_id}}
if self.filter_expr is None:
self.filter_expr = restriction
else:
self.filter_expr = {"and": [restriction, self.filter_expr]}
def _replace_isotime_with_datetime(self, filter_expr):
def replace_isotime(subfilter):
op, value = list(subfilter.items())[0]
if op.lower() not in self.complex_operators:
field = list(value.keys())[0]
if field in self.timestamp_fields:
date_time = self._convert_to_datetime(subfilter[op][field])
subfilter[op][field] = date_time
self._traverse_postorder(filter_expr, replace_isotime)
def _normalize_field_names_for_db_model(self, filter_expr):
def _normalize_field_names(subfilter):
op, value = list(subfilter.items())[0]
if op.lower() not in self.complex_operators:
self._replace_field_names(value)
self._traverse_postorder(filter_expr,
_normalize_field_names)
def _replace_field_names(self, subfilter):
field, value = list(subfilter.items())[0]
if field in self.name_mapping:
del subfilter[field]
subfilter[self.name_mapping[field]] = value
if field.startswith("metadata."):
del subfilter[field]
subfilter["resource_" + field] = value
@staticmethod
def lowercase_keys(mapping):
"""Converts the values of the keys in mapping to lowercase."""
items = mapping.items()
for key, value in items:
del mapping[key]
mapping[key.lower()] = value
def _convert_operator_to_lower_case(self, filter_expr):
self._traverse_postorder(filter_expr, self.lowercase_keys)
@staticmethod
def _convert_to_datetime(isotime):
try:
date_time = timeutils.parse_isotime(isotime)
date_time = date_time.replace(tzinfo=None)
return date_time
except ValueError:
LOG.exception("String %s is not a valid isotime", isotime)
msg = _('Failed to parse the timestamp value %s') % isotime
raise base.ClientSideError(msg)
def _validate_filter(self, filter_expr):
jsonschema.validate(filter_expr, self.schema)
def _validate_orderby(self, orderby_expr):
jsonschema.validate(orderby_expr, self.orderby_schema)
class QueryAlarmHistoryController(rest.RestController):
"""Provides complex query possibilities for alarm history."""
@wsme_pecan.wsexpose([alarms.AlarmChange], body=ComplexQuery)
def post(self, body):
"""Define query for retrieving AlarmChange data.
:param body: Query rules for the alarm history to be returned.
"""
target = rbac.target_from_segregation_rule(
pecan.request.headers, pecan.request.enforcer)
rbac.enforce('query_alarm_history', pecan.request.headers,
pecan.request.enforcer, target)
query = ValidatedComplexQuery(body,
models.AlarmChange)
query.validate(visibility_field="on_behalf_of")
conn = pecan.request.storage
return [alarms.AlarmChange.from_db_model(s)
for s in conn.query_alarm_history(query.filter_expr,
query.orderby,
query.limit)]
class QueryAlarmsController(rest.RestController):
"""Provides complex query possibilities for alarms."""
history = QueryAlarmHistoryController()
@wsme_pecan.wsexpose([alarms.Alarm], body=ComplexQuery)
def post(self, body):
"""Define query for retrieving Alarm data.
:param body: Query rules for the alarms to be returned.
"""
target = rbac.target_from_segregation_rule(
pecan.request.headers, pecan.request.enforcer)
rbac.enforce('query_alarm', pecan.request.headers,
pecan.request.enforcer, target)
query = ValidatedComplexQuery(body,
models.Alarm)
query.validate(visibility_field="project_id")
conn = pecan.request.storage
return [alarms.Alarm.from_db_model(s)
for s in conn.query_alarms(query.filter_expr,
query.orderby,
query.limit)]
class QueryController(rest.RestController):
alarms = QueryAlarmsController()
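# An end-to-end sketch (hypothetical request): the controllers above are
# reached via POST /v2/query/alarms with a JSON body such as
#
#     {"filter": "{\"=\": {\"state\": \"alarm\"}}",
#      "orderby": "[{\"timestamp\": \"desc\"}]",
#      "limit": 10}
#
# For a non-admin caller, _force_visibility() rewrites the filter to
# {"and": [{"=": {"project_id": <caller project>}}, <original filter>]}.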

View File

@@ -1,31 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from aodh.api.controllers.v2 import alarms
from aodh.api.controllers.v2 import capabilities
from aodh.api.controllers.v2 import query
class V2Controller(object):
"""Version 2 API controller root."""
alarms = alarms.AlarmsController()
query = query.QueryController()
capabilities = capabilities.CapabilitiesController()

View File

@@ -1,322 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2013 IBM Corp.
# Copyright 2013 eNovance <licensing@enovance.com>
# Copyright Ericsson AB 2013. All rights reserved
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import datetime
import inspect
from oslo_utils import timeutils
import pecan
import six
from six.moves.urllib import parse as urllib_parse
import wsme
from aodh.api.controllers.v2 import base
from aodh.api import rbac
def get_auth_project(on_behalf_of=None):
# when an alarm is created by an admin on behalf of another tenant
# we must ensure for:
# - threshold alarm, that an implicit query constraint on project_id is
# added so that admin-level visibility on statistics is not leaked
# Hence, for null auth_project (indicating admin-ness) we check if
# the creating tenant differs from the tenant on whose behalf the
# alarm is being created
auth_project = rbac.get_limited_to_project(pecan.request.headers,
pecan.request.enforcer)
created_by = pecan.request.headers.get('X-Project-Id')
is_admin = auth_project is None
if is_admin and on_behalf_of != created_by:
auth_project = on_behalf_of
return auth_project
def sanitize_query(query, db_func, on_behalf_of=None):
"""Check the query.
See if:
1) the request is coming from admin - then allow full visibility
2) non-admin - make sure that the query includes the requester's project.
"""
q = copy.copy(query)
auth_project = get_auth_project(on_behalf_of)
if auth_project:
_verify_query_segregation(q, auth_project)
proj_q = [i for i in q if i.field == 'project_id']
valid_keys = inspect.getargspec(db_func)[0]
if not proj_q and 'on_behalf_of' not in valid_keys:
# The user is restricted, but they didn't specify a project
# so add it for them.
q.append(base.Query(field='project_id',
op='eq',
value=auth_project))
return q
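# A sketch of the effect: for a non-admin caller whose project is "p1",
# an empty query comes back as
#     [Query(field='project_id', op='eq', value='p1')]
# so every storage lookup stays scoped to the caller's project.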
def _verify_query_segregation(query, auth_project=None):
"""Ensure non-admin queries are not constrained to another project."""
auth_project = (auth_project or
rbac.get_limited_to_project(pecan.request.headers,
pecan.request.enforcer))
if not auth_project:
return
for q in query:
if q.field in ('project', 'project_id') and auth_project != q.value:
raise base.ProjectNotAuthorized(q.value)
def validate_query(query, db_func, internal_keys=None,
allow_timestamps=True):
"""Validates the syntax of the query and verifies the query.
Verification checks whether the query request is authorized for the
included project.
:param query: Query expression that should be validated
:param db_func: the storage-level function whose arguments form the
valid_keys list, which defines the valid fields for a query
expression
:param internal_keys: internally used field names that should not be
used for querying
:param allow_timestamps: defines whether the timestamp-based constraint is
applicable for this query or not
:raises InvalidInput: if an operator is not supported for a given field
:raises InvalidInput: if timestamp constraints are allowed, but
search_offset was included without timestamp constraint
:raises UnknownArgument: if a field name is not a timestamp field, nor
in the list of valid keys
"""
internal_keys = internal_keys or []
_verify_query_segregation(query)
valid_keys = inspect.getargspec(db_func)[0]
if 'alarm_type' in valid_keys:
valid_keys.remove('alarm_type')
valid_keys.append('type')
if 'pagination' in valid_keys:
valid_keys.remove('pagination')
internal_timestamp_keys = ['end_timestamp', 'start_timestamp',
'end_timestamp_op', 'start_timestamp_op']
if 'start_timestamp' in valid_keys:
internal_keys += internal_timestamp_keys
valid_keys += ['timestamp', 'search_offset']
internal_keys.append('self')
internal_keys.append('metaquery')
valid_keys = set(valid_keys) - set(internal_keys)
translation = {'user_id': 'user',
'project_id': 'project',
'resource_id': 'resource'}
has_timestamp_query = _validate_timestamp_fields(query,
'timestamp',
('lt', 'le', 'gt', 'ge'),
allow_timestamps)
has_search_offset_query = _validate_timestamp_fields(query,
'search_offset',
'eq',
allow_timestamps)
if has_search_offset_query and not has_timestamp_query:
raise wsme.exc.InvalidInput('field', 'search_offset',
"search_offset cannot be used without " +
"timestamp")
def _is_field_metadata(field):
return (field.startswith('metadata.') or
field.startswith('resource_metadata.'))
for i in query:
if i.field not in ('timestamp', 'search_offset'):
key = translation.get(i.field, i.field)
operator = i.op
if key in valid_keys or _is_field_metadata(i.field):
if operator == 'eq':
if key == 'enabled':
i._get_value_as_type('boolean')
elif _is_field_metadata(key):
i._get_value_as_type()
else:
raise wsme.exc.InvalidInput('op', i.op,
'unimplemented operator for '
'%s' % i.field)
else:
msg = ("unrecognized field in query: %s, "
"valid keys: %s") % (query, sorted(valid_keys))
raise wsme.exc.UnknownArgument(key, msg)
def _validate_timestamp_fields(query, field_name, operator_list,
allow_timestamps):
"""Validates the timestamp related constraints in a query if there are any.
:param query: query expression that may contain the timestamp fields
:param field_name: timestamp name, which should be checked (timestamp,
search_offset)
:param operator_list: list of operators that are supported for that
timestamp, which was specified in the parameter field_name
:param allow_timestamps: defines whether the timestamp-based constraint is
applicable to this query or not
:returns: True, if the query contained an allowed and syntactically
correct timestamp constraint on the field named in field_name.
:returns: False, if the query contained no timestamp constraint on
the field named in field_name.
:raises InvalidInput: if an operator is unsupported for a given timestamp
field
:raises UnknownArgument: if the timestamp constraint is not allowed in
the query
"""
for item in query:
if item.field == field_name:
# If the *timestamp* or *search_offset* field was specified in the
# query but timestamps are not supported for the resource the query
# was invoked on, raise an exception.
if not allow_timestamps:
raise wsme.exc.UnknownArgument(field_name,
"not valid for " +
"this resource")
if item.op not in operator_list:
raise wsme.exc.InvalidInput('op', item.op,
'unimplemented operator for %s' %
item.field)
return True
return False
def query_to_kwargs(query, db_func, internal_keys=None,
allow_timestamps=True):
validate_query(query, db_func, internal_keys=internal_keys,
allow_timestamps=allow_timestamps)
query = sanitize_query(query, db_func)
translation = {'user_id': 'user',
'project_id': 'project',
'resource_id': 'resource',
'type': 'alarm_type'}
stamp = {}
kwargs = {}
for i in query:
if i.field == 'timestamp':
if i.op in ('lt', 'le'):
stamp['end_timestamp'] = i.value
stamp['end_timestamp_op'] = i.op
elif i.op in ('gt', 'ge'):
stamp['start_timestamp'] = i.value
stamp['start_timestamp_op'] = i.op
else:
if i.op == 'eq':
if i.field == 'search_offset':
stamp['search_offset'] = i.value
elif i.field == 'enabled':
kwargs[i.field] = i._get_value_as_type('boolean')
else:
key = translation.get(i.field, i.field)
kwargs[key] = i.value
if stamp:
kwargs.update(_get_query_timestamps(stamp))
return kwargs
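# An illustrative mapping (a sketch, assuming the storage function
# accepts an alarm_type argument):
#
#     [Query(field='type', op='eq', value='threshold'),
#      Query(field='timestamp', op='gt', value='2015-04-01T12:00:00')]
#     -> {'alarm_type': 'threshold',
#         'start_timestamp': datetime(2015, 4, 1, 12, 0),
#         'start_timestamp_op': 'gt',
#         'end_timestamp': None, 'end_timestamp_op': None}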
def _get_query_timestamps(args=None):
"""Return any optional timestamp information in the request.
Determine the desired range, if any, from the GET arguments. Set
up the query range using the specified offset.
[query_start ... start_timestamp ... end_timestamp ... query_end]
Returns a dictionary containing:
start_timestamp: First timestamp to use for query
start_timestamp_op: First timestamp operator to use for query
end_timestamp: Final timestamp to use for query
end_timestamp_op: Final timestamp operator to use for query
"""
if args is None:
return {}
search_offset = int(args.get('search_offset', 0))
def _parse_timestamp(timestamp):
if not timestamp:
return None
try:
iso_timestamp = timeutils.parse_isotime(timestamp)
iso_timestamp = iso_timestamp.replace(tzinfo=None)
except ValueError:
raise wsme.exc.InvalidInput('timestamp', timestamp,
'invalid timestamp format')
return iso_timestamp
start_timestamp = _parse_timestamp(args.get('start_timestamp'))
end_timestamp = _parse_timestamp(args.get('end_timestamp'))
start_timestamp = start_timestamp - datetime.timedelta(
minutes=search_offset) if start_timestamp else None
end_timestamp = end_timestamp + datetime.timedelta(
minutes=search_offset) if end_timestamp else None
return {'start_timestamp': start_timestamp,
'end_timestamp': end_timestamp,
'start_timestamp_op': args.get('start_timestamp_op'),
'end_timestamp_op': args.get('end_timestamp_op')}
def set_resp_location_hdr(location):
location = '%s%s' % (pecan.request.script_name, location)
# NOTE(sileht): according the pep-3333 the headers must be
# str in py2 and py3 even this is not the same thing in both
# version
# see: http://legacy.python.org/dev/peps/pep-3333/#unicode-issues
if six.PY2 and isinstance(location, six.text_type):
location = location.encode('utf-8')
location = urllib_parse.quote(location)
pecan.response.headers['Location'] = location
def get_pagination_options(sort, limit, marker, api_model):
sorts = list()
if limit and limit <= 0:
raise wsme.exc.InvalidInput('limit', limit,
'it should be a positive integer.')
for s in sort or []:
sort_key, __, sort_dir = s.partition(':')
if sort_key not in api_model.SUPPORT_SORT_KEYS:
raise wsme.exc.InvalidInput(
'sort', s, "the sort parameter should be a pair of sort "
"key and sort dir combined with ':', or only"
" sort key specified and sort dir will be default "
"'asc', the supported sort keys are: %s" %
str(api_model.SUPPORT_SORT_KEYS))
# the default sort direction is 'asc'
sorts.append((sort_key, sort_dir or 'asc'))
return {'limit': limit,
'marker': marker,
'sort': sorts}
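# A sketch (assuming both keys are in models.Alarm.SUPPORT_SORT_KEYS):
#
#     get_pagination_options(['timestamp:desc', 'alarm_id'], 100, None,
#                            models.Alarm)
#     -> {'limit': 100, 'marker': None,
#         'sort': [('timestamp', 'desc'), ('alarm_id', 'asc')]}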

View File

@@ -1,53 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
from pecan import hooks
class ConfigHook(hooks.PecanHook):
"""Attach the configuration and policy enforcer object to the request.
That allows controllers to get it.
"""
def __init__(self, conf):
self.conf = conf
self.enforcer = policy.Enforcer(conf, default_rule="default")
def before(self, state):
state.request.cfg = self.conf
state.request.enforcer = self.enforcer
class DBHook(hooks.PecanHook):
def __init__(self, alarm_conn):
self.storage = alarm_conn
def before(self, state):
state.request.storage = self.storage
class TranslationHook(hooks.PecanHook):
def after(self, state):
# After a request has been done, we need to see if
# ClientSideError has added an error onto the response.
# If it has we need to get it info the thread-safe WSGI
# environ to be used by the ParsableErrorMiddleware.
if hasattr(state.response, 'translatable_error'):
state.request.environ['translatable_error'] = (
state.response.translatable_error)
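# A sketch of how these hooks are wired into the WSGI app (names
# abbreviated; the real wiring lives in aodh.api.app):
#
#     app = pecan.make_app(root_controller,
#                          hooks=[ConfigHook(conf),
#                                 DBHook(storage_conn),
#                                 TranslationHook()])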

View File

@@ -1,127 +0,0 @@
#
# Copyright 2013 IBM Corp.
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Middleware to replace the plain text message body of an error
response with one formatted so the client can parse it.
Based on pecan.middleware.errordocument
"""
import json
from lxml import etree
from oslo_log import log
import six
import webob
from aodh import i18n
LOG = log.getLogger(__name__)
class ParsableErrorMiddleware(object):
"""Replace error body with something the client can parse."""
@staticmethod
def best_match_language(accept_language):
"""Determines best available locale from the Accept-Language header.
:returns: the best language match or None if the 'Accept-Language'
header was not available in the request.
"""
if not accept_language:
return None
all_languages = i18n.get_available_languages()
return accept_language.best_match(all_languages)
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
# Shared state for this request, modified by
# replacement_start_response() and used when an error is being
# reported.
state = {}
def replacement_start_response(status, headers, exc_info=None):
"""Overrides the default response to make errors parsable."""
try:
status_code = int(status.split(' ')[0])
state['status_code'] = status_code
except (ValueError, TypeError): # pragma: nocover
raise Exception(
    'ErrorDocumentMiddleware received an invalid '
    'status %s' % status)
else:
if (state['status_code'] // 100) not in (2, 3):
# Remove some headers so we can replace them later
# when we have the full error message and can
# compute the length.
headers = [(h, v)
for (h, v) in headers
if h not in ('Content-Length', 'Content-Type')
]
# Save the headers in case we need to modify them.
state['headers'] = headers
return start_response(status, headers, exc_info)
app_iter = self.app(environ, replacement_start_response)
if (state['status_code'] // 100) not in (2, 3):
req = webob.Request(environ)
error = environ.get('translatable_error')
user_locale = self.best_match_language(req.accept_language)
if (req.accept.best_match(['application/json', 'application/xml'])
== 'application/xml'):
content_type = 'application/xml'
try:
# simple check xml is valid
fault = etree.fromstring(b'\n'.join(app_iter))
# Add the translated error to the xml data
if error is not None:
for fault_string in fault.findall('faultstring'):
fault_string.text = i18n.translate(error,
user_locale)
error_message = etree.tostring(fault)
body = b''.join((b'<error_message>',
error_message,
b'</error_message>'))
except etree.XMLSyntaxError as err:
LOG.error('Error parsing HTTP response: %s', err)
error_message = state['status_code']
body = '<error_message>%s</error_message>' % error_message
if six.PY3:
body = body.encode('utf-8')
else:
content_type = 'application/json'
app_data = b'\n'.join(app_iter)
if six.PY3:
app_data = app_data.decode('utf-8')
try:
fault = json.loads(app_data)
if error is not None and 'faultstring' in fault:
fault['faultstring'] = i18n.translate(error,
user_locale)
except ValueError as err:
fault = app_data
body = json.dumps({'error_message': fault})
if six.PY3:
body = body.encode('utf-8')
state['headers'].append(('Content-Length', str(len(body))))
state['headers'].append(('Content-Type', content_type))
body = [body]
else:
body = app_iter
return body
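# A sketch of the middleware in use: the API server wraps its WSGI app so
# that a plain error body carrying a faultstring comes back as a parsable
# document, e.g. for JSON clients
#     {"error_message": {"faultstring": "...", ...}}
#
#     app = ParsableErrorMiddleware(api_app)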

View File

@@ -1,20 +0,0 @@
{
"context_is_admin": "role:admin",
"segregation": "rule:context_is_admin",
"admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"telemetry:get_alarm": "rule:admin_or_owner",
"telemetry:get_alarms": "rule:admin_or_owner",
"telemetry:query_alarm": "rule:admin_or_owner",
"telemetry:create_alarm": "",
"telemetry:change_alarm": "rule:admin_or_owner",
"telemetry:delete_alarm": "rule:admin_or_owner",
"telemetry:get_alarm_state": "rule:admin_or_owner",
"telemetry:change_alarm_state": "rule:admin_or_owner",
"telemetry:alarm_history": "rule:admin_or_owner",
"telemetry:query_alarm_history": "rule:admin_or_owner"
}

View File

@@ -1,107 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2014 Hewlett-Packard Company
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Access Control Lists (ACL's) control access the API server."""
import pecan
def target_from_segregation_rule(headers, enforcer):
"""Return a target that corresponds of an alarm returned by segregation rule
This allows to use project_id: in an oslo_policy rule for query/listing.
:param headers: HTTP headers dictionary
:param enforcer: policy enforcer
:returns: target
"""
project_id = get_limited_to_project(headers, enforcer)
if project_id is not None:
return {'project_id': project_id}
return {}
def enforce(policy_name, headers, enforcer, target):
"""Return the user and project the request should be limited to.
:param policy_name: the policy name to validate authz against.
:param headers: HTTP headers dictionary
:param enforcer: policy enforcer
:param target: the target (e.g. the alarm ownership dict) to check the policy against
"""
rule_method = "telemetry:" + policy_name
credentials = {
'roles': headers.get('X-Roles', "").split(","),
'user_id': headers.get('X-User-Id'),
'project_id': headers.get('X-Project-Id'),
}
# TODO(sileht): add deprecation warning to be able to remove this:
# maintain backward compat with Juno and previous by allowing the action if
# there is no rule defined for it
rules = enforcer.rules.keys()
if rule_method not in rules:
return
if not enforcer.enforce(rule_method, target, credentials):
pecan.core.abort(status_code=403,
detail='RBAC Authorization Failed')
# TODO(fabiog): these methods are still used because the scoping part is really
# convoluted and difficult to separate out.
def get_limited_to(headers, enforcer):
"""Return the user and project the request should be limited to.
:param headers: HTTP headers dictionary
:param enforcer: policy enforcer
:return: A tuple of (user, project), set to None if there's no limit on
one of these.
"""
# TODO(sileht): Only filtering on role work currently for segregation
# oslo.policy expects the target to be the alarm. That will allow
# creating more enhanced rbac. But for now we enforce the
# scoping of request to the project-id, so...
target = {}
credentials = {
'roles': headers.get('X-Roles', "").split(","),
}
# maintain backward compat with Juno and previous by using context_is_admin
# rule if the segregation rule (added in Kilo) is not defined
rules = enforcer.rules.keys()
rule_name = 'segregation' if 'segregation' in rules else 'context_is_admin'
if not enforcer.enforce(rule_name, target, credentials):
return headers.get('X-User-Id'), headers.get('X-Project-Id')
return None, None
def get_limited_to_project(headers, enforcer):
"""Return the project the request should be limited to.
:param headers: HTTP headers dictionary
:param enforcer: policy enforcer
:return: A project, or None if there's no limit on it.
"""
return get_limited_to(headers, enforcer)[1]
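# Illustrative behaviour (a sketch against the default policy file above):
# headers {'X-Roles': 'admin'} satisfy the "segregation" rule, so
# get_limited_to() returns (None, None) and queries are unscoped; a plain
# member gets back (their X-User-Id, their X-Project-Id) and every storage
# query is scoped to that project.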

View File

@@ -1,29 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import sys
def config_generator():
try:
from oslo_config import generator
generator.main(
['--config-file',
'%s/aodh-config-generator.conf' % os.path.dirname(__file__)]
+ sys.argv[1:])
except Exception as e:
print("Unable to build sample configuration file: %s" % e)
return 1

View File

@@ -1,47 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 OpenStack Foundation
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import cotyledon
from aodh import evaluator as evaluator_svc
from aodh import event as event_svc
from aodh import notifier as notifier_svc
from aodh import service
def notifier():
conf = service.prepare_service()
sm = cotyledon.ServiceManager()
sm.add(notifier_svc.AlarmNotifierService,
workers=conf.notifier.workers, args=(conf,))
sm.run()
def evaluator():
conf = service.prepare_service()
sm = cotyledon.ServiceManager()
sm.add(evaluator_svc.AlarmEvaluationService,
workers=conf.evaluator.workers, args=(conf,))
sm.run()
def listener():
conf = service.prepare_service()
sm = cotyledon.ServiceManager()
sm.add(event_svc.EventAlarmEvaluationService,
workers=conf.listener.workers, args=(conf,))
sm.run()

View File

@@ -1,12 +0,0 @@
[DEFAULT]
wrap_width = 79
namespace = aodh
namespace = aodh-auth
namespace = oslo.db
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.middleware.cors
namespace = oslo.middleware.healthcheck
namespace = oslo.middleware.http_proxy_to_wsgi
namespace = oslo.policy
namespace = keystonemiddleware.auth_token

View File

@@ -1,41 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log
from aodh import service
from aodh import storage
LOG = log.getLogger(__name__)
def dbsync():
conf = service.prepare_service()
storage.get_connection_from_config(conf).upgrade()
def expirer():
conf = service.prepare_service()
if conf.database.alarm_history_time_to_live > 0:
LOG.debug("Clearing expired alarm history data")
storage_conn = storage.get_connection_from_config(conf)
storage_conn.clear_expired_alarm_history_data(
conf.database.alarm_history_time_to_live)
else:
LOG.info("Nothing to clean, database alarm history time to live "
"is disabled")

View File

View File

@@ -1,32 +0,0 @@
# Copyright 2016 Hewlett Packard Enterprise Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_middleware import cors
def set_cors_middleware_defaults():
"""Update default configuration options for oslo.middleware."""
cors.set_defaults(
allow_headers=['X-Auth-Token',
'X-Openstack-Request-Id',
'X-Subject-Token'],
expose_headers=['X-Auth-Token',
'X-Openstack-Request-Id',
'X-Subject-Token'],
allow_methods=['GET',
'PUT',
'POST',
'DELETE',
'PATCH']
)

View File

@@ -1,246 +0,0 @@
#
# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import bisect
import hashlib
import struct
from oslo_config import cfg
from oslo_log import log
from oslo_utils import uuidutils
import six
import tenacity
import tooz.coordination
LOG = log.getLogger(__name__)
OPTS = [
cfg.StrOpt('backend_url',
help='The backend URL to use for distributed coordination. If '
'left empty, per-deployment central agent and per-host '
'compute agent won\'t do workload '
'partitioning and will only function correctly if a '
'single instance of that service is running.'),
cfg.FloatOpt('heartbeat',
default=1.0,
help='Number of seconds between heartbeats for distributed '
'coordination.'),
cfg.FloatOpt('check_watchers',
default=10.0,
help='Number of seconds between checks to see if group '
'membership has changed'),
cfg.IntOpt('retry_backoff',
default=1,
help='Retry backoff factor when retrying to connect with'
' coordination backend'),
cfg.IntOpt('max_retry_interval',
default=30,
help='Maximum number of seconds between retry to join '
'partitioning group')
]
class ErrorJoiningPartitioningGroup(Exception):
def __init__(self):
super(ErrorJoiningPartitioningGroup, self).__init__((
'Error occurred when joining partitioning group'))
class MemberNotInGroupError(Exception):
def __init__(self, group_id, members, my_id):
super(MemberNotInGroupError, self).__init__((
'Group ID: %(group_id)s, Members: %(members)s, Me: %(me)s: '
'Current agent is not part of group and cannot take tasks') %
{'group_id': group_id, 'members': members, 'me': my_id})
class HashRing(object):
def __init__(self, nodes, replicas=100):
self._ring = dict()
self._sorted_keys = []
for node in nodes:
for r in six.moves.range(replicas):
hashed_key = self._hash('%s-%s' % (node, r))
self._ring[hashed_key] = node
self._sorted_keys.append(hashed_key)
self._sorted_keys.sort()
@staticmethod
def _hash(key):
return struct.unpack_from('>I',
hashlib.md5(str(key).encode()).digest())[0]
def _get_position_on_ring(self, key):
hashed_key = self._hash(key)
position = bisect.bisect(self._sorted_keys, hashed_key)
return position if position < len(self._sorted_keys) else 0
def get_node(self, key):
if not self._ring:
return None
pos = self._get_position_on_ring(key)
return self._ring[self._sorted_keys[pos]]
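# A quick sketch of the ring's behaviour (names illustrative):
#
#     ring = HashRing(['agent-1', 'agent-2', 'agent-3'])
#     ring.get_node('alarm-1234')   # deterministic owner for this key,
#                                   # stable until membership changes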
class PartitionCoordinator(object):
"""Workload partitioning coordinator.
This class uses the `tooz` library to manage group membership.
To ensure that the other agents know this agent is still alive,
the `heartbeat` method should be called periodically.
Coordination errors and reconnects are handled under the hood, so the
service using the partition coordinator need not care whether the
coordination backend is down. `extract_my_subset` will simply return an
empty iterable in this case.
"""
def __init__(self, conf, my_id=None):
self.conf = conf
self.backend_url = self.conf.coordination.backend_url
self._coordinator = None
self._groups = set()
self._my_id = my_id or uuidutils.generate_uuid()
def start(self):
if self.backend_url:
try:
self._coordinator = tooz.coordination.get_coordinator(
self.backend_url, self._my_id)
self._coordinator.start()
LOG.info('Coordination backend started successfully.')
except tooz.coordination.ToozError:
LOG.exception('Error connecting to coordination backend.')
def stop(self):
if not self._coordinator:
return
for group in list(self._groups):
self.leave_group(group)
try:
self._coordinator.stop()
except tooz.coordination.ToozError:
LOG.exception('Error connecting to coordination backend.')
finally:
self._coordinator = None
def is_active(self):
return self._coordinator is not None
def heartbeat(self):
if self._coordinator:
if not self._coordinator.is_started:
# re-connect
self.start()
try:
self._coordinator.heartbeat()
except tooz.coordination.ToozError:
LOG.exception('Error sending a heartbeat to coordination '
'backend.')
def join_group(self, group_id):
if (not self._coordinator or not self._coordinator.is_started
or not group_id):
return
@tenacity.retry(
wait=tenacity.wait_exponential(
multiplier=self.conf.coordination.retry_backoff,
max=self.conf.coordination.max_retry_interval),
retry=tenacity.retry_if_exception_type(
ErrorJoiningPartitioningGroup))
def _inner():
try:
join_req = self._coordinator.join_group(group_id)
join_req.get()
LOG.info('Joined partitioning group %s', group_id)
except tooz.coordination.MemberAlreadyExist:
return
except tooz.coordination.GroupNotCreated:
create_grp_req = self._coordinator.create_group(group_id)
try:
create_grp_req.get()
except tooz.coordination.GroupAlreadyExist:
pass
raise ErrorJoiningPartitioningGroup()
except tooz.coordination.ToozError:
LOG.exception('Error joining partitioning group %s,'
' re-trying', group_id)
raise ErrorJoiningPartitioningGroup()
self._groups.add(group_id)
return _inner()
def leave_group(self, group_id):
if group_id not in self._groups:
return
if self._coordinator:
self._coordinator.leave_group(group_id)
self._groups.remove(group_id)
LOG.info('Left partitioning group %s', group_id)
def _get_members(self, group_id):
if not self._coordinator:
return [self._my_id]
while True:
get_members_req = self._coordinator.get_members(group_id)
try:
return get_members_req.get()
except tooz.coordination.GroupNotCreated:
self.join_group(group_id)
@tenacity.retry(
wait=tenacity.wait_random(max=2),
stop=tenacity.stop_after_attempt(5),
retry=tenacity.retry_if_exception_type(MemberNotInGroupError),
reraise=True)
def extract_my_subset(self, group_id, universal_set):
"""Filters an iterable, returning only objects assigned to this agent.
We have a list of objects and get a list of active group members from
`tooz`. We then hash all the objects into buckets and return only
the ones that hashed into *our* bucket.
"""
if not group_id:
return universal_set
if group_id not in self._groups:
self.join_group(group_id)
try:
members = self._get_members(group_id)
LOG.debug('Members of group: %s, Me: %s', members, self._my_id)
if self._my_id not in members:
LOG.warning('Cannot extract tasks because agent failed to '
'join group properly. Rejoining group.')
self.join_group(group_id)
members = self._get_members(group_id)
if self._my_id not in members:
raise MemberNotInGroupError(group_id, members, self._my_id)
LOG.debug('Members of group: %s, Me: %s', members, self._my_id)
hr = HashRing(members)
LOG.debug('Universal set: %s', universal_set)
my_subset = [v for v in universal_set
if hr.get_node(str(v)) == self._my_id]
LOG.debug('My subset: %s', my_subset)
return my_subset
except tooz.coordination.ToozError:
LOG.exception('Error getting group membership info from '
'coordination backend.')
return []
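
A minimal usage sketch of the HashRing above, assuming it is imported from
aodh.coordination as elsewhere in this tree; the node and key names are
illustrative. Each key lands deterministically on exactly one member, which
is what extract_my_subset() relies on:

from aodh.coordination import HashRing

ring = HashRing(nodes=['agent-1', 'agent-2', 'agent-3'])
for alarm_id in ('alarm-a', 'alarm-b', 'alarm-c'):
    # get_node() walks the ring of 100 replica points per node and
    # returns the member owning the bucket this key hashes into.
    print(alarm_id, '->', ring.get_node(alarm_id))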

View File

@@ -1,277 +0,0 @@
#
# Copyright 2013-2015 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import datetime
import json
import threading
from concurrent import futures
import cotyledon
import croniter
from futurist import periodics
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
from oslo_utils import uuidutils
import pytz
import six
from stevedore import extension
import aodh
from aodh import coordination
from aodh import keystone_client
from aodh import messaging
from aodh import queue
from aodh import storage
from aodh.storage import models
LOG = log.getLogger(__name__)
UNKNOWN = 'insufficient data'
OK = 'ok'
ALARM = 'alarm'
OPTS = [
cfg.BoolOpt('record_history',
default=True,
help='Record alarm change events.'
),
]
@six.add_metaclass(abc.ABCMeta)
class Evaluator(object):
"""Base class for alarm rule evaluator plugins."""
def __init__(self, conf):
self.conf = conf
self.notifier = queue.AlarmNotifier(self.conf)
self.storage_conn = None
self._ks_client = None
self._alarm_change_notifier = None
@property
def ks_client(self):
if self._ks_client is None:
self._ks_client = keystone_client.get_client(self.conf)
return self._ks_client
@property
def _storage_conn(self):
if not self.storage_conn:
self.storage_conn = storage.get_connection_from_config(self.conf)
return self.storage_conn
@property
def alarm_change_notifier(self):
if not self._alarm_change_notifier:
transport = messaging.get_transport(self.conf)
self._alarm_change_notifier = messaging.get_notifier(
transport, publisher_id="aodh.evaluator")
return self._alarm_change_notifier
def _record_change(self, alarm, reason):
if not self.conf.record_history:
return
type = models.AlarmChange.STATE_TRANSITION
detail = json.dumps({'state': alarm.state,
'transition_reason': reason})
user_id, project_id = self.ks_client.user_id, self.ks_client.project_id
on_behalf_of = alarm.project_id
now = timeutils.utcnow()
payload = dict(event_id=uuidutils.generate_uuid(),
alarm_id=alarm.alarm_id,
type=type,
detail=detail,
user_id=user_id,
project_id=project_id,
on_behalf_of=on_behalf_of,
timestamp=now)
try:
self._storage_conn.record_alarm_change(payload)
except aodh.NotImplementedError:
pass
notification = "alarm.state_transition"
self.alarm_change_notifier.info({},
notification, payload)
def _refresh(self, alarm, state, reason, reason_data, always_record=False):
"""Refresh alarm state."""
try:
previous = alarm.state
alarm.state = state
alarm.state_reason = reason
if previous != state or always_record:
LOG.info('alarm %(id)s transitioning to %(state)s because '
'%(reason)s', {'id': alarm.alarm_id,
'state': state,
'reason': reason})
try:
self._storage_conn.update_alarm(alarm)
except storage.AlarmNotFound:
LOG.warning("Skip updating this alarm's state, the"
"alarm: %s has been deleted",
alarm.alarm_id)
else:
self._record_change(alarm, reason)
self.notifier.notify(alarm, previous, reason, reason_data)
elif alarm.repeat_actions:
self.notifier.notify(alarm, previous, reason, reason_data)
except Exception:
# retry will occur naturally on the next evaluation
# cycle (unless alarm state reverts in the meantime)
LOG.exception('alarm state update failed')
@classmethod
def within_time_constraint(cls, alarm):
"""Check whether the alarm is within at least one of its time limits.
If there are none, then the answer is yes.
"""
if not alarm.time_constraints:
return True
now_utc = timeutils.utcnow().replace(tzinfo=pytz.utc)
for tc in alarm.time_constraints:
tz = pytz.timezone(tc['timezone']) if tc['timezone'] else None
now_tz = now_utc.astimezone(tz) if tz else now_utc
start_cron = croniter.croniter(tc['start'], now_tz)
if cls._is_exact_match(start_cron, now_tz):
return True
# start_cron.cur has changed in _is_exact_match(),
# croniter cannot recover properly in some corner case.
start_cron = croniter.croniter(tc['start'], now_tz)
latest_start = start_cron.get_prev(datetime.datetime)
duration = datetime.timedelta(seconds=tc['duration'])
if latest_start <= now_tz <= latest_start + duration:
return True
return False
@staticmethod
def _is_exact_match(cron, ts):
"""Handle edge in case when both parameters are equal.
Handle edge case where if the timestamp is the same as the
cron point in time to the minute, croniter returns the previous
start, not the current. We can check this by first going one
step back and then one step forward and check if we are
at the original point in time.
"""
cron.get_prev()
diff = (ts - cron.get_next(datetime.datetime)).total_seconds()
return abs(diff) < 60 # minute precision
@abc.abstractmethod
def evaluate(self, alarm):
"""Interface definition.
        Evaluate an alarm.
        :param alarm: an instance of the Alarm model
"""
class AlarmEvaluationService(cotyledon.Service):
PARTITIONING_GROUP_NAME = "alarm_evaluator"
EVALUATOR_EXTENSIONS_NAMESPACE = "aodh.evaluator"
def __init__(self, worker_id, conf):
super(AlarmEvaluationService, self).__init__(worker_id)
self.conf = conf
ef = lambda: futures.ThreadPoolExecutor(max_workers=10)
self.periodic = periodics.PeriodicWorker.create(
[], executor_factory=ef)
self.evaluators = extension.ExtensionManager(
namespace=self.EVALUATOR_EXTENSIONS_NAMESPACE,
invoke_on_load=True,
invoke_args=(self.conf,)
)
self.storage_conn = storage.get_connection_from_config(self.conf)
self.partition_coordinator = coordination.PartitionCoordinator(
self.conf)
self.partition_coordinator.start()
self.partition_coordinator.join_group(self.PARTITIONING_GROUP_NAME)
# allow time for coordination if necessary
delay_start = self.partition_coordinator.is_active()
if self.evaluators:
@periodics.periodic(spacing=self.conf.evaluation_interval,
run_immediately=not delay_start)
def evaluate_alarms():
self._evaluate_assigned_alarms()
self.periodic.add(evaluate_alarms)
if self.partition_coordinator.is_active():
heartbeat_interval = min(self.conf.coordination.heartbeat,
self.conf.evaluation_interval / 4)
@periodics.periodic(spacing=heartbeat_interval,
run_immediately=True)
def heartbeat():
self.partition_coordinator.heartbeat()
self.periodic.add(heartbeat)
t = threading.Thread(target=self.periodic.start)
t.daemon = True
t.start()
def terminate(self):
self.periodic.stop()
self.partition_coordinator.stop()
self.periodic.wait()
def _evaluate_assigned_alarms(self):
try:
alarms = self._assigned_alarms()
LOG.info('initiating evaluation cycle on %d alarms',
len(alarms))
for alarm in alarms:
self._evaluate_alarm(alarm)
except Exception:
LOG.exception('alarm evaluation cycle failed')
def _evaluate_alarm(self, alarm):
"""Evaluate the alarms assigned to this evaluator."""
if alarm.type not in self.evaluators:
LOG.debug('skipping alarm %s: type unsupported', alarm.alarm_id)
return
LOG.debug('evaluating alarm %s', alarm.alarm_id)
try:
self.evaluators[alarm.type].obj.evaluate(alarm)
except Exception:
LOG.exception('Failed to evaluate alarm %s', alarm.alarm_id)
def _assigned_alarms(self):
# NOTE(r-mibu): The 'event' type alarms will be evaluated by the
        # event-driven alarm evaluator, so this periodic evaluator skips
        # those alarms.
all_alarms = self.storage_conn.get_alarms(enabled=True,
exclude=dict(type='event'))
all_alarms = list(all_alarms)
all_alarm_ids = [a.alarm_id for a in all_alarms]
selected = self.partition_coordinator.extract_my_subset(
self.PARTITIONING_GROUP_NAME, all_alarm_ids)
return list(filter(lambda a: a.alarm_id in selected, all_alarms))
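
To make within_time_constraint() above concrete, here is a hedged sketch of
one time-constraint entry in the shape that method reads (only the keys it
actually dereferences; all values are illustrative):

tc = {
    'start': '0 9 * * 1-5',      # croniter expression: 09:00 on weekdays
    'duration': 8 * 3600,        # evaluated for eight hours after 'start'
    'timezone': 'Europe/Paris',  # an empty string falls back to UTC
}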

View File

@@ -1,245 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from oslo_log import log
import six
import stevedore
from aodh import evaluator
from aodh.evaluator import threshold
from aodh.i18n import _
LOG = log.getLogger(__name__)
STATE_CHANGE = {evaluator.ALARM: 'outside their threshold.',
evaluator.OK: 'inside their threshold.',
evaluator.UNKNOWN: 'state evaluated to unknown.'}
class RuleTarget(object):
def __init__(self, rule, rule_evaluator, rule_name):
self.rule = rule
self.type = rule.get('type')
self.rule_evaluator = rule_evaluator
self.rule_name = rule_name
self.state = None
self.trending_state = None
self.statistics = None
self.evaluated = False
def evaluate(self):
# Evaluate a sub-rule of composite rule
if not self.evaluated:
LOG.debug('Evaluating %(type)s rule: %(rule)s',
{'type': self.type, 'rule': self.rule})
try:
self.state, self.trending_state, self.statistics, __, __ = \
self.rule_evaluator.evaluate_rule(self.rule)
except threshold.InsufficientDataError as e:
self.state = evaluator.UNKNOWN
self.trending_state = None
self.statistics = e.statistics
self.evaluated = True
class RuleEvaluationBase(object):
def __init__(self, rule_target):
self.rule_target = rule_target
def __str__(self):
return self.rule_target.rule_name
class OkEvaluation(RuleEvaluationBase):
def __bool__(self):
self.rule_target.evaluate()
return self.rule_target.state == evaluator.OK
__nonzero__ = __bool__
class AlarmEvaluation(RuleEvaluationBase):
def __bool__(self):
self.rule_target.evaluate()
return self.rule_target.state == evaluator.ALARM
__nonzero__ = __bool__
class AndOp(object):
def __init__(self, rule_targets):
self.rule_targets = rule_targets
def __bool__(self):
return all(self.rule_targets)
def __str__(self):
return '(' + ' and '.join(six.moves.map(str, self.rule_targets)) + ')'
__nonzero__ = __bool__
class OrOp(object):
def __init__(self, rule_targets):
self.rule_targets = rule_targets
def __bool__(self):
return any(self.rule_targets)
def __str__(self):
return '(' + ' or '.join(six.moves.map(str, self.rule_targets)) + ')'
__nonzero__ = __bool__
class CompositeEvaluator(evaluator.Evaluator):
def __init__(self, conf):
super(CompositeEvaluator, self).__init__(conf)
self.conf = conf
self._threshold_evaluators = None
self.rule_targets = []
self.rule_name_prefix = 'rule'
self.rule_num = 0
@property
def threshold_evaluators(self):
if not self._threshold_evaluators:
threshold_types = ('threshold', 'gnocchi_resources_threshold',
'gnocchi_aggregation_by_metrics_threshold',
'gnocchi_aggregation_by_resources_threshold')
self._threshold_evaluators = stevedore.NamedExtensionManager(
'aodh.evaluator', threshold_types, invoke_on_load=True,
invoke_args=(self.conf,))
return self._threshold_evaluators
def _parse_composite_rule(self, alarm_rule):
"""Parse the composite rule.
        The composite rule is assembled from threshold sub-rules combined
        with 'and'/'or'; the form can be nested, e.g.:
{
"and": [threshold_rule0, threshold_rule1,
{'or': [threshold_rule2, threshold_rule3,
threshold_rule4, threshold_rule5]}]
}
"""
if (isinstance(alarm_rule, dict) and len(alarm_rule) == 1
and list(alarm_rule)[0] in ('and', 'or')):
and_or_key = list(alarm_rule)[0]
if and_or_key == 'and':
rules = (self._parse_composite_rule(r) for r in
alarm_rule['and'])
rules_alarm, rules_ok = zip(*rules)
return AndOp(rules_alarm), OrOp(rules_ok)
else:
rules = (self._parse_composite_rule(r) for r in
alarm_rule['or'])
rules_alarm, rules_ok = zip(*rules)
return OrOp(rules_alarm), AndOp(rules_ok)
else:
rule_evaluator = self.threshold_evaluators[alarm_rule['type']].obj
self.rule_num += 1
name = self.rule_name_prefix + str(self.rule_num)
rule = RuleTarget(alarm_rule, rule_evaluator, name)
self.rule_targets.append(rule)
return AlarmEvaluation(rule), OkEvaluation(rule)
def _reason(self, alarm, new_state, rule_target_alarm):
transition = alarm.state != new_state
reason_data = {
'type': 'composite',
'composition_form': str(rule_target_alarm)}
root_cause_rules = {}
for rule in self.rule_targets:
if rule.state == new_state:
root_cause_rules.update({rule.rule_name: rule.rule})
reason_data.update(causative_rules=root_cause_rules)
params = {'state': new_state,
'expression': str(rule_target_alarm),
'rules': ', '.join(sorted(root_cause_rules)),
'description': STATE_CHANGE[new_state]}
if transition:
reason = (_('Composite rule alarm with composition form: '
'%(expression)s transition to %(state)s, due to '
'rules: %(rules)s %(description)s') % params)
else:
reason = (_('Composite rule alarm with composition form: '
'%(expression)s remaining as %(state)s, due to '
'rules: %(rules)s %(description)s') % params)
return reason, reason_data
def _evaluate_sufficient(self, alarm, rule_target_alarm, rule_target_ok):
        # Some of the evaluated rules are in unknown or trending states.
for rule in self.rule_targets:
if rule.trending_state is not None:
if alarm.state == evaluator.UNKNOWN:
rule.state = rule.trending_state
elif rule.trending_state == evaluator.ALARM:
rule.state = evaluator.OK
elif rule.trending_state == evaluator.OK:
rule.state = evaluator.ALARM
else:
rule.state = alarm.state
alarm_triggered = bool(rule_target_alarm)
if alarm_triggered:
reason, reason_data = self._reason(alarm, evaluator.ALARM,
rule_target_alarm)
self._refresh(alarm, evaluator.ALARM, reason, reason_data)
return True
ok_result = bool(rule_target_ok)
if ok_result:
reason, reason_data = self._reason(alarm, evaluator.OK,
rule_target_alarm)
self._refresh(alarm, evaluator.OK, reason, reason_data)
return True
return False
def evaluate(self, alarm):
if not self.within_time_constraint(alarm):
LOG.debug('Attempted to evaluate alarm %s, but it is not '
'within its time constraint.', alarm.alarm_id)
return
LOG.debug("Evaluating composite rule alarm %s ...", alarm.alarm_id)
self.rule_targets = []
self.rule_num = 0
rule_target_alarm, rule_target_ok = self._parse_composite_rule(
alarm.rule)
sufficient = self._evaluate_sufficient(alarm, rule_target_alarm,
rule_target_ok)
if not sufficient:
for rule in self.rule_targets:
rule.evaluate()
sufficient = self._evaluate_sufficient(alarm, rule_target_alarm,
rule_target_ok)
if not sufficient:
            # The remaining unknown situations look like these:
# 1. 'unknown' and 'alarm'
# 2. 'unknown' or 'ok'
reason, reason_data = self._reason(alarm, evaluator.UNKNOWN,
rule_target_alarm)
if alarm.state != evaluator.UNKNOWN:
self._refresh(alarm, evaluator.UNKNOWN, reason, reason_data)
else:
LOG.debug(reason)
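
As a worked illustration of the nested form _parse_composite_rule() accepts,
here is a hedged sketch of a composite rule; the sub-rule keys follow the
threshold evaluators above, and every concrete value is illustrative:

cpu_rule = {'type': 'gnocchi_resources_threshold',
            'metric': 'cpu_util',
            'resource_id': 'ILLUSTRATIVE-RESOURCE-UUID',
            'aggregation_method': 'mean',
            'granularity': 300,
            'evaluation_periods': 3,
            'comparison_operator': 'gt',
            'threshold': 80.0}
mem_rule = dict(cpu_rule, metric='memory.usage', threshold=1024.0)
# Fires when CPU is high, or when both CPU and memory are high.
composite = {'or': [cpu_rule, {'and': [cpu_rule, mem_rule]}]}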

View File

@@ -1,275 +0,0 @@
#
# Copyright 2015 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import operator
from oslo_config import cfg
from oslo_log import log
from oslo_serialization import jsonutils
from oslo_utils import fnmatch
from oslo_utils import timeutils
import six
from aodh import evaluator
from aodh.i18n import _
LOG = log.getLogger(__name__)
COMPARATORS = {
'gt': operator.gt,
'lt': operator.lt,
'ge': operator.ge,
'le': operator.le,
'eq': operator.eq,
'ne': operator.ne,
}
OPTS = [
cfg.IntOpt('event_alarm_cache_ttl',
default=60,
help='TTL of event alarm caches, in seconds. '
'Set to 0 to disable caching.'),
]
def _sanitize_trait_value(value, trait_type):
if trait_type in (2, 'integer'):
return int(value)
elif trait_type in (3, 'float'):
return float(value)
elif trait_type in (4, 'datetime'):
return timeutils.normalize_time(timeutils.parse_isotime(value))
else:
return six.text_type(value)
class InvalidEvent(Exception):
"""Error raised when the received event is missing mandatory fields."""
class Event(object):
"""Wrapped event object to hold converted values for this evaluator."""
TRAIT_FIELD = 0
TRAIT_TYPE = 1
TRAIT_VALUE = 2
def __init__(self, event):
self.obj = event
self._validate()
self.id = event.get('message_id')
self._parse_traits()
def _validate(self):
"""Validate received event has mandatory parameters."""
if not self.obj:
LOG.error('Received invalid event (empty or None)')
raise InvalidEvent()
if not self.obj.get('event_type'):
LOG.error('Failed to extract event_type from event = %s',
self.obj)
raise InvalidEvent()
if not self.obj.get('message_id'):
LOG.error('Failed to extract message_id from event = %s',
self.obj)
raise InvalidEvent()
def _parse_traits(self):
self.traits = {}
self.project = ''
for t in self.obj.get('traits', []):
k = t[self.TRAIT_FIELD]
v = _sanitize_trait_value(t[self.TRAIT_VALUE], t[self.TRAIT_TYPE])
self.traits[k] = v
if k in ('tenant_id', 'project_id'):
self.project = v
def get_value(self, field):
if field.startswith('traits.'):
key = field.split('.', 1)[-1]
return self.traits.get(key)
v = self.obj
for f in field.split('.'):
if hasattr(v, 'get'):
v = v.get(f)
else:
return None
return v
class Alarm(object):
"""Wrapped alarm object to hold converted values for this evaluator."""
TRAIT_TYPES = {
'none': 0,
'string': 1,
'integer': 2,
'float': 3,
'datetime': 4,
}
def __init__(self, alarm):
self.obj = alarm
self.id = alarm.alarm_id
self._parse_query()
def _parse_query(self):
self.query = []
for q in self.obj.rule.get('query', []):
if not q['field'].startswith('traits.'):
self.query.append(q)
continue
type_num = self.TRAIT_TYPES[q.get('type') or 'string']
field = q['field']
value = _sanitize_trait_value(q.get('value'), type_num)
op = COMPARATORS[q.get('op', 'eq')]
self.query.append({'field': field, 'value': value, 'op': op})
def fired_and_no_repeat(self):
return (not self.obj.repeat_actions and
self.obj.state == evaluator.ALARM)
def event_type_to_watch(self, event_type):
return fnmatch.fnmatch(event_type, self.obj.rule['event_type'])
class EventAlarmEvaluator(evaluator.Evaluator):
def __init__(self, conf):
super(EventAlarmEvaluator, self).__init__(conf)
self.caches = {}
def evaluate_events(self, events):
"""Evaluate the events by referring related alarms."""
if not isinstance(events, list):
events = [events]
LOG.debug('Starting event alarm evaluation: #events = %d',
len(events))
for e in events:
LOG.debug('Evaluating event: event = %s', e)
try:
event = Event(e)
except InvalidEvent:
LOG.warning('Event <%s> is invalid, aborting evaluation '
'for it.', e)
continue
for id, alarm in six.iteritems(
self._get_project_alarms(event.project)):
try:
self._evaluate_alarm(alarm, event)
except Exception:
LOG.exception('Failed to evaluate alarm (id=%(a)s) '
'triggered by event = %(e)s.',
{'a': id, 'e': e})
LOG.debug('Finished event alarm evaluation.')
def _get_project_alarms(self, project):
if self.conf.event_alarm_cache_ttl and project in self.caches:
if timeutils.is_older_than(self.caches[project]['updated'],
self.conf.event_alarm_cache_ttl):
del self.caches[project]
else:
return self.caches[project]['alarms']
        # TODO(r-mibu): Implement "changes-since" in the storage API and make
        # this function update only alarms changed since the last access.
alarms = {a.alarm_id: Alarm(a) for a in
self._storage_conn.get_alarms(enabled=True,
alarm_type='event',
project=project)}
if self.conf.event_alarm_cache_ttl:
self.caches[project] = {
'alarms': alarms,
'updated': timeutils.utcnow()
}
return alarms
def _evaluate_alarm(self, alarm, event):
"""Evaluate the alarm by referring the received event.
This function compares each condition of the alarm on the assumption
that all conditions are combined by AND operator.
When the received event met conditions defined in alarm 'event_type'
and 'query', the alarm will be fired and updated to state='alarm'
(alarmed).
Note: by this evaluator, the alarm won't be changed to state='ok'
nor state='insufficient data'.
"""
LOG.debug('Evaluating alarm (id=%(a)s) triggered by event '
'(message_id=%(e)s).', {'a': alarm.id, 'e': event.id})
if alarm.fired_and_no_repeat():
            LOG.debug('Skip evaluation of the alarm id=%s which has already '
                      'fired.', alarm.id)
return
if not alarm.event_type_to_watch(event.obj['event_type']):
LOG.debug('Aborting evaluation of the alarm (id=%s) since '
'event_type is not matched.', alarm.id)
return
def _compare(condition):
v = event.get_value(condition['field'])
LOG.debug('Comparing value=%(v)s against condition=%(c)s .',
{'v': v, 'c': condition})
return condition['op'](v, condition['value'])
for condition in alarm.query:
if not _compare(condition):
LOG.debug('Aborting evaluation of the alarm due to '
'unmet condition=%s .', condition)
return
self._fire_alarm(alarm, event)
def _fire_alarm(self, alarm, event):
"""Update alarm state and fire alarm via alarm notifier."""
state = evaluator.ALARM
reason = (_('Event <id=%(id)s,event_type=%(event_type)s> hits the '
'query <query=%(alarm_query)s>.') %
{'id': event.id,
'event_type': event.get_value('event_type'),
'alarm_query': jsonutils.dumps(alarm.obj.rule['query'],
sort_keys=True)})
reason_data = {'type': 'event', 'event': event.obj}
always_record = alarm.obj.repeat_actions
self._refresh(alarm.obj, state, reason, reason_data, always_record)
def _refresh(self, alarm, state, reason, reason_data, always_record):
super(EventAlarmEvaluator, self)._refresh(alarm, state,
reason, reason_data,
always_record)
project = alarm.project_id
if self.conf.event_alarm_cache_ttl and project in self.caches:
self.caches[project]['alarms'][alarm.alarm_id].obj.state = state
    # NOTE(r-mibu): This method won't be used, but we have to define it here
    # to override the abstract method in the super class.
# TODO(r-mibu): Change the base (common) class design for evaluators.
def evaluate(self, alarm):
pass
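
A hedged sketch of the event payload shape that Event() above validates and
parses; trait triples are (field, type, value) with the numeric type codes
from Alarm.TRAIT_TYPES, and every value is illustrative:

event_payload = {
    'message_id': 'illustrative-message-id',     # mandatory
    'event_type': 'compute.instance.update',     # mandatory
    'traits': [
        ['project_id', 1, 'illustrative-project'],   # string; sets .project
        ['state', 1, 'error'],
        ['launched_at', 4, '2017-09-12T15:00:00'],   # datetime trait
    ],
}
evt = Event(event_payload)   # raises InvalidEvent if a mandatory key is gone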

View File

@@ -1,159 +0,0 @@
#
# Copyright 2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from gnocchiclient import client
from gnocchiclient import exceptions
from oslo_log import log
from oslo_serialization import jsonutils
from aodh.evaluator import threshold
from aodh import keystone_client
LOG = log.getLogger(__name__)
# The list of points that the Gnocchi API returns is composed
# of tuples of (timestamp, granularity, value)
GRANULARITY = 1
VALUE = 2
class GnocchiBase(threshold.ThresholdEvaluator):
def __init__(self, conf):
super(GnocchiBase, self).__init__(conf)
self._gnocchi_client = client.Client(
'1', keystone_client.get_session(conf),
interface=conf.service_credentials.interface,
region_name=conf.service_credentials.region_name)
@staticmethod
def _sanitize(rule, statistics):
"""Return the datapoints that correspond to the alarm granularity"""
# TODO(sileht): if there's no direct match, but there is an archive
# policy with granularity that's an even divisor or the period,
# we could potentially do a mean-of-means (or max-of-maxes or whatever,
# but not a stddev-of-stddevs).
# TODO(sileht): support alarm['exclude_outliers']
LOG.debug('sanitize stats %s', statistics)
statistics = [stats[VALUE] for stats in statistics
if stats[GRANULARITY] == rule['granularity']]
if not statistics:
raise threshold.InsufficientDataError(
"No datapoint for granularity %s" % rule['granularity'], [])
statistics = statistics[-rule['evaluation_periods']:]
LOG.debug('pruned statistics to %d', len(statistics))
return statistics
class GnocchiResourceThresholdEvaluator(GnocchiBase):
def _statistics(self, rule, start, end):
try:
return self._gnocchi_client.metric.get_measures(
metric=rule['metric'],
start=start, stop=end,
resource_id=rule['resource_id'],
aggregation=rule['aggregation_method'])
except exceptions.MetricNotFound:
raise threshold.InsufficientDataError(
                'metric %s for resource %s does not exist' %
(rule['metric'], rule['resource_id']), [])
except exceptions.ResourceNotFound:
raise threshold.InsufficientDataError(
                'resource %s does not exist' % rule['resource_id'], [])
except exceptions.NotFound:
            # TODO(sileht): gnocchiclient should raise an explicit
# exception for AggregationNotFound, this API endpoint
# can only raise 3 different 404, so we are safe to
# assume this is an AggregationNotFound for now.
raise threshold.InsufficientDataError(
'aggregation %s does not exist for '
'metric %s of resource %s' % (rule['aggregation_method'],
rule['metric'],
rule['resource_id']),
[])
except Exception as e:
msg = 'alarm statistics retrieval failed: %s' % e
LOG.warning(msg)
raise threshold.InsufficientDataError(msg, [])
class GnocchiAggregationMetricsThresholdEvaluator(GnocchiBase):
def _statistics(self, rule, start, end):
try:
            # FIXME(sileht): if a Heat autoscaling stack decides to delete
            # an instance, the Gnocchi metrics associated with that instance
            # are no longer updated, and when the alarm asks for the
            # aggregation, Gnocchi raises a 'No overlap' exception.
            # So temporarily set 'needed_overlap' to 0 to disable the
            # Gnocchi checks for missing points. For more detail see:
# https://bugs.launchpad.net/gnocchi/+bug/1479429
return self._gnocchi_client.metric.aggregation(
metrics=rule['metrics'],
start=start, stop=end,
aggregation=rule['aggregation_method'],
needed_overlap=0)
except exceptions.MetricNotFound:
raise threshold.InsufficientDataError(
                'At least one of the metrics in %s does not exist' %
rule['metrics'], [])
except exceptions.NotFound:
            # TODO(sileht): gnocchiclient should raise an explicit
# exception for AggregationNotFound, this API endpoint
# can only raise 3 different 404, so we are safe to
# assume this is an AggregationNotFound for now.
raise threshold.InsufficientDataError(
'aggregation %s does not exist for at least one '
'metrics in %s' % (rule['aggregation_method'],
rule['metrics']), [])
except Exception as e:
msg = 'alarm statistics retrieval failed: %s' % e
LOG.warning(msg)
raise threshold.InsufficientDataError(msg, [])
class GnocchiAggregationResourcesThresholdEvaluator(GnocchiBase):
def _statistics(self, rule, start, end):
        # FIXME(sileht): if a Heat autoscaling stack decides to delete an
        # instance, the Gnocchi metrics associated with that instance are
        # no longer updated, and when the alarm asks for the aggregation,
        # Gnocchi raises a 'No overlap' exception.
        # So temporarily set 'needed_overlap' to 0 to disable the Gnocchi
        # checks for missing points. For more detail see:
# https://bugs.launchpad.net/gnocchi/+bug/1479429
try:
return self._gnocchi_client.metric.aggregation(
metrics=rule['metric'],
query=jsonutils.loads(rule['query']),
resource_type=rule["resource_type"],
start=start, stop=end,
aggregation=rule['aggregation_method'],
needed_overlap=0,
)
except exceptions.MetricNotFound:
raise threshold.InsufficientDataError(
                'metric %s does not exist' % rule['metric'], [])
except exceptions.NotFound:
            # TODO(sileht): gnocchiclient should raise an explicit
# exception for AggregationNotFound, this API endpoint
# can only raise 3 different 404, so we are safe to
# assume this is an AggregationNotFound for now.
raise threshold.InsufficientDataError(
'aggregation %s does not exist for at least one '
'metric of the query' % rule['aggregation_method'], [])
except Exception as e:
msg = 'alarm statistics retrieval failed: %s' % e
LOG.warning(msg)
raise threshold.InsufficientDataError(msg, [])
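
To see what _sanitize() above does with the (timestamp, granularity, value)
triples, here is a hedged sketch with made-up measures: only points matching
the alarm's granularity survive, then the newest evaluation_periods points
are kept:

measures = [
    ('2017-09-12T15:00:00', 300.0, 0.41),
    ('2017-09-12T15:05:00', 300.0, 0.77),
    ('2017-09-12T15:05:00', 60.0, 0.52),    # wrong granularity: dropped
]
rule = {'granularity': 300.0, 'evaluation_periods': 2}
values = [v for _, g, v in measures if g == rule['granularity']]
values = values[-rule['evaluation_periods']:]   # -> [0.41, 0.77]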

View File

@@ -1,247 +0,0 @@
#
# Copyright 2013-2015 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import datetime
import operator
import six
from ceilometerclient import client as ceiloclient
from ceilometerclient import exc as ceiloexc
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
from aodh import evaluator
from aodh.evaluator import utils
from aodh.i18n import _
from aodh import keystone_client
LOG = log.getLogger(__name__)
COMPARATORS = {
'gt': operator.gt,
'lt': operator.lt,
'ge': operator.ge,
'le': operator.le,
'eq': operator.eq,
'ne': operator.ne,
}
OPTS = [
cfg.IntOpt('additional_ingestion_lag',
min=0,
default=0,
help='The number of seconds to extend the evaluation windows '
'to compensate the reporting/ingestion lag.')
]
class InsufficientDataError(Exception):
def __init__(self, reason, statistics):
self.reason = reason
self.statistics = statistics
super(InsufficientDataError, self).__init__(reason)
class ThresholdEvaluator(evaluator.Evaluator):
    # the sliding evaluation window is extended to allow for the
    # reporting/ingestion lag; it can be widened by
    # 'additional_ingestion_lag' seconds if needed.
look_back = 1
def __init__(self, conf):
super(ThresholdEvaluator, self).__init__(conf)
self._cm_client = None
@property
def cm_client(self):
if self._cm_client is None:
auth_config = self.conf.service_credentials
self._cm_client = ceiloclient.get_client(
version=2,
session=keystone_client.get_session(self.conf),
# ceiloclient adapter options
region_name=auth_config.region_name,
interface=auth_config.interface,
)
return self._cm_client
def _bound_duration(self, rule):
"""Bound the duration of the statistics query."""
now = timeutils.utcnow()
# when exclusion of weak datapoints is enabled, we extend
# the look-back period so as to allow a clearer sample count
# trend to be established
look_back = (self.look_back if not rule.get('exclude_outliers')
else rule['evaluation_periods'])
window = ((rule.get('period', None) or rule['granularity'])
* (rule['evaluation_periods'] + look_back) +
self.conf.additional_ingestion_lag)
start = now - datetime.timedelta(seconds=window)
LOG.debug('query stats from %(start)s to '
'%(now)s', {'start': start, 'now': now})
return start.isoformat(), now.isoformat()
@staticmethod
def _sanitize(rule, statistics):
"""Sanitize statistics."""
LOG.debug('sanitize stats %s', statistics)
if rule.get('exclude_outliers'):
key = operator.attrgetter('count')
mean = utils.mean(statistics, key)
stddev = utils.stddev(statistics, key, mean)
lower = mean - 2 * stddev
upper = mean + 2 * stddev
inliers, outliers = utils.anomalies(statistics, key, lower, upper)
if outliers:
LOG.debug('excluded weak datapoints with sample counts %s',
[s.count for s in outliers])
statistics = inliers
else:
LOG.debug('no excluded weak datapoints')
        # in practice statistics are always sorted by period start, though
        # this is not strictly required by the API
statistics = statistics[-rule['evaluation_periods']:]
result_statistics = [getattr(stat, rule['statistic'])
for stat in statistics]
LOG.debug('pruned statistics to %d', len(statistics))
return result_statistics
def _statistics(self, rule, start, end):
"""Retrieve statistics over the current window."""
after = dict(field='timestamp', op='ge', value=start)
before = dict(field='timestamp', op='le', value=end)
query = copy.copy(rule['query'])
query.extend([before, after])
LOG.debug('stats query %s', query)
try:
return self.cm_client.statistics.list(
meter_name=rule['meter_name'], q=query,
period=rule['period'])
except Exception as e:
if isinstance(e, ceiloexc.HTTPException) and e.code == 410:
LOG.warning("This telemetry installation is not configured to "
"support alarm of type 'threshold', they should "
"be disabled or removed.")
else:
LOG.exception(_('alarm stats retrieval failed'))
return []
@staticmethod
def _reason_data(disposition, count, most_recent):
"""Create a reason data dictionary for this evaluator type."""
return {'type': 'threshold', 'disposition': disposition,
'count': count, 'most_recent': most_recent}
@classmethod
def _reason(cls, alarm, statistics, state, count):
"""Fabricate reason string."""
if state == evaluator.OK:
disposition = 'inside'
count = len(statistics) - count
else:
disposition = 'outside'
last = statistics[-1] if statistics else None
transition = alarm.state != state
reason_data = cls._reason_data(disposition, count, last)
if transition:
return ('Transition to %(state)s due to %(count)d samples'
' %(disposition)s threshold, most recent:'
' %(most_recent)s' % dict(reason_data, state=state),
reason_data)
return ('Remaining as %(state)s due to %(count)d samples'
' %(disposition)s threshold, most recent: %(most_recent)s'
% dict(reason_data, state=state), reason_data)
def evaluate_rule(self, alarm_rule):
"""Evaluate alarm rule.
:returns: state, trending state and statistics.
"""
start, end = self._bound_duration(alarm_rule)
statistics = self._statistics(alarm_rule, start, end)
statistics = self._sanitize(alarm_rule, statistics)
sufficient = len(statistics) >= alarm_rule['evaluation_periods']
if not sufficient:
raise InsufficientDataError(
'%d datapoints are unknown' % alarm_rule['evaluation_periods'],
statistics)
def _compare(value):
op = COMPARATORS[alarm_rule['comparison_operator']]
limit = alarm_rule['threshold']
LOG.debug('comparing value %(value)s against threshold'
' %(limit)s', {'value': value, 'limit': limit})
return op(value, limit)
compared = list(six.moves.map(_compare, statistics))
distilled = all(compared)
unequivocal = distilled or not any(compared)
number_outside = len([c for c in compared if c])
if unequivocal:
state = evaluator.ALARM if distilled else evaluator.OK
return state, None, statistics, number_outside, None
else:
trending_state = evaluator.ALARM if compared[-1] else evaluator.OK
return None, trending_state, statistics, number_outside, None
def _transition_alarm(self, alarm, state, trending_state, statistics,
outside_count, unknown_reason):
unknown = alarm.state == evaluator.UNKNOWN
continuous = alarm.repeat_actions
if trending_state:
if unknown or continuous:
state = trending_state if unknown else alarm.state
reason, reason_data = self._reason(alarm, statistics, state,
outside_count)
self._refresh(alarm, state, reason, reason_data)
return
if state == evaluator.UNKNOWN and not unknown:
            LOG.warning('Expecting %(expected)d datapoints but only got '
                        '%(actual)d'
% {'expected': alarm.rule['evaluation_periods'],
'actual': len(statistics)})
            # Reason is not the same as the log message: we keep the old
            # format for consistency because third-party software may
            # depend on it.
last = None if not statistics else statistics[-1]
reason_data = self._reason_data('unknown',
alarm.rule['evaluation_periods'],
last)
self._refresh(alarm, state, unknown_reason, reason_data)
elif state and (alarm.state != state or continuous):
reason, reason_data = self._reason(alarm, statistics, state,
outside_count)
self._refresh(alarm, state, reason, reason_data)
def evaluate(self, alarm):
if not self.within_time_constraint(alarm):
LOG.debug('Attempted to evaluate alarm %s, but it is not '
'within its time constraint.', alarm.alarm_id)
return
try:
evaluation = self.evaluate_rule(alarm.rule)
except InsufficientDataError as e:
evaluation = (evaluator.UNKNOWN, None, e.statistics, 0,
e.reason)
self._transition_alarm(alarm, *evaluation)
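
A worked example of the query window computed by _bound_duration() above,
under illustrative settings (60-second period, three evaluation periods, the
default look_back of 1, and no extra ingestion lag):

period = 60
evaluation_periods = 3
look_back = 1                      # class default when outliers are kept
additional_ingestion_lag = 0
window = period * (evaluation_periods + look_back) + additional_ingestion_lag
# window == 240: the evaluator queries the last four minutes of samples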

View File

@@ -1,58 +0,0 @@
#
# Copyright 2014 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import math
def mean(s, key=lambda x: x):
"""Calculate the mean of a numeric list."""
count = float(len(s))
if count:
return math.fsum(map(key, s)) / count
return 0.0
def deltas(s, key, m=None):
"""Calculate the squared distances from mean for a numeric list."""
m = m or mean(s, key)
return [(key(i) - m) ** 2 for i in s]
def variance(s, key, m=None):
"""Calculate the variance of a numeric list."""
return mean(deltas(s, key, m))
def stddev(s, key, m=None):
"""Calculate the standard deviation of a numeric list."""
return math.sqrt(variance(s, key, m))
def outside(s, key, lower=0.0, upper=0.0):
"""Determine if value falls outside upper and lower bounds."""
v = key(s)
return v < lower or v > upper
def anomalies(s, key, lower=0.0, upper=0.0):
"""Separate anomalous data points from the in-liers."""
inliers = []
outliers = []
for i in s:
if outside(i, key, lower, upper):
outliers.append(i)
else:
inliers.append(i)
return inliers, outliers
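
A minimal sketch exercising the helpers above on plain numbers (identity
key); the bounds mirror the two-sigma cut in ThresholdEvaluator._sanitize(),
and the sample values are illustrative:

samples = [10.0] * 9 + [100.0]
key = lambda x: x
m = mean(samples, key)                  # 19.0
sd = stddev(samples, key, m)            # 27.0
inliers, outliers = anomalies(samples, key,
                              lower=m - 2 * sd, upper=m + 2 * sd)
# outliers == [100.0]; the inliers would feed the threshold comparison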

View File

@@ -1,70 +0,0 @@
#
# Copyright 2015 NEC Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import cotyledon
from oslo_config import cfg
from oslo_log import log
import oslo_messaging
from aodh.evaluator import event
from aodh import messaging
from aodh import storage
LOG = log.getLogger(__name__)
OPTS = [
cfg.StrOpt('event_alarm_topic',
default='alarm.all',
deprecated_group='DEFAULT',
help='The topic that aodh uses for event alarm evaluation.'),
cfg.IntOpt('batch_size',
default=1,
help='Number of notification messages to wait before '
'dispatching them.'),
cfg.IntOpt('batch_timeout',
help='Number of seconds to wait before dispatching samples '
'when batch_size is not reached (None means indefinitely).'),
]
class EventAlarmEndpoint(object):
def __init__(self, evaluator):
self.evaluator = evaluator
def sample(self, notifications):
LOG.debug('Received %s messages in batch.', len(notifications))
for notification in notifications:
self.evaluator.evaluate_events(notification['payload'])
class EventAlarmEvaluationService(cotyledon.Service):
def __init__(self, worker_id, conf):
super(EventAlarmEvaluationService, self).__init__(worker_id)
self.conf = conf
self.storage_conn = storage.get_connection_from_config(self.conf)
self.evaluator = event.EventAlarmEvaluator(self.conf)
self.listener = messaging.get_batch_notification_listener(
messaging.get_transport(self.conf),
[oslo_messaging.Target(
topic=self.conf.listener.event_alarm_topic)],
[EventAlarmEndpoint(self.evaluator)], False,
self.conf.listener.batch_size,
self.conf.listener.batch_timeout)
self.listener.start()
def terminate(self):
self.listener.stop()
self.listener.wait()
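
A hedged aodh.conf sketch for the [listener] options registered above (the
values are illustrative; leaving batch_timeout unset keeps the default of
waiting indefinitely for a full batch):

[listener]
event_alarm_topic = alarm.all
batch_size = 10
batch_timeout = 5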

View File

@@ -1,36 +0,0 @@
# Copyright 2014 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""oslo.i18n integration module.
See https://docs.openstack.org/oslo.i18n/latest/user/usage.html
"""
import oslo_i18n
DOMAIN = 'aodh'
_translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)
# The primary translation function using the well-known name "_"
_ = _translators.primary
def translate(value, user_locale):
return oslo_i18n.translate(value, user_locale)
def get_available_languages():
return oslo_i18n.get_available_languages(DOMAIN)
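
Typical use of the translator factory above, mirroring the imports seen
elsewhere in this tree; the alarm_id value is illustrative:

from aodh.i18n import _

alarm_id = 'illustrative-alarm-id'
message = _('Alarm %s not found') % alarm_id   # marked for translation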

View File

@@ -1,115 +0,0 @@
#
# Copyright 2015 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from keystoneauth1 import exceptions as ka_exception
from keystoneauth1.identity.generic import password
from keystoneauth1 import loading as ka_loading
from keystoneauth1 import session
from keystoneclient.v3 import client as ks_client_v3
from oslo_config import cfg
CFG_GROUP = "service_credentials"
def get_session(conf):
"""Get an aodh service credentials auth session."""
auth_plugin = ka_loading.load_auth_from_conf_options(conf, CFG_GROUP)
return ka_loading.load_session_from_conf_options(
conf, CFG_GROUP, auth=auth_plugin
)
def get_client(conf):
"""Return a client for keystone v3 endpoint."""
sess = get_session(conf)
return ks_client_v3.Client(session=sess)
def get_trusted_client(conf, trust_id):
# Ideally we would use load_session_from_conf_options, but we can't do that
# *and* specify a trust, so let's create the object manually.
auth_plugin = password.Password(
username=conf[CFG_GROUP].username,
password=conf[CFG_GROUP].password,
auth_url=conf[CFG_GROUP].auth_url,
user_domain_id=conf[CFG_GROUP].user_domain_id,
trust_id=trust_id)
sess = session.Session(auth=auth_plugin)
return ks_client_v3.Client(session=sess)
def get_auth_token(client):
return client.session.auth.get_access(client.session).auth_token
def get_client_on_behalf_user(auth_plugin):
"""Return a client for keystone v3 endpoint."""
sess = session.Session(auth=auth_plugin)
return ks_client_v3.Client(session=sess)
def create_trust_id(conf, trustor_user_id, trustor_project_id, roles,
auth_plugin):
"""Create a new trust using the aodh service user."""
admin_client = get_client(conf)
trustee_user_id = admin_client.session.get_user_id()
client = get_client_on_behalf_user(auth_plugin)
trust = client.trusts.create(trustor_user=trustor_user_id,
trustee_user=trustee_user_id,
project=trustor_project_id,
impersonation=True,
role_names=roles)
return trust.id
def delete_trust_id(trust_id, auth_plugin):
"""Delete a trust previously setup for the aodh user."""
client = get_client_on_behalf_user(auth_plugin)
try:
client.trusts.delete(trust_id)
except ka_exception.NotFound:
pass
OPTS = [
cfg.StrOpt('region-name',
default=os.environ.get('OS_REGION_NAME'),
deprecated_name="os-region-name",
help='Region name to use for OpenStack service endpoints.'),
cfg.StrOpt('interface',
default=os.environ.get(
'OS_INTERFACE', os.environ.get('OS_ENDPOINT_TYPE',
'public')),
deprecated_name="os-endpoint-type",
choices=('public', 'internal', 'admin', 'auth', 'publicURL',
'internalURL', 'adminURL'),
help='Type of endpoint in Identity service catalog to use for '
'communication with OpenStack services.'),
]
def register_keystoneauth_opts(conf):
ka_loading.register_auth_conf_options(conf, CFG_GROUP)
ka_loading.register_session_conf_options(
conf, CFG_GROUP,
deprecated_opts={'cacert': [
cfg.DeprecatedOpt('os-cacert', group=CFG_GROUP),
cfg.DeprecatedOpt('os-cacert', group="DEFAULT")]
})
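
A hedged sketch of the trust lifecycle built from the helpers above; the
function parameters (a loaded conf, a keystoneauth1 plugin for the user
being acted for, and the trustor IDs) are assumed, and the role name is
illustrative:

def rotate_trust(conf, user_auth, trustor_user_id, trustor_project_id):
    # Create a trust delegating the user's role to the aodh service user,
    # act through it, then clean up (NotFound on delete is swallowed).
    trust_id = create_trust_id(conf, trustor_user_id, trustor_project_id,
                               roles=['admin'], auth_plugin=user_auth)
    client = get_trusted_client(conf, trust_id)
    token = get_auth_token(client)
    delete_trust_id(trust_id, user_auth)
    return token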

View File

@@ -1,148 +0,0 @@
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# Robert Simai <robert.simai@suse.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-07 06:30+0000\n"
"Last-Translator: Robert Simai <robert.simai@suse.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(name)s count exceeds maximum value %(maximum)d"
msgstr "%(name)s Anzahl überschreitet den Maximalwert %(maximum)d"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s muss für den Alarmtyp %(type)s gesetzt sein"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s und %(rule2)s können nicht gleichzeitig festgelegt werden"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s ist nicht JSON-serialisierbar"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Alarm %(alarm_id)s nicht gefunden in Projekt %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Alarm %s nicht gefunden"
msgid "Alarm incorrect"
msgstr "Alaram inkorrekt"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "Alarmquote überschritten für Benutzer %(u)s bei Projekt %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Alarm, wenn %(meter_name)s %(comparison_operator)s ein %(statistic)s von "
"%(threshold)s über %(period)s Sekunden ist"
#, python-format
msgid "Alarm when %s event occurred."
msgstr "Alarm wenn %s Ereignis auftritt."
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Zeitmarkenwert %s konnte nicht analysiert werden"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Filterausdruck nicht gültig: %s"
msgid "Limit should be positive"
msgstr "Begrenzung muss positiv sein"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Nicht berechtigt für den Zugriff auf %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Benachrichtigung von Alarm %(alarm_name)s %(alarm_id)s mit Priorität "
"%(severity)s von %(previous)s in %(current)s mit Aktion %(action)s wegen "
"%(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "Ausdruck für 'Sortieren nach' nicht gültig: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"Der Datentyp %(type)s wird nicht unterstützt. Die Liste der unterstützten "
"Datentypen lautet: %(supported)s"
msgid "Threshold rules should be combined with \"and\" or \"or\""
msgstr "Schwellenregeln sollten mit \"und\" oder \"oder\" kombiniert werden."
msgid "Time constraint names must be unique for a given alarm."
msgstr "Zeitvorgabennamen müssen für einen angegebenen Alarm eindeutig sein."
#, python-format
msgid "Timezone %s is not valid"
msgstr "Zeitzone %s ist nicht gültig"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Wert %(value)s kann nicht in den erwarteten Datentyp %(type)s umgewandelt "
"werden."
#, python-format
msgid "Unable to parse action %s"
msgstr "Aktion %s konnte nicht analysiert werden"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Unerwartete Ausnahme beim Konvertieren von %(value)s in den erwarteten "
"Datentyp %(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Nicht unterstützte Aktion %s"
#, python-format
msgid "You are not authorized to create action: %s"
msgstr "Sie sind nicht zur Erstellung der Aktion berechtigt: %s"
msgid "alarm stats retrieval failed"
msgstr "Abrufen der Alarmstatistiken ist fehlgeschlagen"
msgid "state invalid"
msgstr "Zustand ungültig"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp sollte ein datetime-Objekt sein"
msgid "timestamp should be datetime object"
msgstr "timestamp sollte ein datetime-Objekt sein"
msgid "type must be set in every rule"
msgstr "Typ muss in jeder Regel gesetzt werden"

View File

@@ -1,186 +0,0 @@
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Andi Chandler <andi@gowling.com>, 2016. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-05-20 10:23+0000\n"
"Last-Translator: Andi Chandler <andi@gowling.com>\n"
"Language-Team: English (United Kingdom)\n"
"Language: en-GB\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(name)s count exceeds maximum value %(maximum)d"
msgstr "%(name)s count exceeds maximum value %(maximum)d"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s must be set for %(type)s type alarm"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s and %(rule2)s cannot be set at the same time"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s is not JSON serialisable"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Alarm %(alarm_id)s not found in project %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Alarm %s not found"
msgid "Alarm incorrect"
msgstr "Alarm incorrect"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "Alarm quota exceeded for user %(u)s on project %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
#, python-format
msgid "Alarm when %s event occurred."
msgstr "Alarm when %s event occurred."
#, python-format
msgid ""
"Composite rule alarm with composition form: %(expression)s remaining as "
"%(state)s, due to rules: %(rules)s %(description)s"
msgstr ""
"Composite rule alarm with composition form: %(expression)s remaining as "
"%(state)s, due to rules: %(rules)s %(description)s"
#, python-format
msgid ""
"Composite rule alarm with composition form: %(expression)s transition to "
"%(state)s, due to rules: %(rules)s %(description)s"
msgstr ""
"Composite rule alarm with composition form: %(expression)s transition to "
"%(state)s, due to rules: %(rules)s %(description)s"
#, python-format
msgid ""
"Event <id=%(id)s,event_type=%(event_type)s> hits the query <query="
"%(alarm_query)s>."
msgstr ""
"Event <id=%(id)s,event_type=%(event_type)s> hits the query <query="
"%(alarm_query)s>."
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Failed to parse the timestamp value %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Filter expression not valid: %s"
#, python-format
msgid ""
"Invalid input composite rule: %s, it should be a dict with an \"and\" or \"or"
"\" as key, and the value of dict should be a list of basic threshold rules "
"or sub composite rules, can be nested."
msgstr ""
"Invalid input composite rule: %s, it should be a dict with an \"and\" or \"or"
"\" as key, and the value of dict should be a list of basic threshold rules "
"or sub composite rules, can be nested."
msgid "Limit should be positive"
msgstr "Limit should be positive"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Not Authorised to access %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "Order-by expression not valid: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgid "Threshold rules should be combined with \"and\" or \"or\""
msgstr "Threshold rules should be combined with \"and\" or \"or\""
msgid "Time constraint names must be unique for a given alarm."
msgstr "Time constraint names must be unique for a given alarm."
#, python-format
msgid "Timezone %s is not valid"
msgstr "Timezone %s is not valid"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "Unable to parse action %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Unsupported action %s"
#, python-format
msgid ""
"Unsupported sub-rule type :%(rule)s in composite rule, should be one of: "
"%(plugins)s"
msgstr ""
"Unsupported sub-rule type :%(rule)s in composite rule, should be one of: "
"%(plugins)s"
#, python-format
msgid "You are not authorized to create action: %s"
msgstr "You are not authorised to create action: %s"
msgid "alarm stats retrieval failed"
msgstr "alarm stats retrieval failed"
msgid "state invalid"
msgstr "state invalid"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp should be datetime object"
msgid "timestamp should be datetime object"
msgstr "timestamp should be datetime object"
msgid "type must be set in every rule"
msgstr "type must be set in every rule"

View File

@@ -1,128 +0,0 @@
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 04:26+0000\n"
"Last-Translator: Copied by Zanata <copied-by-zanata@zanata.org>\n"
"Language-Team: Spanish\n"
"Language: es\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s debe establecerse para la alarma de tipo %(type)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s y %(rule2)s no se pueden establecer al mismo tiempo"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s no es serializable en JSON"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "La alarma %(alarm_id)s no se ha encontrado en el proyecto %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "No se ha encontrado la alarma %s"
msgid "Alarm incorrect"
msgstr "Alarma incorrecta"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr ""
"La cuota de alarma se ha excedido para el usuario %(u)s en el proyecto %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Alarma cuando %(meter_name)s es %(comparison_operator)s un %(statistic)s de "
"%(threshold)s por encima de %(period)s segundos"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "No se ha podido analizar el valor de indicación de fecha y hora %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Expresión de filtro no válida: %s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "No está autorizado para acceder a %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Notificando la alarma %(alarm_name)s %(alarm_id)s de prioridad %(severity)s "
"de %(previous)s a %(current)s con la acción %(action)s debido a %(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "Expresión de ordenar por no válida: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"El tipo de datos %(type)s no es compatible. La lista de tipo de datos "
"admitido es: %(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr ""
"Los nombres de restricción de tiempo deben ser exclusivos para una "
"determinada alarma."
#, python-format
msgid "Timezone %s is not valid"
msgstr "El huso horario %s no es válido"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"No se ha podido convertir el valor %(value)s al tipo de datos esperado "
"%(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "No se puede analizar la acción %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Excepción inesperada al convertir %(value)s al tipo de dato esperado "
"%(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Acción %s no admitida"
msgid "alarm stats retrieval failed"
msgstr "ha fallado la recuperación de estadísticas de la alarma"
msgid "state invalid"
msgstr "estado no válido"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp debe ser el objeto datetime"
msgid "timestamp should be datetime object"
msgstr ""
"La indicación de fecha y hora debe ser el objeto datetime (fecha y hora)"

View File

@@ -1,127 +0,0 @@
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 04:26+0000\n"
"Last-Translator: Copied by Zanata <copied-by-zanata@zanata.org>\n"
"Language-Team: French\n"
"Language: fr\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n > 1)\n"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s doit être défini pour l'alarme de type %(type)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s et %(rule2)s ne peuvent pas être définis en même temps"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s n'est pas sérialisable en JSON"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Alarme %(alarm_id)s introuvable dans le projet %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Alarme: %s non trouvé"
msgid "Alarm incorrect"
msgstr "Alarme incorrecte"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "Quota d'alarme dépassé pour l'utilisateur %(u)s sur le projet %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Alarme lorsque %(meter_name)s est %(comparison_operator)s à une "
"%(statistic)s de %(threshold)s de plus de %(period)s secondes"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Echec de l'analyse syntaxique de la valeur d'horodatage %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Filtre de l'expression n'est pas valide: %s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Non autorisé à accéder %(aspect)s %(id)s "
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Notification de l'alarme %(alarm_name)s %(alarm_id)s de priorité "
"%(severity)s de %(previous)s à %(current)s avec l'action %(action)s. Cause : "
"%(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "L'expression de tri n'est pas valide : %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"Le type de données %(type)s n'est pas supporté. Les types de données "
"supportés sont: %(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr ""
"Les noms de contrainte de temps doivent être uniques pour une alarme donnée."
#, python-format
msgid "Timezone %s is not valid"
msgstr "La timezone %s n'est pas valide"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Impossible de convertir la valeur %(value)s vers le type de données attendu "
"%(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "Impossible d'analyser l'action %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Exception inattendue lors de la conversion de %(value)s dans le type de "
"donnée attendue %(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Action non supporté %s"
msgid "alarm stats retrieval failed"
msgstr "Échec de la récupération de l'état de l'alerte"
msgid "state invalid"
msgstr "Etat non valide"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp doit correspondre à l'objet date-heure"
msgid "timestamp should be datetime object"
msgstr "timestamp doit correspondre à l'objet date-heure"

View File

@@ -1,126 +0,0 @@
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 06:58+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Italian\n"
"Language: it\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s deve essere impostata per la segnalazione di tipo %(type)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s e %(rule2)s non possono essere impostate contemporaneamente"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s non è serializzabile mediante JSON"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Segnalazione %(alarm_id)s non trovata nel progetto %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Segnalazione %s non trovata"
msgid "Alarm incorrect"
msgstr "Segnalazione non corretta"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "Quota di segnalazione superata per l'utente %(u)s nel progetto %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Segnalazione quando %(meter_name)s è %(comparison_operator)s un "
"%(statistic)s di %(threshold)s in %(period)s secondi"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Impossibile analizzare il valore data/ora %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Espressione del filtro non valida: %s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Non autorizzato ad accedere %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Notifica dell'allarme %(alarm_name)s %(alarm_id)s di priorità %(severity)s "
"da %(previous)s a %(current)s con azione %(action)s a causa di %(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "L'espressione ordina per non è valida: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"Il tipo di dati %(type)s non è supportato. L'elenco dei tipi di dati "
"supportati è: %(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr ""
"I nomi dei limiti di tempo devono essere univoci per una data segnalazione."
#, python-format
msgid "Timezone %s is not valid"
msgstr "Fuso orario %s non valido"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Impossibile convertire il valore %(value)s nel tipo di dati previsto "
"%(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "Impossibile analizzare l'azione %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Eccezione non prevista durante la conversione di %(value)s per il tipo di "
"dati previsto %(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Azione non supportata %s"
msgid "alarm stats retrieval failed"
msgstr "segnalazione richiamo statistiche non riuscito"
msgid "state invalid"
msgstr "stato non valido"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp deve essere un oggetto data/ora"
msgid "timestamp should be datetime object"
msgstr "timestamp deve essere un oggetto data/ora"

View File

@@ -1,135 +0,0 @@
# Akihiro Motoki <amotoki@gmail.com>, 2015. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2015. #zanata
# Tom Cocozzello <tjcocozz@us.ibm.com>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# Shinichi Take <chantake33@gmail.com>, 2016. #zanata
# Yuta Hono <yuta.hono@ntt.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 04:26+0000\n"
"Last-Translator: Copied by Zanata <copied-by-zanata@zanata.org>\n"
"Language-Team: Japanese\n"
"Language: ja\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=1; plural=0\n"
#, python-format
msgid "%(name)s count exceeds maximum value %(maximum)d"
msgstr "%(name)s が最大値 %(maximum)d を超えています"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(type)s タイプのアラームに %(rule)s を設定する必要があります"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s と %(rule2)s を同時に設定することはできません"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s が JSON シリアライズ可能ではありません"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "アラーム %(alarm_id)s がプロジェクト %(project)s には見つかりません"
#, python-format
msgid "Alarm %s not found"
msgstr "アラーム %s が見つかりません"
msgid "Alarm incorrect"
msgstr "アラームが正しくありません"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "プロジェクト %(p)s のユーザー %(u)s のアラームクォータを超過しました"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"%(period)s 秒にわたる %(meter_name)s と %(threshold)s の %(statistic)s の比較"
"が %(comparison_operator)s である場合のアラーム"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "タイムスタンプ値 %s を解析できませんでした"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "フィルター式が無効です: %s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "%(aspect)s %(id)s にアクセスする権限がありません"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"優先順位 %(severity)s のアラーム %(alarm_name)s %(alarm_id)s をアクション "
"%(action)s によって %(previous)s から %(current)s へ通知中。理由: "
"%(reason)s。"
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "order-by 式が無効です: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"データ型 %(type)s はサポートされていません。サポートされているデータ型のリス"
"ト: %(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr "時間制約の名前は、指定されたアラームで一意でなければなりません。"
#, python-format
msgid "Timezone %s is not valid"
msgstr "タイムゾーン %s が無効です"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr "値 %(value)s を、想定されるデータ型 %(type)s に変換できません。"
#, python-format
msgid "Unable to parse action %s"
msgstr "アクション %s を解析できません"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"%(value)s を想定されるデータ型 %(type)s に変換する際に、想定しない例外が発生"
"しました。"
#, python-format
msgid "Unsupported action %s"
msgstr "サポートされないアクション %s"
#, python-format
msgid "You are not authorized to create action: %s"
msgstr "アクションの作成を許可されていません: %s"
msgid "alarm stats retrieval failed"
msgstr "アラーム統計の取得に失敗しました"
msgid "state invalid"
msgstr "状態が無効です"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp は datetime オブジェクトでなければなりません"
msgid "timestamp should be datetime object"
msgstr "タイムスタンプは datetime オブジェクトでなければなりません"

View File

@@ -1,128 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# SEOKJAE BARK <sj.bark@samsung.com>, 2017. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2017-07-06 01:10+0000\n"
"Last-Translator: SEOKJAE BARK <sj.bark@samsung.com>\n"
"Language-Team: Korean (South Korea)\n"
"Language: ko-KR\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=1; plural=0\n"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(type)s 유형 알람에 %(rule)s을(를) 설정해야 함"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s 및 %(rule2)s을(를) 동시에 설정할 수 없음"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s은(는) JSON 직렬화 할 수 없음"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "%(alarm_id)s 알람이 %(project)s 프로젝트에 없음"
#, python-format
msgid "Alarm %s not found"
msgstr "%s 알람을 찾을 수 없음"
msgid "Alarm incorrect"
msgstr "알림이 올바르지 않습니다"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "%(p)s 프로젝트의 %(u)s 사용자에 대한 알람 할당량 초과"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"%(meter_name)s이(가) %(comparison_operator)s %(statistic)s %(threshold)s인 경"
"우 알람(%(period)s초 동안)"
#, python-format
msgid "Alarm when %s event occurred."
msgstr "%s의 event가 발생했을 때 알람을 발생"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "시간소인 값 %s 구문 분석 실패"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "필터 표현식이 올바르지 않음: %s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "%(aspect)s %(id)s에 대한 액세스 권한이 부여되지 않음"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"%(severity)s 우선순위에 대한 알람 %(alarm_name)s %(alarm_id)s 알림, "
"%(previous)s부터 %(current)s까지, 조치 %(action)s 사용. 이유: %(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "Order-by 표현식이 올바르지 않음: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"데이터 유형 %(type)s이(가) 지원되지 않습니다. 지원되는 데이터 유형 목록은 "
"%(supported)s입니다."
msgid "Time constraint names must be unique for a given alarm."
msgstr "시간 제한조건 이름은 지정된 알람에 고유해야 합니다."
#, python-format
msgid "Timezone %s is not valid"
msgstr "시간대 %s이(가) 올바르지 않음"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr "%(value)s 값을 예상 데이터 유형 %(type)s(으)로 변환할 수 없습니다."
#, python-format
msgid "Unable to parse action %s"
msgstr "%s 조치를 구문 분석할 수 없음"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"%(value)s을(를) 예상된 데이터 유형으로 변환하는 중에 예상치 않은 예외 발생 "
"%(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "지원되지 않는 조치 %s"
msgid "alarm stats retrieval failed"
msgstr "알람 통계 검색에 실패했습니다. "
msgid "state invalid"
msgstr "상태가 잘못되었습니다"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp는 Datetime 오브젝트여야 함"
msgid "timestamp should be datetime object"
msgstr "시간소인은 Datetime 오브젝트여야 함"

View File

@@ -1,142 +0,0 @@
# Translations template for aodh.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the aodh project.
#
# Translators:
# AnaFonseca <anafonseca.mobile@gmail.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 03:59+0000\n"
"Last-Translator: Copied by Zanata <copied-by-zanata@zanata.org>\n"
"Language: pt\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.9.6\n"
"Language-Team: Portuguese\n"
#, python-format
msgid "%(name)s count exceeds maximum value %(maximum)d"
msgstr "a contagem %(name)s excede o valor máximo %(maximum)d"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s devem ser definidas para o tipo de aviso %(type)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s e %(rule2)s não podem ser programadas ao mesmo tempo"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Alarme %(alarm_id)s não encontrado no projeto %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Alarme %s não encontrado"
msgid "Alarm incorrect"
msgstr "Alarme incorreto"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "Aviso de quota excedida para o utilizador %(u)s no projeto %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Alarme quando %(meter_name)s é %(comparison_operator)s uma %(statistic)s de "
"%(threshold)s em %(period)s segundos"
#, python-format
msgid "Alarm when %s event occurred."
msgstr "Alarme quando evento %s ocorreu."
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Erro ao analisar o valor data/hora %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Expressão filtro inválida: %s"
msgid "Limit should be positive"
msgstr "O limite deve ser positivo"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Não Autorizado o acesso a %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Notificar alarme %(alarm_name)s %(alarm_id)s de %(severity)s prioridade de "
"%(previous)s a %(current)s com a ação %(action)s devido a %(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "Expressão ordenar por inválida: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"O tipo de dados %(type)s não é suportado. A lista do tipo de dados "
"suportados é: %(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr ""
"Os nomes das restrições de tempo deve ser únicos para um determinado aviso."
#, python-format
msgid "Timezone %s is not valid"
msgstr "Fuso horário %s inválido"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Incapaz de converter o valor %(value)s para o tipo de dados esperados "
"%(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "Incapaz de analisar a ação %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Exceção inesperada ao converter %(value)s para o tipo de dados esperado "
"%(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Ação não suportada %s"
#, python-format
msgid "You are not authorized to create action: %s"
msgstr "Não tem permissão para criar a ação: %s"
msgid "alarm stats retrieval failed"
msgstr "a extração da estatística do alarme falhou"
msgid "state invalid"
msgstr "estato inválido"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp deve ser um objeto data/hora"
msgid "timestamp should be datetime object"
msgstr "o timestamp deve ser um objeto data/hora"

View File

@@ -1,127 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 06:59+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Portuguese (Brazil)\n"
"Language: pt-BR\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s deve ser definido para alarme de tipo %(type)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s e %(rule2)s não podem ser configurados ao mesmo tempo"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s não é JSON serializável"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Alarme%(alarm_id)s não localizado no projeto%(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Alarme %s não localizado"
msgid "Alarm incorrect"
msgstr "Alarme incorreto"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "Cota de alarme excedida para usuário %(u)s no projeto %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"Alarma quando %(meter_name)s é %(comparison_operator)s que %(statistic)s de "
"%(threshold)s durante %(period)s segundos"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Falha ao analisar o valor do registro de data e hora %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Expressão de filtro inválida: %s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Não Autorizado a acessar %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Notificando alarme %(alarm_name)s %(alarm_id)s da prioridade %(severity)s do "
"%(previous)s para %(current)s com ação %(action)s porque %(reason)s."
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "Expressão solicitada inválida: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"O tipo de dados %(type)s não é suportado. A lista de tipos de dados "
"suportados é: %(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr ""
"Nomes de restrição de tempo devem ser exclusivos para um determinado alarme."
#, python-format
msgid "Timezone %s is not valid"
msgstr "Fuso horário %s não é válido"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Não é possível converter o valor %(value)s para o tipo de dados esperado "
"%(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "Não é possível analisar ação %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"Exceção inesperada convertendo %(value)s para o tipo de dado esperado "
"%(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "Ação não suportada %s"
msgid "alarm stats retrieval failed"
msgstr "recuperação das estatísticas de alarme falhou"
msgid "state invalid"
msgstr "estado inválido"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp precisa ser objeto de data/hora"
msgid "timestamp should be datetime object"
msgstr "registro de data e hora precisa ser objeto de data/hora"

View File

@@ -1,146 +0,0 @@
# Translations template for aodh.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the aodh project.
#
# Translators:
# Altinbek <altinbek.089@mail.ru>, 2015
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-04-12 03:59+0000\n"
"Last-Translator: Copied by Zanata <copied-by-zanata@zanata.org>\n"
"Language: ru\n"
"Plural-Forms: nplurals=4; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n"
"%10<=4 && (n%100<12 || n%100>14) ? 1 : n%10==0 || (n%10>=5 && n%10<=9) || (n"
"%100>=11 && n%100<=14)? 2 : 3);\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.9.6\n"
"Language-Team: Russian\n"
#, python-format
msgid "%(name)s count exceeds maximum value %(maximum)d"
msgstr "контент %(name)s превышает количество символов в %(maximum)d"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "%(rule)s должны быть установлены для %(type)s сигналов тревоги"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s и %(rule2)s не могут работать одновременно"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s не является сериализуемым с помощью JSON"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "Сигнал %(alarm_id)s не найдены в проекте %(project)s"
#, python-format
msgid "Alarm %s not found"
msgstr "Сигнал %s не найден"
msgid "Alarm incorrect"
msgstr "Сигнализация неисправна"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "количество ошибок пользователем %(u)s превысила норму %(p)s"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"При срабатываемости сигналатревоги %(meter_name)s как "
"%(comparison_operator)s a %(statistic)s в %(threshold)s срабатывает за "
"%(period)s секунду"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "Не удалось разобрать значение временной метки %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "Фильтр ввода не действует: %s"
msgid "Limit should be positive"
msgstr "Лимит должен быть точным"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "Нет доступа к %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"Сигнал тревоги %(alarm_name)s %(alarm_id)s не работает потому что "
"%(reason)s в %(severity)s приоритетом на %(previous)s %(current)s "
"влияние на действие %(action)s"
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "вызов значения не активна: %s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr ""
"Тип данных %(type)s не поддерживается. Список поддерживаемых типов данных: "
"%(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr "Название временного контента должна отличаться для сигнала превоги"
#, python-format
msgid "Timezone %s is not valid"
msgstr "таймер %s не актевирован"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr ""
"Невозможно преобразовать значение %(value)s с ожидаемым типом данных "
"%(type)s."
#, python-format
msgid "Unable to parse action %s"
msgstr "Невозможно разобрать действий %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr ""
"мгновенное преобразования значения %(value)s с ожидаемым типом данных "
"%(type)s."
#, python-format
msgid "Unsupported action %s"
msgstr "не поддерживается действие %s"
#, python-format
msgid "You are not authorized to create action: %s"
msgstr "Вы не авторизованы, чтобы деиствовать: %s"
msgid "alarm stats retrieval failed"
msgstr "Статистика сигнал оповещения не получен"
msgid "state invalid"
msgstr "Неправильное состояние"
msgid "state_timestamp should be datetime object"
msgstr "В state_timestamp должен быть указан дата объекта"
msgid "timestamp should be datetime object"
msgstr "должна быть указана дата вывода объекта"

View File

@@ -1,128 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# OpenStack Infra <zanata@openstack.org>, 2015. #zanata
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-17 02:24+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Chinese (China)\n"
"Language: zh-CN\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=1; plural=0\n"
#, python-format
msgid "%(name)s count exceeds maximum value %(maximum)d"
msgstr "%(name)s数量超过最大值%(maximum)d"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "类型为%(type)s的告警必须设置%(rule)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "%(rule1)s和%(rule2)s无法同时被设置"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s 不是可序列化 JSON"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "告警%(alarm_id)s在项目%(project)s中未找到"
#, python-format
msgid "Alarm %s not found"
msgstr "告警%s没有找到"
msgid "Alarm incorrect"
msgstr "警报不正确"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "用户%(u)s在项目%(p)s中的告警配额已溢出"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"请在 %(meter_name)s 是 %(comparison_operator)s%(threshold)s 的 "
"%(statistic)s的时间超过 %(period)s 秒时发出警报"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "解析时间戳%s失败"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "过滤表达式不合法:%s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "权限不足以访问%(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"正在通知警报%(alarm_name)s %(alarm_id)s警报级别%(severity)s状态"
"从%(previous)s变为%(current)s动作为%(action)s原因是%(reason)s。"
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "orderby表达式不合法%s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr "数据类型%(type)s不被支持。支持的数据类型列表%(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr "一个指定的告警的时间约束名称必须唯一"
#, python-format
msgid "Timezone %s is not valid"
msgstr "时区%s不合法"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr "无法转换%(value)s到预期的数据类型%(type)s。"
#, python-format
msgid "Unable to parse action %s"
msgstr "无法解析动作%s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr "在转换%(value)s到预期的数据类型%(type)s时发生了未预料的异常。"
#, python-format
msgid "Unsupported action %s"
msgstr "动作%s不支持"
#, python-format
msgid "You are not authorized to create action: %s"
msgstr "你没有权限创建动作:%s"
msgid "alarm stats retrieval failed"
msgstr "警报统计信息获取失败"
msgid "state invalid"
msgstr "状态无效"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp必须是datetime对象"
msgid "timestamp should be datetime object"
msgstr "timestamp必须是datatime对象"

View File

@@ -1,119 +0,0 @@
# Lucas Palm <lapalm@us.ibm.com>, 2015. #zanata
# Jennifer <cristxu@tw.ibm.com>, 2016. #zanata
# KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: aodh 4.0.1.dev87\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2017-07-13 18:01+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-06-03 07:04+0000\n"
"Last-Translator: KATO Tomoyuki <kato.tomoyuki@jp.fujitsu.com>\n"
"Language-Team: Chinese (Taiwan)\n"
"Language: zh-TW\n"
"X-Generator: Zanata 3.9.6\n"
"Plural-Forms: nplurals=1; plural=0\n"
#, python-format
msgid "%(rule)s must be set for %(type)s type alarm"
msgstr "必須為 %(type)s 類型警示設定 %(rule)s"
#, python-format
msgid "%(rule1)s and %(rule2)s cannot be set at the same time"
msgstr "無法同時設定 %(rule1)s 和 %(rule2)s"
#, python-format
msgid "%s is not JSON serializable"
msgstr "%s 不可進行 JSON 序列化"
#, python-format
msgid "Alarm %(alarm_id)s not found in project %(project)s"
msgstr "在專案 %(project)s 中找不到警示 %(alarm_id)s"
#, python-format
msgid "Alarm %s not found"
msgstr "找不到警示 %s"
msgid "Alarm incorrect"
msgstr "警示不正確"
#, python-format
msgid "Alarm quota exceeded for user %(u)s on project %(p)s"
msgstr "在專案 %(p)s 上,針對使用者 %(u)s 已超出的警示配額"
#, python-format
msgid ""
"Alarm when %(meter_name)s is %(comparison_operator)s a %(statistic)s of "
"%(threshold)s over %(period)s seconds"
msgstr ""
"如果 %(meter_name)s 在 %(period)s 秒內 %(comparison_operator)s %(threshold)s "
"的%(statistic)s則會出現警示"
#, python-format
msgid "Failed to parse the timestamp value %s"
msgstr "無法剖析時間戳記值 %s"
#, python-format
msgid "Filter expression not valid: %s"
msgstr "過濾表示式無效:%s"
#, python-format
msgid "Not Authorized to access %(aspect)s %(id)s"
msgstr "未獲授權來存取 %(aspect)s %(id)s"
#, python-format
msgid ""
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s priority from "
"%(previous)s to %(current)s with action %(action)s because %(reason)s."
msgstr ""
"正在以動作 %(action)s 通知優先順序為 %(severity)s 的警示 %(alarm_name)s "
"%(alarm_id)s從 %(previous)s 至 %(current)s因為 %(reason)s。"
#, python-format
msgid "Order-by expression not valid: %s"
msgstr "排序方式表示式無效:%s"
#, python-format
msgid ""
"The data type %(type)s is not supported. The supported data type list is: "
"%(supported)s"
msgstr "不支援資料類型 %(type)s。支援的資料類型清單為%(supported)s"
msgid "Time constraint names must be unique for a given alarm."
msgstr "針對給定的警示,時間限制名稱必須是唯一的。"
#, python-format
msgid "Timezone %s is not valid"
msgstr "時區 %s 無效"
#, python-format
msgid ""
"Unable to convert the value %(value)s to the expected data type %(type)s."
msgstr "無法將值 %(value)s 轉換成預期的資料類型 %(type)s。"
#, python-format
msgid "Unable to parse action %s"
msgstr "無法剖析動作 %s"
#, python-format
msgid ""
"Unexpected exception converting %(value)s to the expected data type %(type)s."
msgstr "將 %(value)s 轉換為預期的資料類型%(type)s 時發生非預期的異常狀況。"
#, python-format
msgid "Unsupported action %s"
msgstr "不受支援的動作 %s"
msgid "alarm stats retrieval failed"
msgstr "警示統計資料擷取失敗"
msgid "state invalid"
msgstr "狀態無效"
msgid "state_timestamp should be datetime object"
msgstr "state_timestamp 應該為日期時間物件"
msgid "timestamp should be datetime object"
msgstr "時間戳記應該為日期時間物件"

View File

@@ -1,62 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013-2015 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_messaging
from oslo_messaging import serializer as oslo_serializer
DEFAULT_URL = "__default__"
TRANSPORTS = {}
_SERIALIZER = oslo_serializer.JsonPayloadSerializer()
def setup():
oslo_messaging.set_transport_defaults('aodh')
def get_transport(conf, url=None, optional=False, cache=True):
"""Initialise the oslo_messaging layer."""
global TRANSPORTS, DEFAULT_URL
cache_key = url or DEFAULT_URL
transport = TRANSPORTS.get(cache_key)
if not transport or not cache:
try:
transport = oslo_messaging.get_notification_transport(conf, url)
except (oslo_messaging.InvalidTransportURL,
oslo_messaging.DriverLoadFailure):
if not optional or url:
# NOTE(sileht): oslo_messaging is configured but unloadable
# so reraise the exception
raise
return None
else:
if cache:
TRANSPORTS[cache_key] = transport
return transport
def get_batch_notification_listener(transport, targets, endpoints,
allow_requeue=False,
batch_size=1, batch_timeout=None):
"""Return a configured oslo_messaging notification listener."""
return oslo_messaging.get_batch_notification_listener(
transport, targets, endpoints, executor='threading',
allow_requeue=allow_requeue,
batch_size=batch_size, batch_timeout=batch_timeout)
def get_notifier(transport, publisher_id):
"""Return a configured oslo_messaging notifier."""
notifier = oslo_messaging.Notifier(transport, serializer=_SERIALIZER)
return notifier.prepare(publisher_id=publisher_id)
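A minimal usage sketch for the helpers above (hypothetical caller code, not
part of this module; conf is assumed to be registered elsewhere by the
service, and 'aodh.demo' and 'alarm.trigger' are made-up example values):

from oslo_config import cfg

from aodh import messaging

conf = cfg.CONF  # assumed populated by the service's startup code
messaging.setup()  # apply oslo.messaging defaults for the 'aodh' exchange
transport = messaging.get_transport(conf, optional=True)
if transport is not None:
    notifier = messaging.get_notifier(transport, publisher_id='aodh.demo')
    # The AlarmEndpoint in the notifier module handles the 'sample'
    # priority, hence sample() rather than info().
    notifier.sample({}, 'alarm.trigger', {'alarm_id': 'example-id'})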

View File

@@ -1,164 +0,0 @@
#
# Copyright 2013-2015 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import cotyledon
from oslo_config import cfg
from oslo_log import log
import oslo_messaging
from oslo_utils import netutils
import six
from stevedore import extension
from aodh import messaging
LOG = log.getLogger(__name__)
OPTS = [
cfg.IntOpt('batch_size',
default=1,
help='Number of notification messages to wait before '
'dispatching them.'),
cfg.IntOpt('batch_timeout',
help='Number of seconds to wait before dispatching samples '
'when batch_size is not reached (None means indefinitely).'
),
]
@six.add_metaclass(abc.ABCMeta)
class AlarmNotifier(object):
"""Base class for alarm notifier plugins."""
@staticmethod
def __init__(conf):
pass
@abc.abstractmethod
def notify(self, action, alarm_id, alarm_name, severity, previous,
current, reason, reason_data):
"""Notify that an alarm has been triggered.
:param action: The action being carried out, as a parsed URL.
:param alarm_id: The ID of the triggered alarm.
:param alarm_name: The name of the triggered alarm.
:param severity: The severity level of the triggered alarm.
:param previous: The previous state of the alarm.
:param current: The current state of the alarm.
:param reason: The reason the alarm changed its state.
:param reason_data: A dict representation of the reason.
"""
class AlarmNotifierService(cotyledon.Service):
NOTIFIER_EXTENSIONS_NAMESPACE = "aodh.notifier"
def __init__(self, worker_id, conf):
super(AlarmNotifierService, self).__init__(worker_id)
self.conf = conf
transport = messaging.get_transport(self.conf)
self.notifiers = extension.ExtensionManager(
self.NOTIFIER_EXTENSIONS_NAMESPACE,
invoke_on_load=True,
invoke_args=(self.conf,))
target = oslo_messaging.Target(topic=self.conf.notifier_topic)
self.listener = messaging.get_batch_notification_listener(
transport, [target], [AlarmEndpoint(self.notifiers)], False,
self.conf.notifier.batch_size, self.conf.notifier.batch_timeout)
self.listener.start()
def terminate(self):
self.listener.stop()
self.listener.wait()
class AlarmEndpoint(object):
def __init__(self, notifiers):
self.notifiers = notifiers
def sample(self, notifications):
"""Endpoint for alarm notifications"""
LOG.debug('Received %s messages in batch.', len(notifications))
for notification in notifications:
self._process_alarm(self.notifiers, notification['payload'])
@staticmethod
def _handle_action(notifiers, action, alarm_id, alarm_name, severity,
previous, current, reason, reason_data):
"""Process action on alarm
:param notifiers: list of possible notifiers.
:param action: The action that is being attended, as a parsed URL.
:param alarm_id: The triggered alarm.
:param alarm_name: The name of triggered alarm.
:param severity: The level of triggered alarm
:param previous: The previous state of the alarm.
:param current: The current state of the alarm.
:param reason: The reason the alarm changed its state.
:param reason_data: A dict representation of the reason.
"""
try:
action = netutils.urlsplit(action)
except Exception:
LOG.error(
("Unable to parse action %(action)s for alarm "
"%(alarm_id)s"), {'action': action, 'alarm_id': alarm_id})
return
try:
notifier = notifiers[action.scheme].obj
except KeyError:
scheme = action.scheme
LOG.error(
("Action %(scheme)s for alarm %(alarm_id)s is unknown, "
"cannot notify"),
{'scheme': scheme, 'alarm_id': alarm_id})
return
try:
LOG.debug("Notifying alarm %(id)s with action %(act)s",
{'id': alarm_id, 'act': action})
notifier.notify(action, alarm_id, alarm_name, severity,
previous, current, reason, reason_data)
except Exception:
LOG.exception("Unable to notify alarm %s", alarm_id)
@staticmethod
def _process_alarm(notifiers, data):
"""Notify that alarm has been triggered.
:param notifiers: list of possible notifiers
:param data: (dict): alarm data
"""
actions = data.get('actions')
if not actions:
LOG.error("Unable to notify for an alarm with no action")
return
for action in actions:
AlarmEndpoint._handle_action(notifiers, action,
data.get('alarm_id'),
data.get('alarm_name'),
data.get('severity'),
data.get('previous'),
data.get('current'),
data.get('reason'),
data.get('reason_data'))
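For context, a minimal notifier plugin satisfying the AlarmNotifier contract
above might look like the following (hypothetical sketch only; real plugins
are additionally registered under the 'aodh.notifier' entry-point namespace
that NOTIFIER_EXTENSIONS_NAMESPACE names, which this toy class is not):

from oslo_log import log

from aodh import notifier

LOG = log.getLogger(__name__)


class EchoAlarmNotifier(notifier.AlarmNotifier):
    """Hypothetical notifier that only logs the transition it receives."""

    def notify(self, action, alarm_id, alarm_name, severity, previous,
               current, reason, reason_data):
        # No delivery at all; just record what would have been sent.
        LOG.info("alarm %s (%s): %s -> %s because %s",
                 alarm_id, alarm_name, previous, current, reason)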

View File

@@ -1,40 +0,0 @@
#
# Copyright 2013 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Log alarm notifier."""
from oslo_log import log
from aodh.i18n import _
from aodh import notifier
LOG = log.getLogger(__name__)
class LogAlarmNotifier(notifier.AlarmNotifier):
"Log alarm notifier."""
@staticmethod
def notify(action, alarm_id, alarm_name, severity, previous, current,
reason, reason_data):
LOG.info(_(
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s "
"priority from %(previous)s to %(current)s with action %(action)s"
" because %(reason)s.") % ({'alarm_name': alarm_name,
'alarm_id': alarm_id,
'severity': severity,
'previous': previous,
'current': current,
'action': action.geturl(),
'reason': reason}))
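Since notify() above is a staticmethod, the log notifier can be exercised
directly; a quick sketch (assuming this module is importable as
aodh.notifier.log; 'log://' and the alarm values are illustrative only):

from oslo_utils import netutils

from aodh.notifier import log as log_notifier

action = netutils.urlsplit('log://')
log_notifier.LogAlarmNotifier.notify(
    action, 'alarm-id', 'cpu_high', 'low', 'ok', 'alarm',
    'threshold crossed', {'reason': 'threshold crossed'})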

View File

@@ -1,109 +0,0 @@
#
# Copyright 2013-2014 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Rest alarm notifier."""
from oslo_config import cfg
from oslo_log import log
from oslo_serialization import jsonutils
from oslo_utils import uuidutils
import requests
import six.moves.urllib.parse as urlparse
from aodh import notifier
LOG = log.getLogger(__name__)
OPTS = [
cfg.StrOpt('rest_notifier_certificate_file',
default='',
help='SSL Client certificate file for REST notifier.'
),
cfg.StrOpt('rest_notifier_certificate_key',
default='',
help='SSL Client private key file for REST notifier.'
),
cfg.StrOpt('rest_notifier_ca_bundle_certificate_path',
help='SSL CA_BUNDLE certificate for REST notifier',
),
cfg.BoolOpt('rest_notifier_ssl_verify',
default=True,
help='Whether to verify the SSL Server certificate when '
'calling alarm action.'
),
cfg.IntOpt('rest_notifier_max_retries',
default=0,
help='Number of retries for REST notifier',
),
]
class RestAlarmNotifier(notifier.AlarmNotifier):
"""Rest alarm notifier."""
def __init__(self, conf):
super(RestAlarmNotifier, self).__init__(conf)
self.conf = conf
def notify(self, action, alarm_id, alarm_name, severity, previous,
current, reason, reason_data, headers=None):
headers = headers or {}
if 'x-openstack-request-id' not in headers:
headers['x-openstack-request-id'] = b'req-' + \
uuidutils.generate_uuid().encode('ascii')
LOG.info(
"Notifying alarm %(alarm_name)s %(alarm_id)s with severity"
" %(severity)s from %(previous)s to %(current)s with action "
"%(action)s because %(reason)s. request-id: %(request_id)s " %
({'alarm_name': alarm_name, 'alarm_id': alarm_id,
'severity': severity, 'previous': previous,
'current': current, 'action': action, 'reason': reason,
'request_id': headers['x-openstack-request-id']}))
body = {'alarm_name': alarm_name, 'alarm_id': alarm_id,
'severity': severity, 'previous': previous,
'current': current, 'reason': reason,
'reason_data': reason_data}
headers['content-type'] = 'application/json'
kwargs = {'data': jsonutils.dumps(body),
'headers': headers}
if action.scheme == 'https':
default_verify = int(self.conf.rest_notifier_ssl_verify)
options = urlparse.parse_qs(action.query)
verify = bool(int(options.get('aodh-alarm-ssl-verify',
[default_verify])[-1]))
if verify and self.conf.rest_notifier_ca_bundle_certificate_path:
verify = self.conf.rest_notifier_ca_bundle_certificate_path
kwargs['verify'] = verify
cert = self.conf.rest_notifier_certificate_file
key = self.conf.rest_notifier_certificate_key
if cert:
kwargs['cert'] = (cert, key) if key else cert
# FIXME(rhonjo): Retries are done automatically by urllib3 in the
# requests library. However, the urllib3 implementation puts no
# interval between retries. It would be better to add an interval
# between retries (future work).
max_retries = self.conf.rest_notifier_max_retries
session = requests.Session()
session.mount(action.geturl(),
requests.adapters.HTTPAdapter(max_retries=max_retries))
resp = session.post(action.geturl(), **kwargs)
LOG.info('Notifying alarm <%(id)s> gets response: %(status_code)s '
'%(reason)s.', {'id': alarm_id,
'status_code': resp.status_code,
'reason': resp.reason})
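The per-action 'aodh-alarm-ssl-verify' query option shown above overrides
the rest_notifier_ssl_verify default for a single alarm action. A standalone
illustration of that parsing logic (example URL made up):

import six.moves.urllib.parse as urlparse

action = urlparse.urlsplit('https://example.com/hook?aodh-alarm-ssl-verify=0')
default_verify = 1  # equivalent to rest_notifier_ssl_verify = True
options = urlparse.parse_qs(action.query)
verify = bool(int(options.get('aodh-alarm-ssl-verify', [default_verify])[-1]))
assert verify is False  # this action opted out of certificate verification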

View File

@@ -1,36 +0,0 @@
#
# Copyright 2013-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test alarm notifier."""
from aodh import notifier
class TestAlarmNotifier(notifier.AlarmNotifier):
"Test alarm notifier."""
def __init__(self, conf):
super(TestAlarmNotifier, self).__init__(conf)
self.notifications = []
def notify(self, action, alarm_id, alarm_name, severity,
previous, current, reason, reason_data):
self.notifications.append((action,
alarm_id,
alarm_name,
severity,
previous,
current,
reason,
reason_data))

View File

@@ -1,59 +0,0 @@
#
# Copyright 2014 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Rest alarm notifier with trusted authentication."""
from six.moves.urllib import parse
from aodh import keystone_client
from aodh.notifier import rest
class TrustAlarmNotifierMixin(object):
"""Mixin class to add Keystone trust support to an AlarmNotifier.
Provides a notify() method that interprets the trust ID and then calls
the parent class's notify(), passing the necessary authentication data in
the headers.
"""
def notify(self, action, alarm_id, alarm_name, severity, previous, current,
reason, reason_data):
trust_id = action.username
client = keystone_client.get_trusted_client(self.conf, trust_id)
# Remove the fake user
netloc = action.netloc.split("@")[1]
# Remove the trust prefix
scheme = action.scheme[6:]
action = parse.SplitResult(scheme, netloc, action.path, action.query,
action.fragment)
headers = {'X-Auth-Token': keystone_client.get_auth_token(client)}
super(TrustAlarmNotifierMixin, self).notify(
action, alarm_id, alarm_name, severity, previous, current, reason,
reason_data, headers)
class TrustRestAlarmNotifier(TrustAlarmNotifierMixin, rest.RestAlarmNotifier):
"""Notifier supporting keystone trust authentication.
This alarm notifier is intended to call an endpoint using Keystone
authentication. It authenticates as the aodh service user, using the
trust ID provided.
The URL must be in the form ``trust+http://trust-id@host/action``.
"""

View File

@@ -1,227 +0,0 @@
#
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Zaqar alarm notifier."""
from oslo_config import cfg
from oslo_log import log
import six.moves.urllib.parse as urlparse
from aodh import keystone_client
from aodh import notifier
from aodh.notifier import trust
LOG = log.getLogger(__name__)
SERVICE_OPTS = [
cfg.StrOpt('zaqar',
default='messaging',
help='Message queue service type.'),
]
class ZaqarAlarmNotifier(notifier.AlarmNotifier):
"""Zaqar notifier.
This notifier posts alarm notifications either to a Zaqar subscription or
to an existing Zaqar queue with a pre-signed URL.
To create a new subscription in the service project, use a notification URL
of the form::
zaqar://?topic=example&subscriber=mailto%3A//test%40example.com&ttl=3600
Multiple subscribers are allowed. ``ttl`` is the time to live of the
subscription. The queue will be created automatically, in the service
project, with a name based on the topic and the alarm ID.
To use a pre-signed URL for an existing queue, use a notification URL with
the scheme ``zaqar://`` and the pre-signing data from Zaqar in the query
string::
zaqar://?queue_name=example&project_id=foo&
paths=/messages&methods=POST&expires=1970-01-01T00:00Z&
signature=abcdefg
"""
def __init__(self, conf):
super(ZaqarAlarmNotifier, self).__init__(conf)
self.conf = conf
self._zclient = None
self._zendpoint = None
def _get_endpoint(self):
if self._zendpoint is None:
try:
ks_client = keystone_client.get_client(self.conf)
z_srv = ks_client.services.find(
type=self.conf.service_types.zaqar)
endpoint_type = self.conf.service_credentials.interface
z_endpoint = ks_client.endpoints.find(service_id=z_srv.id,
interface=endpoint_type)
self._zendpoint = z_endpoint.url
except Exception:
LOG.error("Aodh was configured to use zaqar:// action,"
" but Zaqar endpoint could not be found in"
" Keystone service catalog.")
return self._zendpoint
def _get_client_conf(self):
conf = self.conf.service_credentials
return {
'auth_opts': {
'backend': 'keystone',
'options': {
'os_username': conf.os_username,
'os_password': conf.os_password,
'os_project_name': conf.os_tenant_name,
'os_auth_url': conf.os_auth_url,
'insecure': ''
}
}
}
def get_zaqar_client(self, conf):
try:
from zaqarclient.queues import client as zaqar_client
return zaqar_client.Client(self._get_endpoint(),
version=2, conf=conf)
except Exception:
LOG.error("Failed to connect to Zaqar service ",
exc_info=True)
def _get_presigned_client_conf(self, queue_info):
queue_name = queue_info.get('queue_name', [''])[0]
if not queue_name:
return None, None
signature = queue_info.get('signature', [''])[0]
expires = queue_info.get('expires', [''])[0]
paths = queue_info.get('paths', [''])[0].split(',')
methods = queue_info.get('methods', [''])[0].split(',')
project_id = queue_info.get('project_id', [''])[0]
conf = {
'auth_opts': {
'backend': 'signed-url',
'options': {
'signature': signature,
'expires': expires,
'methods': methods,
'paths': paths,
'os_project_id': project_id
}
}
}
return conf, queue_name
def notify(self, action, alarm_id, alarm_name, severity, previous,
current, reason, reason_data, headers=None):
LOG.info(
"Notifying alarm %(alarm_name)s %(alarm_id)s of %(severity)s "
"priority from %(previous)s to %(current)s with action %(action)s"
" because %(reason)s." % ({'alarm_name': alarm_name,
'alarm_id': alarm_id,
'severity': severity,
'previous': previous,
'current': current,
'action': action,
'reason': reason}))
body = {'alarm_name': alarm_name, 'alarm_id': alarm_id,
'severity': severity, 'previous': previous,
'current': current, 'reason': reason,
'reason_data': reason_data}
message = dict(body=body)
self.notify_zaqar(action, message, headers)
@property
def client(self):
if self._zclient is None:
self._zclient = self.get_zaqar_client(self._get_client_conf())
return self._zclient
def notify_zaqar(self, action, message, headers=None):
queue_info = urlparse.parse_qs(action.query)
try:
# NOTE(flwang): Try to build a pre-signed client if the user has
# provided enough information. Otherwise, fall back to a client
# using the service account and a queue name derived from this alarm.
conf, queue_name = self._get_presigned_client_conf(queue_info)
if conf is not None:
zaqar_client = self.get_zaqar_client(conf)
if conf is None or queue_name is None or zaqar_client is None:
zaqar_client = self.client
# queue_name is a combination of <alarm-id>-<topic>
queue_name = "%s-%s" % (message['body']['alarm_id'],
queue_info.get('topic')[-1])
# create a queue in zaqar
queue = zaqar_client.queue(queue_name)
subscriber_list = queue_info.get('subscriber', [])
ttl = int(queue_info.get('ttl', ['3600'])[-1])
for subscriber in subscriber_list:
# add subscriber to the zaqar queue
subscription_data = dict(subscriber=subscriber,
ttl=ttl)
zaqar_client.subscription(queue_name, **subscription_data)
# post the message to the queue
queue.post(message)
except IndexError:
LOG.error("Required query option missing in action %s",
action)
except Exception:
LOG.error("Unknown error occurred; Failed to post message to"
" Zaqar queue",
exc_info=True)
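# Illustrative sketch (values hypothetical): the query string of a
# subscription-style action URL parses into the queue_info mapping
# consumed by notify_zaqar() above.
#
#   >>> import six.moves.urllib.parse as urlparse
#   >>> urlparse.parse_qs(
#   ...     'topic=example&subscriber=mailto%3A//test%40example.com&ttl=3600')
#   {'topic': ['example'],
#    'subscriber': ['mailto://test@example.com'],
#    'ttl': ['3600']}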
class TrustZaqarAlarmNotifier(trust.TrustAlarmNotifierMixin,
ZaqarAlarmNotifier):
"""Zaqar notifier using a Keystone trust to post to user-defined queues.
The URL must be in the form ``trust+zaqar://trust_id@?queue_name=example``.
"""
def _get_client_conf(self, auth_token):
return {
'auth_opts': {
'backend': 'keystone',
'options': {
'os_auth_token': auth_token,
}
}
}
def notify_zaqar(self, action, message, headers):
queue_info = urlparse.parse_qs(action.query)
try:
queue_name = queue_info.get('queue_name')[-1]
except IndexError:
LOG.error("Required 'queue_name' query option missing in"
" action %s",
action)
return
try:
conf = self._get_client_conf(headers['X-Auth-Token'])
client = self.get_zaqar_client(conf)
queue = client.queue(queue_name)
queue.post(message)
except Exception:
LOG.error("Unknown error occurred; Failed to post message to"
" Zaqar queue",
exc_info=True)

View File

@@ -1,66 +0,0 @@
# Copyright 2014-2015 eNovance
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from keystoneauth1 import loading
import aodh.api
import aodh.api.controllers.v2.alarm_rules.gnocchi
import aodh.api.controllers.v2.alarms
import aodh.coordination
import aodh.evaluator
import aodh.evaluator.event
import aodh.evaluator.gnocchi
import aodh.evaluator.threshold
import aodh.event
import aodh.keystone_client
import aodh.notifier.rest
import aodh.notifier.zaqar
import aodh.queue
import aodh.service
import aodh.storage
def list_opts():
return [
('DEFAULT',
itertools.chain(
aodh.evaluator.OPTS,
aodh.evaluator.event.OPTS,
aodh.evaluator.threshold.OPTS,
aodh.notifier.rest.OPTS,
aodh.queue.OPTS,
aodh.service.OPTS)),
('api',
itertools.chain(
aodh.api.OPTS,
aodh.api.controllers.v2.alarm_rules.gnocchi.GNOCCHI_OPTS,
aodh.api.controllers.v2.alarms.ALARM_API_OPTS)),
('coordination', aodh.coordination.OPTS),
('database', aodh.storage.OPTS),
('evaluator', aodh.service.EVALUATOR_OPTS),
('listener', itertools.chain(aodh.service.LISTENER_OPTS,
aodh.event.OPTS)),
('notifier', aodh.service.NOTIFIER_OPTS),
('service_credentials', aodh.keystone_client.OPTS),
('service_types', aodh.notifier.zaqar.SERVICE_OPTS),
('notifier', aodh.notifier.OPTS),
]
def list_keystoneauth_opts():
# NOTE(sileht): the configuration file contains only the options
# for the password plugin that handles keystone v2 and v3 API
# with discovery. But other options are possible.
return [('service_credentials', (
loading.get_auth_common_conf_options() +
loading.get_auth_plugin_conf_options('password')))]
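# Illustrative sketch of how these option lists are typically consumed
# (this mirrors aodh.service.prepare_service; the ConfigOpts instance
# is hypothetical):
#
#   from oslo_config import cfg
#
#   conf = cfg.ConfigOpts()
#   for group, options in list_opts():
#       conf.register_opts(list(options),
#                          group=None if group == "DEFAULT" else group)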

View File

@@ -1,58 +0,0 @@
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from oslo_log import log
import oslo_messaging
import six
from aodh import messaging
from aodh.storage import models
OPTS = [
cfg.StrOpt('notifier_topic',
default='alarming',
help='The topic that aodh uses for alarm notifier '
'messages.'),
]
LOG = log.getLogger(__name__)
class AlarmNotifier(object):
def __init__(self, conf):
self.notifier = oslo_messaging.Notifier(
messaging.get_transport(conf),
driver='messagingv2',
publisher_id="alarming.evaluator",
topics=[conf.notifier_topic])
def notify(self, alarm, previous, reason, reason_data):
actions = getattr(alarm, models.Alarm.ALARM_ACTIONS_MAP[alarm.state])
if not actions:
LOG.debug('alarm %(alarm_id)s has no action configured '
'for state transition from %(previous)s to '
'state %(state)s, skipping the notification.',
{'alarm_id': alarm.alarm_id,
'previous': previous,
'state': alarm.state})
return
payload = {'actions': actions,
'alarm_id': alarm.alarm_id,
'alarm_name': alarm.name,
'severity': alarm.severity,
'previous': previous,
'current': alarm.state,
'reason': six.text_type(reason),
'reason_data': reason_data}
self.notifier.sample({}, 'alarm.update', payload)
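# Illustrative payload (all field values hypothetical) as published on
# the 'alarm.update' notification topic by notify() above:
#
#   {'actions': ['log://'],
#    'alarm_id': 'alarm-uuid',
#    'alarm_name': 'cpu_high',
#    'severity': 'low',
#    'previous': 'ok',
#    'current': 'alarm',
#    'reason': 'threshold crossed',
#    'reason_data': {}}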

View File

@@ -1,92 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2013-2017 Red Hat, Inc
# Copyright 2012-2015 eNovance <licensing@enovance.com>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from keystoneauth1 import loading as ka_loading
from oslo_config import cfg
from oslo_db import options as db_options
import oslo_i18n
from oslo_log import log
from oslo_policy import opts as policy_opts
from aodh.conf import defaults
from aodh import keystone_client
from aodh import messaging
OPTS = [
cfg.IntOpt('http_timeout',
default=600,
help='Timeout seconds for HTTP requests. Set it to None to '
'disable timeout.'),
cfg.IntOpt('evaluation_interval',
default=60,
help='Period of evaluation cycle, should'
' be >= the configured pipeline interval for'
' collection of underlying meters.'),
]
EVALUATOR_OPTS = [
cfg.IntOpt('workers',
default=1,
min=1,
help='Number of workers for evaluator service. '
'Default value is 1.')
]
NOTIFIER_OPTS = [
cfg.IntOpt('workers',
default=1,
min=1,
help='Number of workers for notifier service. '
'Default value is 1.')
]
LISTENER_OPTS = [
cfg.IntOpt('workers',
default=1,
min=1,
help='Number of workers for listener service. '
'Default value is 1.')
]
def prepare_service(argv=None, config_files=None):
conf = cfg.ConfigOpts()
oslo_i18n.enable_lazy()
log.register_options(conf)
log_levels = (conf.default_log_levels +
['futurist=INFO', 'keystoneclient=INFO'])
log.set_defaults(default_log_levels=log_levels)
defaults.set_cors_middleware_defaults()
db_options.set_defaults(conf)
policy_opts.set_defaults(conf, policy_file=os.path.abspath(
os.path.join(os.path.dirname(__file__), "api", "policy.json")))
from aodh import opts
# Register our own Aodh options
for group, options in opts.list_opts():
conf.register_opts(list(options),
group=None if group == "DEFAULT" else group)
keystone_client.register_keystoneauth_opts(conf)
conf(argv, project='aodh', validate_default_values=True,
default_config_files=config_files)
ka_loading.load_auth_from_conf_options(conf, "service_credentials")
log.setup(conf, 'aodh')
messaging.setup()
return conf
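# Illustrative usage sketch (argv and config file path hypothetical):
#
#   conf = prepare_service(argv=[], config_files=['/etc/aodh/aodh.conf'])
#   assert conf.evaluation_interval == 60   # default from OPTS above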

View File

@@ -1,139 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage backend management
"""
import datetime
from oslo_config import cfg
from oslo_log import log
from oslo_utils import timeutils
import six.moves.urllib.parse as urlparse
from stevedore import driver
import tenacity
_NAMESPACE = 'aodh.storage'
LOG = log.getLogger(__name__)
OPTS = [
cfg.IntOpt('alarm_history_time_to_live',
default=-1,
help=("Number of seconds that alarm histories are kept "
"in the database for (<= 0 means forever).")),
]
class StorageBadVersion(Exception):
"""Error raised when the storage backend version is not good enough."""
class AlarmNotFound(Exception):
"""Error raised when the needed resource not found."""
def __init__(self, alarm_id):
self.alarm_id = alarm_id
super(AlarmNotFound, self).__init__("Alarm %s not found" % alarm_id)
class InvalidMarker(Exception):
"""Invalid pagination marker parameters"""
def get_connection_from_config(conf):
retries = conf.database.max_retries
url = conf.database.connection
connection_scheme = urlparse.urlparse(url).scheme
LOG.debug('looking for %(name)r driver in %(namespace)r',
{'name': connection_scheme, 'namespace': _NAMESPACE})
mgr = driver.DriverManager(_NAMESPACE, connection_scheme)
@tenacity.retry(
wait=tenacity.wait_fixed(conf.database.retry_interval),
stop=tenacity.stop_after_attempt(retries if retries >= 0 else 5),
reraise=True)
def _get_connection():
"""Return an open connection to the database."""
return mgr.driver(conf, url)
return _get_connection()
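# Illustrative usage sketch: with [database]/connection = log:// the
# 'log' driver is loaded from the 'aodh.storage' namespace; connection
# attempts are retried every retry_interval seconds, up to max_retries
# attempts (or 5 attempts when max_retries is negative).
#
#   conn = get_connection_from_config(conf)
#   conn.upgrade()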
class SampleFilter(object):
"""Holds the properties for building a query from a meter/sample filter.
:param user: The sample owner.
:param project: The sample project.
:param start_timestamp: Earliest time point in the request.
:param start_timestamp_op: Earliest timestamp operation in the request.
:param end_timestamp: Latest time point in the request.
:param end_timestamp_op: Latest timestamp operation in the request.
:param resource: Optional filter for resource id.
:param meter: Optional filter for meter type using the meter name.
:param source: Optional source filter.
:param message_id: Optional sample_id filter.
:param metaquery: Optional filter on the metadata
"""
def __init__(self, user=None, project=None,
start_timestamp=None, start_timestamp_op=None,
end_timestamp=None, end_timestamp_op=None,
resource=None, meter=None,
source=None, message_id=None,
metaquery=None):
self.user = user
self.project = project
self.start_timestamp = self.sanitize_timestamp(start_timestamp)
self.start_timestamp_op = start_timestamp_op
self.end_timestamp = self.sanitize_timestamp(end_timestamp)
self.end_timestamp_op = end_timestamp_op
self.resource = resource
self.meter = meter
self.source = source
self.metaquery = metaquery or {}
self.message_id = message_id
@staticmethod
def sanitize_timestamp(timestamp):
"""Return a naive utc datetime object."""
if not timestamp:
return timestamp
if not isinstance(timestamp, datetime.datetime):
timestamp = timeutils.parse_isotime(timestamp)
return timeutils.normalize_time(timestamp)
def __repr__(self):
return ("<SampleFilter(user: %s,"
" project: %s,"
" start_timestamp: %s,"
" start_timestamp_op: %s,"
" end_timestamp: %s,"
" end_timestamp_op: %s,"
" resource: %s,"
" meter: %s,"
" source: %s,"
" metaquery: %s,"
" message_id: %s)>" %
(self.user,
self.project,
self.start_timestamp,
self.start_timestamp_op,
self.end_timestamp,
self.end_timestamp_op,
self.resource,
self.meter,
self.source,
self.metaquery,
self.message_id))
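# Illustrative sketch (timestamp value hypothetical): ISO 8601 strings
# are parsed and normalized to naive UTC datetimes by
# sanitize_timestamp() above.
#
#   >>> SampleFilter.sanitize_timestamp('2015-07-28T17:38:37+02:00')
#   datetime.datetime(2015, 7, 28, 15, 38, 37)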

View File

@@ -1,221 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base classes for storage engines
"""
import copy
import inspect
import six
import aodh
def update_nested(original_dict, updates):
"""Updates the leaf nodes in a nest dict.
Updates occur without replacing entire sub-dicts.
"""
dict_to_update = copy.deepcopy(original_dict)
for key, value in six.iteritems(updates):
if isinstance(value, dict):
sub_dict = update_nested(dict_to_update.get(key, {}), value)
dict_to_update[key] = sub_dict
else:
dict_to_update[key] = updates[key]
return dict_to_update
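# Illustrative sketch of the merge semantics of update_nested():
#
#   >>> original = {'alarms': {'query': {'simple': False,
#   ...                                  'complex': False}}}
#   >>> update_nested(original, {'alarms': {'query': {'simple': True}}})
#   {'alarms': {'query': {'simple': True, 'complex': False}}}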
class Model(object):
"""Base class for storage API models."""
def __init__(self, **kwds):
self.fields = list(kwds)
for k, v in six.iteritems(kwds):
setattr(self, k, v)
def as_dict(self):
d = {}
for f in self.fields:
v = getattr(self, f)
if isinstance(v, Model):
v = v.as_dict()
elif isinstance(v, list) and v and isinstance(v[0], Model):
v = [sub.as_dict() for sub in v]
d[f] = v
return d
def __eq__(self, other):
return self.as_dict() == other.as_dict()
def __ne__(self, other):
return not self.__eq__(other)
@classmethod
def get_field_names(cls):
fields = inspect.getargspec(cls.__init__)[0]
return set(fields) - set(["self"])
class Connection(object):
"""Base class for alarm storage system connections."""
# A dictionary representing the capabilities of this driver.
CAPABILITIES = {
'alarms': {'query': {'simple': False,
'complex': False},
'history': {'query': {'simple': False,
'complex': False}}},
}
STORAGE_CAPABILITIES = {
'storage': {'production_ready': False},
}
def __init__(self, conf, url):
pass
@staticmethod
def upgrade():
"""Migrate the database to `version` or the most recent version."""
@staticmethod
def get_alarms(name=None, user=None, state=None, meter=None,
project=None, enabled=None, alarm_id=None,
alarm_type=None, severity=None, exclude=None,
pagination=None):
"""Yields a lists of alarms that match filters.
:param name: Optional name for alarm.
:param user: Optional ID for user that owns the resource.
:param state: Optional string for alarm state.
:param meter: Optional string for alarms associated with meter.
:param project: Optional ID for project that owns the resource.
:param enabled: Optional boolean to list disabled alarms.
:param alarm_id: Optional alarm_id to return one alarm.
:param alarm_type: Optional alarm type.
:param severity: Optional alarm severity.
:param exclude: Optional dict for inequality constraint.
:param pagination: Pagination parameters.
"""
raise aodh.NotImplementedError('Alarms not implemented')
@staticmethod
def create_alarm(alarm):
"""Create an alarm. Returns the alarm as created.
:param alarm: The alarm to create.
"""
raise aodh.NotImplementedError('Alarms not implemented')
@staticmethod
def update_alarm(alarm):
"""Update alarm."""
raise aodh.NotImplementedError('Alarms not implemented')
@staticmethod
def delete_alarm(alarm_id):
"""Delete an alarm and its history data."""
raise aodh.NotImplementedError('Alarms not implemented')
@staticmethod
def get_alarm_changes(alarm_id, on_behalf_of,
user=None, project=None, alarm_type=None,
severity=None, start_timestamp=None,
start_timestamp_op=None, end_timestamp=None,
end_timestamp_op=None, pagination=None):
"""Yields list of AlarmChanges describing alarm history
Changes are always sorted in reverse order of occurrence, given
the importance of currency.
Segregation for non-administrative users is done on the basis
of the on_behalf_of parameter. This allows such users to have
visibility on both the changes initiated by themselves directly
(generally creation, rule changes, or deletion) and also on those
changes initiated on their behalf by the alarming service (state
transitions after alarm thresholds are crossed).
:param alarm_id: ID of alarm to return changes for
:param on_behalf_of: ID of tenant to scope changes query (None for
administrative user, indicating all projects)
:param user: Optional ID of user to return changes for
:param project: Optional ID of project to return changes for
:param alarm_type: Optional change type
:param severity: Optional change severity
:param start_timestamp: Optional modified timestamp start range
:param start_timestamp_op: Optional timestamp start range operation
:param end_timestamp: Optional modified timestamp end range
:param end_timestamp_op: Optional timestamp end range operation
:param pagination: Pagination parameters.
"""
raise aodh.NotImplementedError('Alarm history not implemented')
@staticmethod
def record_alarm_change(alarm_change):
"""Record alarm change event."""
raise aodh.NotImplementedError('Alarm history not implemented')
@staticmethod
def clear():
"""Clear database."""
@staticmethod
def query_alarms(filter_expr=None, orderby=None, limit=None):
"""Return an iterable of model.Alarm objects.
:param filter_expr: Filter expression for query.
:param orderby: List of field name and direction pairs for order by.
:param limit: Maximum number of results to return.
"""
raise aodh.NotImplementedError('Complex query for alarms '
'is not implemented.')
@staticmethod
def query_alarm_history(filter_expr=None, orderby=None, limit=None):
"""Return an iterable of model.AlarmChange objects.
:param filter_expr: Filter expression for query.
:param orderby: List of field name and direction pairs for order by.
:param limit: Maximum number of results to return.
"""
raise aodh.NotImplementedError('Complex query for alarms '
'history is not implemented.')
@classmethod
def get_capabilities(cls):
"""Return an dictionary with the capabilities of each driver."""
return cls.CAPABILITIES
@classmethod
def get_storage_capabilities(cls):
"""Return a dictionary representing the performance capabilities.
This is needed to evaluate the performance of each driver.
"""
return cls.STORAGE_CAPABILITIES
@staticmethod
def clear_expired_alarm_history_data(alarm_history_ttl):
"""Clear expired alarm history data from the backend storage system.
Clearing occurs according to the time-to-live.
:param alarm_history_ttl: Number of seconds to keep alarm history
records for.
"""
raise aodh.NotImplementedError('Clearing alarm history '
'not implemented')

View File

@@ -1,68 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Simple logging storage backend.
"""
from oslo_log import log
from aodh.storage import base
LOG = log.getLogger(__name__)
class Connection(base.Connection):
"""Log the data."""
@staticmethod
def upgrade():
pass
@staticmethod
def clear():
pass
@staticmethod
def get_alarms(name=None, user=None, state=None, meter=None,
project=None, enabled=None, alarm_id=None,
alarm_type=None, severity=None, exclude=None,
pagination=None):
"""Yields a lists of alarms that match filters."""
return []
@staticmethod
def create_alarm(alarm):
"""Create alarm."""
return alarm
@staticmethod
def update_alarm(alarm):
"""Update alarm."""
return alarm
@staticmethod
def delete_alarm(alarm_id):
"""Delete an alarm and its history data."""
@staticmethod
def clear_expired_alarm_history_data(alarm_history_ttl):
"""Clear expired alarm history data from the backend storage system.
Clearing occurs according to the time-to-live.
:param alarm_history_ttl: Number of seconds to keep alarm history
records for.
"""
LOG.info('Dropping alarm history data with TTL %d',
alarm_history_ttl)

View File

@@ -1,407 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""SQLAlchemy storage backend."""
from __future__ import absolute_import
import copy
import datetime
import os.path
from alembic import command
from alembic import config
from alembic import migration
from oslo_db.sqlalchemy import session as db_session
from oslo_db.sqlalchemy import utils as oslo_sql_utils
from oslo_log import log
from oslo_utils import timeutils
import six
from sqlalchemy import asc
from sqlalchemy import desc
from sqlalchemy.engine import url as sqlalchemy_url
from sqlalchemy import func
from sqlalchemy.orm import exc
import aodh
from aodh import storage
from aodh.storage import base
from aodh.storage import models as alarm_api_models
from aodh.storage.sqlalchemy import models
from aodh.storage.sqlalchemy import utils as sql_utils
LOG = log.getLogger(__name__)
AVAILABLE_CAPABILITIES = {
'alarms': {'query': {'simple': True,
'complex': True},
'history': {'query': {'simple': True,
'complex': True}}},
}
AVAILABLE_STORAGE_CAPABILITIES = {
'storage': {'production_ready': True},
}
class Connection(base.Connection):
"""Put the data into a SQLAlchemy database. """
CAPABILITIES = base.update_nested(base.Connection.CAPABILITIES,
AVAILABLE_CAPABILITIES)
STORAGE_CAPABILITIES = base.update_nested(
base.Connection.STORAGE_CAPABILITIES,
AVAILABLE_STORAGE_CAPABILITIES,
)
def __init__(self, conf, url):
# Set max_retries to 0, since oslo.db may in some failure cases retry
# making the db connection up to max_retries ^ 2 times, and db
# reconnection is already implemented in the
# storage.__init__.get_connection_from_config function
options = dict(conf.database.items())
options['max_retries'] = 0
# oslo.db doesn't support options defined by Aodh
for opt in storage.OPTS:
options.pop(opt.name, None)
self._engine_facade = db_session.EngineFacade(self.dress_url(url),
**options)
self.conf = conf
@staticmethod
def dress_url(url):
# If no explicit driver has been set, we default to pymysql
if url.startswith("mysql://"):
url = sqlalchemy_url.make_url(url)
url.drivername = "mysql+pymysql"
return str(url)
return url
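# Illustrative example: dress_url('mysql://user:pass@host/aodh') returns
# 'mysql+pymysql://user:pass@host/aodh'; URLs with any other scheme are
# returned unchanged.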
def disconnect(self):
self._engine_facade.get_engine().dispose()
def _get_alembic_config(self):
cfg = config.Config(
"%s/sqlalchemy/alembic/alembic.ini" % os.path.dirname(__file__))
cfg.set_main_option('sqlalchemy.url',
self.conf.database.connection)
return cfg
def upgrade(self, nocreate=False):
cfg = self._get_alembic_config()
cfg.conf = self.conf
if nocreate:
command.upgrade(cfg, "head")
else:
engine = self._engine_facade.get_engine()
ctxt = migration.MigrationContext.configure(engine.connect())
current_version = ctxt.get_current_revision()
if current_version is None:
models.Base.metadata.create_all(engine, checkfirst=False)
command.stamp(cfg, "head")
else:
command.upgrade(cfg, "head")
def clear(self):
engine = self._engine_facade.get_engine()
for table in reversed(models.Base.metadata.sorted_tables):
engine.execute(table.delete())
engine.dispose()
def _retrieve_data(self, filter_expr, orderby, limit, table):
if limit == 0:
return []
session = self._engine_facade.get_session()
engine = self._engine_facade.get_engine()
query = session.query(table)
transformer = sql_utils.QueryTransformer(table, query,
dialect=engine.dialect.name)
if filter_expr is not None:
transformer.apply_filter(filter_expr)
transformer.apply_options(orderby,
limit)
retrieve = {models.Alarm: self._retrieve_alarms,
models.AlarmChange: self._retrieve_alarm_history}
return retrieve[table](transformer.get_query())
@staticmethod
def _row_to_alarm_model(row):
return alarm_api_models.Alarm(alarm_id=row.alarm_id,
enabled=row.enabled,
type=row.type,
name=row.name,
description=row.description,
timestamp=row.timestamp,
user_id=row.user_id,
project_id=row.project_id,
state=row.state,
state_timestamp=row.state_timestamp,
state_reason=row.state_reason,
ok_actions=row.ok_actions,
alarm_actions=row.alarm_actions,
insufficient_data_actions=(
row.insufficient_data_actions),
rule=row.rule,
time_constraints=row.time_constraints,
repeat_actions=row.repeat_actions,
severity=row.severity)
def _retrieve_alarms(self, query):
return (self._row_to_alarm_model(x) for x in query.all())
@staticmethod
def _get_pagination_query(session, query, pagination, api_model, model):
if not pagination.get('sort'):
pagination['sort'] = api_model.DEFAULT_SORT
marker = None
if pagination.get('marker'):
key_attr = getattr(model, api_model.PRIMARY_KEY)
marker_query = copy.copy(query)
marker_query = marker_query.filter(
key_attr == pagination['marker'])
try:
marker = marker_query.limit(1).one()
except exc.NoResultFound:
raise storage.InvalidMarker(
'Marker %s not found.' % pagination['marker'])
limit = pagination.get('limit')
# we sort by "severity" by its semantic than its alphabetical
# order when "severity" specified in sorts.
for sort_key, sort_dir in pagination['sort'][::-1]:
if sort_key == 'severity':
engine = session.connection()
if engine.dialect.name != "mysql":
raise aodh.NotImplementedError
sort_dir_func = {'asc': asc, 'desc': desc}[sort_dir]
query = query.order_by(sort_dir_func(
func.field(getattr(model, sort_key), 'low',
'moderate', 'critical')))
pagination['sort'].remove((sort_key, sort_dir))
sort_keys = [s[0] for s in pagination['sort']]
sort_dirs = [s[1] for s in pagination['sort']]
return oslo_sql_utils.paginate_query(
query, model, limit, sort_keys, sort_dirs=sort_dirs, marker=marker)
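# Illustrative pagination mapping (marker value hypothetical) as
# accepted by _get_pagination_query() above:
#
#   pagination = {'sort': [('severity', 'asc'), ('timestamp', 'desc')],
#                 'marker': 'alarm-uuid',
#                 'limit': 100}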
def get_alarms(self, name=None, user=None, state=None, meter=None,
project=None, enabled=None, alarm_id=None,
alarm_type=None, severity=None, exclude=None,
pagination=None):
"""Yields a lists of alarms that match filters.
:param name: Optional name for alarm.
:param user: Optional ID for user that owns the resource.
:param state: Optional string for alarm state.
:param meter: Optional string for alarms associated with meter.
:param project: Optional ID for project that owns the resource.
:param enabled: Optional boolean to list disabled alarms.
:param alarm_id: Optional alarm_id to return one alarm.
:param alarm_type: Optional alarm type.
:param severity: Optional alarm severity.
:param exclude: Optional dict for inequality constraint.
:param pagination: Pagination query parameters.
"""
pagination = pagination or {}
session = self._engine_facade.get_session()
query = session.query(models.Alarm)
if name is not None:
query = query.filter(models.Alarm.name == name)
if enabled is not None:
query = query.filter(models.Alarm.enabled == enabled)
if user is not None:
query = query.filter(models.Alarm.user_id == user)
if project is not None:
query = query.filter(models.Alarm.project_id == project)
if alarm_id is not None:
query = query.filter(models.Alarm.alarm_id == alarm_id)
if state is not None:
query = query.filter(models.Alarm.state == state)
if alarm_type is not None:
query = query.filter(models.Alarm.type == alarm_type)
if severity is not None:
query = query.filter(models.Alarm.severity == severity)
if exclude is not None:
for key, value in six.iteritems(exclude):
query = query.filter(getattr(models.Alarm, key) != value)
query = self._get_pagination_query(
session, query, pagination, alarm_api_models.Alarm, models.Alarm)
alarms = self._retrieve_alarms(query)
# TODO(cmart): improve this by using sqlalchemy.func factory
if meter is not None:
alarms = filter(lambda row:
row.rule.get('meter_name', None) == meter,
alarms)
return alarms
def create_alarm(self, alarm):
"""Create an alarm.
:param alarm: The alarm to create.
"""
session = self._engine_facade.get_session()
with session.begin():
alarm_row = models.Alarm(alarm_id=alarm.alarm_id)
alarm_row.update(alarm.as_dict())
session.add(alarm_row)
return self._row_to_alarm_model(alarm_row)
def update_alarm(self, alarm):
"""Update an alarm.
:param alarm: the new Alarm to update
"""
session = self._engine_facade.get_session()
with session.begin():
count = session.query(models.Alarm).filter(
models.Alarm.alarm_id == alarm.alarm_id).update(
alarm.as_dict())
if not count:
raise storage.AlarmNotFound(alarm.alarm_id)
return alarm
def delete_alarm(self, alarm_id):
"""Delete an alarm and its history data.
:param alarm_id: ID of the alarm to delete
"""
session = self._engine_facade.get_session()
with session.begin():
session.query(models.Alarm).filter(
models.Alarm.alarm_id == alarm_id).delete()
# FIXME(liusheng): we should use delete cascade
session.query(models.AlarmChange).filter(
models.AlarmChange.alarm_id == alarm_id).delete()
@staticmethod
def _row_to_alarm_change_model(row):
return alarm_api_models.AlarmChange(event_id=row.event_id,
alarm_id=row.alarm_id,
type=row.type,
detail=row.detail,
user_id=row.user_id,
project_id=row.project_id,
on_behalf_of=row.on_behalf_of,
timestamp=row.timestamp)
def query_alarms(self, filter_expr=None, orderby=None, limit=None):
"""Yields a lists of alarms that match filter."""
return self._retrieve_data(filter_expr, orderby, limit, models.Alarm)
def _retrieve_alarm_history(self, query):
return (self._row_to_alarm_change_model(x) for x in query.all())
def query_alarm_history(self, filter_expr=None, orderby=None, limit=None):
"""Return an iterable of model.AlarmChange objects."""
return self._retrieve_data(filter_expr,
orderby,
limit,
models.AlarmChange)
def get_alarm_changes(self, alarm_id, on_behalf_of,
user=None, project=None, alarm_type=None,
severity=None, start_timestamp=None,
start_timestamp_op=None, end_timestamp=None,
end_timestamp_op=None, pagination=None):
"""Yields list of AlarmChanges describing alarm history
Changes are always sorted in reverse order of occurrence, given
the importance of currency.
Segregation for non-administrative users is done on the basis
of the on_behalf_of parameter. This allows such users to have
visibility on both the changes initiated by themselves directly
(generally creation, rule changes, or deletion) and also on those
changes initiated on their behalf by the alarming service (state
transitions after alarm thresholds are crossed).
:param alarm_id: ID of alarm to return changes for
:param on_behalf_of: ID of tenant to scope changes query (None for
administrative user, indicating all projects)
:param user: Optional ID of user to return changes for
:param project: Optional ID of project to return changes for
:param alarm_type: Optional change type
:param severity: Optional alarm severity
:param start_timestamp: Optional modified timestamp start range
:param start_timestamp_op: Optional timestamp start range operation
:param end_timestamp: Optional modified timestamp end range
:param end_timestamp_op: Optional timestamp end range operation
:param pagination: Pagination query parameters.
"""
pagination = pagination or {}
session = self._engine_facade.get_session()
query = session.query(models.AlarmChange)
query = query.filter(models.AlarmChange.alarm_id == alarm_id)
if on_behalf_of is not None:
query = query.filter(
models.AlarmChange.on_behalf_of == on_behalf_of)
if user is not None:
query = query.filter(models.AlarmChange.user_id == user)
if project is not None:
query = query.filter(models.AlarmChange.project_id == project)
if alarm_type is not None:
query = query.filter(models.AlarmChange.type == alarm_type)
if severity is not None:
query = query.filter(models.AlarmChange.severity == severity)
if start_timestamp:
if start_timestamp_op == 'gt':
query = query.filter(
models.AlarmChange.timestamp > start_timestamp)
else:
query = query.filter(
models.AlarmChange.timestamp >= start_timestamp)
if end_timestamp:
if end_timestamp_op == 'le':
query = query.filter(
models.AlarmChange.timestamp <= end_timestamp)
else:
query = query.filter(
models.AlarmChange.timestamp < end_timestamp)
query = self._get_pagination_query(
session, query, pagination, alarm_api_models.AlarmChange,
models.AlarmChange)
return self._retrieve_alarm_history(query)
def record_alarm_change(self, alarm_change):
"""Record alarm change event."""
session = self._engine_facade.get_session()
with session.begin():
alarm_change_row = models.AlarmChange(
event_id=alarm_change['event_id'])
alarm_change_row.update(alarm_change)
session.add(alarm_change_row)
def clear_expired_alarm_history_data(self, alarm_history_ttl):
"""Clear expired alarm history data from the backend storage system.
Clearing occurs according to the time-to-live.
:param alarm_history_ttl: Number of seconds to keep alarm history
records for.
"""
session = self._engine_facade.get_session()
with session.begin():
valid_start = (timeutils.utcnow() -
datetime.timedelta(seconds=alarm_history_ttl))
deleted_rows = (session.query(models.AlarmChange)
.filter(models.AlarmChange.timestamp < valid_start)
.delete())
LOG.info("%d alarm histories are removed from database",
deleted_rows)

View File

@@ -1,150 +0,0 @@
#
# Copyright 2013 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Model classes for use in the storage API.
"""
import datetime
from aodh.i18n import _
from aodh.storage import base
class Alarm(base.Model):
ALARM_INSUFFICIENT_DATA = 'insufficient data'
ALARM_OK = 'ok'
ALARM_ALARM = 'alarm'
ALARM_ACTIONS_MAP = {
ALARM_INSUFFICIENT_DATA: 'insufficient_data_actions',
ALARM_OK: 'ok_actions',
ALARM_ALARM: 'alarm_actions',
}
ALARM_LEVEL_LOW = 'low'
ALARM_LEVEL_MODERATE = 'moderate'
ALARM_LEVEL_CRITICAL = 'critical'
SUPPORT_SORT_KEYS = (
'alarm_id', 'enabled', 'name', 'type', 'severity', 'timestamp',
'user_id', 'project_id', 'state', 'repeat_actions', 'state_timestamp')
DEFAULT_SORT = [('timestamp', 'desc')]
PRIMARY_KEY = 'alarm_id'
"""
An alarm to monitor.
:param alarm_id: UUID of the alarm
:param type: type of the alarm
:param name: The Alarm name
:param description: User friendly description of the alarm
:param enabled: Is the alarm enabled
:param state: Alarm state (ok/alarm/insufficient data)
:param state_reason: Alarm state reason
:param rule: A rule that defines when the alarm fires
:param user_id: the owner/creator of the alarm
:param project_id: the project_id of the creator
:param evaluation_periods: the number of periods
:param period: the time period in seconds
:param time_constraints: the list of the alarm's time constraints, if any
:param timestamp: the timestamp when the alarm was last updated
:param state_timestamp: the timestamp of the last state change
:param ok_actions: the list of webhooks to call when entering the ok state
:param alarm_actions: the list of webhooks to call when entering the
alarm state
:param insufficient_data_actions: the list of webhooks to call when
entering the insufficient data state
:param repeat_actions: whether the actions should be triggered on each
alarm evaluation.
:param severity: Alarm level (low/moderate/critical)
"""
def __init__(self, alarm_id, type, enabled, name, description,
timestamp, user_id, project_id, state, state_timestamp,
state_reason, ok_actions, alarm_actions,
insufficient_data_actions, repeat_actions, rule,
time_constraints, severity=None):
if not isinstance(timestamp, datetime.datetime):
raise TypeError(_("timestamp should be datetime object"))
if not isinstance(state_timestamp, datetime.datetime):
raise TypeError(_("state_timestamp should be datetime object"))
base.Model.__init__(
self,
alarm_id=alarm_id,
type=type,
enabled=enabled,
name=name,
description=description,
timestamp=timestamp,
user_id=user_id,
project_id=project_id,
state=state,
state_timestamp=state_timestamp,
state_reason=state_reason,
ok_actions=ok_actions,
alarm_actions=alarm_actions,
insufficient_data_actions=insufficient_data_actions,
repeat_actions=repeat_actions,
rule=rule,
time_constraints=time_constraints,
severity=severity)
class AlarmChange(base.Model):
"""Record of an alarm change.
:param event_id: UUID of the change event
:param alarm_id: UUID of the alarm
:param type: The type of change
:param severity: The severity of alarm
:param detail: JSON fragment describing change
:param user_id: the user ID of the initiating identity
:param project_id: the project ID of the initiating identity
:param on_behalf_of: the tenant on behalf of which the change
is being made
:param timestamp: the timestamp of the change
"""
CREATION = 'creation'
RULE_CHANGE = 'rule change'
STATE_TRANSITION = 'state transition'
DELETION = 'deletion'
SUPPORT_SORT_KEYS = (
'event_id', 'alarm_id', 'on_behalf_of', 'project_id', 'user_id',
'type', 'timestamp', 'severity')
DEFAULT_SORT = [('timestamp', 'desc')]
PRIMARY_KEY = 'event_id'
def __init__(self,
event_id,
alarm_id,
type,
detail,
user_id,
project_id,
on_behalf_of,
severity=None,
timestamp=None
):
base.Model.__init__(
self,
event_id=event_id,
alarm_id=alarm_id,
type=type,
severity=severity,
detail=detail,
user_id=user_id,
project_id=project_id,
on_behalf_of=on_behalf_of,
timestamp=timestamp)

View File

@@ -1,37 +0,0 @@
[alembic]
script_location = aodh.storage.sqlalchemy:alembic
sqlalchemy.url =
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = WARN
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

View File

@@ -1,92 +0,0 @@
#
# Copyright 2015 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import with_statement
from alembic import context
from logging.config import fileConfig
from aodh.storage import impl_sqlalchemy
from aodh.storage.sqlalchemy import models
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = models.Base.metadata
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
conf = config.conf
context.configure(url=conf.database.connection,
target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
conf = config.conf
conn = impl_sqlalchemy.Connection(conf, conf.database.connection)
connectable = conn._engine_facade.get_engine()
with connectable.connect() as connection:
context.configure(
connection=connection,
target_metadata=target_metadata
)
with context.begin_transaction():
context.run_migrations()
conn.disconnect()
if not hasattr(config, "conf"):
from aodh import service
config.conf = service.prepare_service([])
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

View File

@@ -1,35 +0,0 @@
# Copyright ${create_date.year} OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}

View File

@@ -1,107 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""initial base
Revision ID: 12fe8fac9fe4
Revises:
Create Date: 2015-07-28 17:38:37.022899
"""
# revision identifiers, used by Alembic.
revision = '12fe8fac9fe4'
down_revision = None
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
from sqlalchemy import types
import aodh.storage.sqlalchemy.models
class PreciseTimestamp(types.TypeDecorator):
"""Represents a timestamp precise to the microsecond."""
impl = sa.DateTime
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(
types.DECIMAL(precision=20,
scale=6,
asdecimal=True))
return dialect.type_descriptor(self.impl)
def upgrade():
op.create_table(
'alarm_history',
sa.Column('event_id', sa.String(length=128), nullable=False),
sa.Column('alarm_id', sa.String(length=128), nullable=True),
sa.Column('on_behalf_of', sa.String(length=128), nullable=True),
sa.Column('project_id', sa.String(length=128), nullable=True),
sa.Column('user_id', sa.String(length=128), nullable=True),
sa.Column('type', sa.String(length=20), nullable=True),
sa.Column('detail', sa.Text(), nullable=True),
sa.Column('timestamp',
PreciseTimestamp(),
nullable=True),
sa.PrimaryKeyConstraint('event_id')
)
op.create_index(
'ix_alarm_history_alarm_id', 'alarm_history', ['alarm_id'],
unique=False)
op.create_table(
'alarm',
sa.Column('alarm_id', sa.String(length=128), nullable=False),
sa.Column('enabled', sa.Boolean(), nullable=True),
sa.Column('name', sa.Text(), nullable=True),
sa.Column('type', sa.String(length=50), nullable=True),
sa.Column('severity', sa.String(length=50), nullable=True),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('timestamp',
PreciseTimestamp(),
nullable=True),
sa.Column('user_id', sa.String(length=128), nullable=True),
sa.Column('project_id', sa.String(length=128), nullable=True),
sa.Column('state', sa.String(length=255), nullable=True),
sa.Column('state_timestamp',
PreciseTimestamp(),
nullable=True),
sa.Column('ok_actions',
aodh.storage.sqlalchemy.models.JSONEncodedDict(),
nullable=True),
sa.Column('alarm_actions',
aodh.storage.sqlalchemy.models.JSONEncodedDict(),
nullable=True),
sa.Column('insufficient_data_actions',
aodh.storage.sqlalchemy.models.JSONEncodedDict(),
nullable=True),
sa.Column('repeat_actions', sa.Boolean(), nullable=True),
sa.Column('rule',
aodh.storage.sqlalchemy.models.JSONEncodedDict(),
nullable=True),
sa.Column('time_constraints',
aodh.storage.sqlalchemy.models.JSONEncodedDict(),
nullable=True),
sa.PrimaryKeyConstraint('alarm_id')
)
op.create_index(
'ix_alarm_project_id', 'alarm', ['project_id'], unique=False)
op.create_index(
'ix_alarm_user_id', 'alarm', ['user_id'], unique=False)

View File

@@ -1,68 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""precisetimestamp_to_datetime
Revision ID: 367aadf5485f
Revises: f8c31b1ffe11
Create Date: 2016-09-19 16:43:34.379029
"""
# revision identifiers, used by Alembic.
revision = '367aadf5485f'
down_revision = 'f8c31b1ffe11'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
from sqlalchemy import func
from aodh.storage.sqlalchemy import models
def upgrade():
bind = op.get_bind()
if bind and bind.engine.name == "mysql":
# NOTE(jd) That crappy engine that is MySQL does not support "ALTER
# TABLE … USING …", so we need to copy everything and convert…
for table_name, column_name in (("alarm", "timestamp"),
("alarm", "state_timestamp"),
("alarm_history", "timestamp")):
existing_type = sa.types.DECIMAL(
precision=20, scale=6, asdecimal=True)
existing_col = sa.Column(
column_name,
existing_type,
nullable=True)
temp_col = sa.Column(
column_name + "_ts",
models.TimestampUTC(),
nullable=True)
op.add_column(table_name, temp_col)
t = sa.sql.table(table_name, existing_col, temp_col)
op.execute(t.update().values(
**{column_name + "_ts": func.from_unixtime(existing_col)}))
op.drop_column(table_name, column_name)
op.alter_column(table_name,
column_name + "_ts",
nullable=True,
type_=models.TimestampUTC(),
existing_nullable=True,
existing_type=existing_type,
new_column_name=column_name)

View File

@@ -1,37 +0,0 @@
# -*- encoding: utf-8 -*-
#
# Copyright 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add_reason_column
Revision ID: 6ae0d05d9451
Revises: 367aadf5485f
Create Date: 2017-06-05 16:42:42.379029
"""
# revision identifiers, used by Alembic.
revision = '6ae0d05d9451'
down_revision = '367aadf5485f'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('alarm', sa.Column('state_reason', sa.Text, nullable=True))

View File

@@ -1,36 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add severity to alarm history
Revision ID: bb07adac380
Revises: 12fe8fac9fe4
Create Date: 2015-08-06 15:15:43.717068
"""
# revision identifiers, used by Alembic.
revision = 'bb07adac380'
down_revision = '12fe8fac9fe4'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('alarm_history',
sa.Column('severity', sa.String(length=50), nullable=True))

View File

@@ -1,37 +0,0 @@
# Copyright 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add index for enabled and type
Revision ID: f8c31b1ffe11
Revises: bb07adac380
Create Date: 2016-06-02 19:39:42.495020
"""
# revision identifiers, used by Alembic.
revision = 'f8c31b1ffe11'
down_revision = 'bb07adac380'
branch_labels = None
depends_on = None
from alembic import op
def upgrade():
op.create_index(
'ix_alarm_enabled', 'alarm', ['enabled'], unique=False)
op.create_index(
'ix_alarm_type', 'alarm', ['type'], unique=False)

View File

@@ -1,124 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
SQLAlchemy models for aodh data.
"""
import json
from oslo_utils import timeutils
import six
from sqlalchemy import Column, String, Index, Boolean, Text, DateTime
from sqlalchemy.dialects import mysql
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.types import TypeDecorator
class JSONEncodedDict(TypeDecorator):
"""Represents an immutable structure as a json-encoded string."""
impl = Text
@staticmethod
def process_bind_param(value, dialect):
if value is not None:
value = json.dumps(value)
return value
@staticmethod
def process_result_value(value, dialect):
if value is not None:
value = json.loads(value)
return value
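# Illustrative round trip (value hypothetical): {'meter_name': 'cpu_util'}
# is stored as the string '{"meter_name": "cpu_util"}' and decoded back
# to a dict when loaded.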
class TimestampUTC(TypeDecorator):
"""Represents a timestamp precise to the microsecond."""
impl = DateTime
def load_dialect_impl(self, dialect):
if dialect.name == 'mysql':
return dialect.type_descriptor(mysql.DATETIME(fsp=6))
return self.impl
class AodhBase(object):
"""Base class for Aodh Models."""
__table_args__ = {'mysql_charset': "utf8",
'mysql_engine': "InnoDB"}
__table_initialized__ = False
def __setitem__(self, key, value):
setattr(self, key, value)
def __getitem__(self, key):
return getattr(self, key)
def update(self, values):
"""Make the model object behave like a dict."""
for k, v in six.iteritems(values):
setattr(self, k, v)
Base = declarative_base(cls=AodhBase)
class Alarm(Base):
"""Define Alarm data."""
__tablename__ = 'alarm'
__table_args__ = (
Index('ix_alarm_user_id', 'user_id'),
Index('ix_alarm_project_id', 'project_id'),
Index('ix_alarm_enabled', 'enabled'),
Index('ix_alarm_type', 'type'),
)
alarm_id = Column(String(128), primary_key=True)
enabled = Column(Boolean)
name = Column(Text)
type = Column(String(50))
severity = Column(String(50))
description = Column(Text)
timestamp = Column(TimestampUTC, default=lambda: timeutils.utcnow())
user_id = Column(String(128))
project_id = Column(String(128))
state = Column(String(255))
state_reason = Column(Text)
state_timestamp = Column(TimestampUTC,
default=lambda: timeutils.utcnow())
ok_actions = Column(JSONEncodedDict)
alarm_actions = Column(JSONEncodedDict)
insufficient_data_actions = Column(JSONEncodedDict)
repeat_actions = Column(Boolean)
rule = Column(JSONEncodedDict)
time_constraints = Column(JSONEncodedDict)
class AlarmChange(Base):
"""Define AlarmChange data."""
__tablename__ = 'alarm_history'
__table_args__ = (
Index('ix_alarm_history_alarm_id', 'alarm_id'),
)
event_id = Column(String(128), primary_key=True)
alarm_id = Column(String(128))
on_behalf_of = Column(String(128))
project_id = Column(String(128))
user_id = Column(String(128))
type = Column(String(20))
detail = Column(Text)
timestamp = Column(TimestampUTC, default=lambda: timeutils.utcnow())
severity = Column(String(50))
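# Minimal usage sketch (illustrative only, not part of the original module):
# create the schema on an in-memory SQLite engine and persist one Alarm row;
# JSONEncodedDict columns round-trip plain Python values transparently.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Alarm(alarm_id='alarm-1', name='cpu_high', type='threshold',
                  enabled=True, ok_actions=['log://']))
session.commit()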

View File

@@ -1,103 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import operator
from sqlalchemy import and_
from sqlalchemy import asc
from sqlalchemy import desc
from sqlalchemy import func
from sqlalchemy import not_
from sqlalchemy import or_
class QueryTransformer(object):
operators = {"=": operator.eq,
"<": operator.lt,
">": operator.gt,
"<=": operator.le,
"=<": operator.le,
">=": operator.ge,
"=>": operator.ge,
"!=": operator.ne,
"in": lambda field_name, values: field_name.in_(values),
"=~": lambda field, value: field.op("regexp")(value)}
# operators which are different for different dialects
dialect_operators = {'postgresql': {'=~': (lambda field, value:
field.op("~")(value))}}
complex_operators = {"or": or_,
"and": and_,
"not": not_}
ordering_functions = {"asc": asc,
"desc": desc}
def __init__(self, table, query, dialect='mysql'):
self.table = table
self.query = query
self.dialect_name = dialect
def _get_operator(self, op):
return (self.dialect_operators.get(self.dialect_name, {}).get(op)
or self.operators[op])
def _handle_complex_op(self, complex_op, nodes):
op = self.complex_operators[complex_op]
if op == not_:
nodes = [nodes]
element_list = []
for node in nodes:
element = self._transform(node)
element_list.append(element)
return op(*element_list)
def _handle_simple_op(self, simple_op, nodes):
op = self._get_operator(simple_op)
field_name, value = list(nodes.items())[0]
return op(getattr(self.table, field_name), value)
    def _transform(self, sub_tree):
        # Use a local name that does not shadow the imported operator module.
        op_name, nodes = list(sub_tree.items())[0]
        if op_name in self.complex_operators:
            return self._handle_complex_op(op_name, nodes)
        else:
            return self._handle_simple_op(op_name, nodes)
def apply_filter(self, expression_tree):
condition = self._transform(expression_tree)
self.query = self.query.filter(condition)
def apply_options(self, orderby, limit):
self._apply_order_by(orderby)
if limit is not None:
self.query = self.query.limit(limit)
def _apply_order_by(self, orderby):
if orderby is not None:
for field in orderby:
attr, order = list(field.items())[0]
ordering_function = self.ordering_functions[order]
if attr == 'severity':
self.query = self.query.order_by(ordering_function(
func.field(getattr(self.table, attr), 'low',
'moderate', 'critical')))
else:
self.query = self.query.order_by(ordering_function(
getattr(self.table, attr)))
else:
self.query = self.query.order_by(desc(self.table.timestamp))
def get_query(self):
return self.query
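# Usage sketch (session and model names are assumed for illustration):
# filter alarms matching (enabled = True AND severity != 'low'), order the
# result by severity, and cap it at ten rows.
transformer = QueryTransformer(Alarm, session.query(Alarm), dialect='mysql')
transformer.apply_filter({'and': [{'=': {'enabled': True}},
                                  {'!=': {'severity': 'low'}}]})
transformer.apply_options(orderby=[{'severity': 'asc'}], limit=10)
alarms = transformer.get_query().all()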

View File

View File

@@ -1,108 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2012 New Dream Network (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test base classes.
"""
import fixtures
import functools
import os.path
import unittest
import oslo_messaging.conffixture
from oslo_utils import timeutils
from oslotest import base
import six
import webtest
import aodh
from aodh import messaging
class BaseTestCase(base.BaseTestCase):
def setup_messaging(self, conf, exchange=None):
self.useFixture(oslo_messaging.conffixture.ConfFixture(conf))
conf.set_override("notification_driver", ["messaging"])
if not exchange:
exchange = 'aodh'
conf.set_override("control_exchange", exchange)
# NOTE(sileht): Ensure a new oslo.messaging driver is loaded
# between each tests
self.transport = messaging.get_transport(conf, "fake://", cache=False)
self.useFixture(fixtures.MockPatch(
'aodh.messaging.get_transport',
return_value=self.transport))
def assertTimestampEqual(self, first, second, msg=None):
"""Checks that two timestamps are equals.
This relies on assertAlmostEqual to avoid rounding problem, and only
checks up the first microsecond values.
"""
return self.assertAlmostEqual(
timeutils.delta_seconds(first, second),
0.0,
places=5)
def assertIsEmpty(self, obj):
try:
if len(obj) != 0:
self.fail("%s is not empty" % type(obj))
except (TypeError, AttributeError):
self.fail("%s doesn't have length" % type(obj))
def assertIsNotEmpty(self, obj):
try:
if len(obj) == 0:
self.fail("%s is empty" % type(obj))
except (TypeError, AttributeError):
self.fail("%s doesn't have length" % type(obj))
@staticmethod
def path_get(project_file=None):
root = os.path.abspath(os.path.join(os.path.dirname(__file__),
'..',
'..',
)
)
if project_file:
return os.path.join(root, project_file)
else:
return root
def _skip_decorator(func):
@functools.wraps(func)
def skip_if_not_implemented(*args, **kwargs):
try:
return func(*args, **kwargs)
except aodh.NotImplementedError as e:
raise unittest.SkipTest(six.text_type(e))
except webtest.app.AppError as e:
if 'not implemented' in six.text_type(e):
raise unittest.SkipTest(six.text_type(e))
raise
return skip_if_not_implemented
class SkipNotImplementedMeta(type):
def __new__(cls, name, bases, local):
for attr in local:
value = local[attr]
if callable(value) and (
attr.startswith('test_') or attr == 'setUp'):
local[attr] = _skip_decorator(value)
return type.__new__(cls, name, bases, local)
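# Usage sketch (an assumed example, not from the original suite): test
# classes opt in via the metaclass, so backend features that raise
# aodh.NotImplementedError are reported as skips rather than failures.
@six.add_metaclass(SkipNotImplementedMeta)
class ExampleDriverTest(BaseTestCase):
    def test_unsupported_feature(self):
        raise aodh.NotImplementedError('not supported by this driver')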

View File

@@ -1,17 +0,0 @@
# Copyright 2014 Huawei Technologies Co., Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import datetime
MIN_DATETIME = datetime.datetime(datetime.MINYEAR, 1, 1)

View File

@@ -1,155 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Base classes for API tests.
"""
from oslo_config import fixture as fixture_config
import webtest
from aodh.api import app
from aodh import service
from aodh.tests.functional import db as db_test_base
class FunctionalTest(db_test_base.TestBase):
"""Used for functional tests of Pecan controllers.
Used in case when you need to test your literal application and its
integration with the framework.
"""
PATH_PREFIX = ''
def setUp(self):
super(FunctionalTest, self).setUp()
conf = service.prepare_service(argv=[], config_files=[])
self.CONF = self.useFixture(fixture_config.Config(conf)).conf
self.setup_messaging(self.CONF)
self.CONF.set_override('auth_mode', None, group='api')
self.app = webtest.TestApp(app.load_app(self.CONF))
def put_json(self, path, params, expect_errors=False, headers=None,
extra_environ=None, status=None):
"""Sends simulated HTTP PUT request to Pecan test app.
:param path: url path of target service
:param params: content for wsgi.input of request
:param expect_errors: boolean value whether an error is expected based
on request
:param headers: A dictionary of headers to send along with the request
:param extra_environ: A dictionary of environ variables to send along
with the request
:param status: Expected status code of response
"""
return self.post_json(path=path, params=params,
expect_errors=expect_errors,
headers=headers, extra_environ=extra_environ,
status=status, method="put")
def post_json(self, path, params, expect_errors=False, headers=None,
method="post", extra_environ=None, status=None):
"""Sends simulated HTTP POST request to Pecan test app.
:param path: url path of target service
:param params: content for wsgi.input of request
:param expect_errors: boolean value whether an error is expected based
on request
:param headers: A dictionary of headers to send along with the request
        :param method: HTTP method to use. Callers should normally invoke the
                       dedicated helper (for example put_json) rather than
                       passing the method directly.
:param extra_environ: A dictionary of environ variables to send along
with the request
:param status: Expected status code of response
"""
full_path = self.PATH_PREFIX + path
response = getattr(self.app, "%s_json" % method)(
str(full_path),
params=params,
headers=headers,
status=status,
extra_environ=extra_environ,
expect_errors=expect_errors
)
return response
def delete(self, path, expect_errors=False, headers=None,
extra_environ=None, status=None):
"""Sends simulated HTTP DELETE request to Pecan test app.
:param path: url path of target service
:param expect_errors: boolean value whether an error is expected based
on request
:param headers: A dictionary of headers to send along with the request
:param extra_environ: A dictionary of environ variables to send along
with the request
:param status: Expected status code of response
"""
full_path = self.PATH_PREFIX + path
response = self.app.delete(str(full_path),
headers=headers,
status=status,
extra_environ=extra_environ,
expect_errors=expect_errors)
return response
def get_json(self, path, expect_errors=False, headers=None,
extra_environ=None, q=None, groupby=None, status=None,
override_params=None, **params):
"""Sends simulated HTTP GET request to Pecan test app.
:param path: url path of target service
:param expect_errors: boolean value whether an error is expected based
on request
:param headers: A dictionary of headers to send along with the request
:param extra_environ: A dictionary of environ variables to send along
with the request
:param q: list of queries consisting of: field, value, op, and type
keys
:param groupby: list of fields to group by
:param status: Expected status code of response
:param override_params: literally encoded query param string
:param params: content for wsgi.input of request
"""
q = q or []
groupby = groupby or []
full_path = self.PATH_PREFIX + path
if override_params:
all_params = override_params
else:
query_params = {'q.field': [],
'q.value': [],
'q.op': [],
'q.type': [],
}
for query in q:
for name in ['field', 'op', 'value', 'type']:
query_params['q.%s' % name].append(query.get(name, ''))
all_params = {}
all_params.update(params)
if q:
all_params.update(query_params)
if groupby:
all_params.update({'groupby': groupby})
response = self.app.get(full_path,
params=all_params,
headers=headers,
extra_environ=extra_environ,
expect_errors=expect_errors,
status=status)
if not expect_errors:
response = response.json
return response
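# Illustrative subclass (an assumption, not part of the original module):
# the q list below is flattened into the repeated q.field/q.op/q.value/
# q.type parameters that the v2 query API expects.
class ExampleQueryTest(FunctionalTest):
    def test_alarms_filtered_by_state(self):
        data = self.get_json('/alarms',
                             q=[{'field': 'state',
                                 'op': 'eq',
                                 'value': 'alarm'}])
        self.assertIsInstance(data, list)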

View File

@@ -1,37 +0,0 @@
# Copyright 2014 IBM Corp. All Rights Reserved.
# Copyright 2015 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from oslo_config import cfg
from oslo_config import fixture as fixture_config
from aodh.api import app
from aodh import service
from aodh.tests import base
class TestApp(base.BaseTestCase):
def setUp(self):
super(TestApp, self).setUp()
conf = service.prepare_service(argv=[], config_files=[])
self.CONF = self.useFixture(fixture_config.Config(conf)).conf
def test_api_paste_file_not_exist(self):
self.CONF.set_override('paste_config', 'non-existent-file', "api")
with mock.patch.object(self.CONF, 'find_file') as ff:
ff.return_value = None
self.assertRaises(cfg.ConfigFilesNotFoundError,
app.load_app, self.CONF)

View File

@@ -1,65 +0,0 @@
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from aodh.tests.functional import api
V2_MEDIA_TYPES = [
{
'base': 'application/json',
'type': 'application/vnd.openstack.telemetry-v2+json'
}, {
'base': 'application/xml',
'type': 'application/vnd.openstack.telemetry-v2+xml'
}
]
V2_HTML_DESCRIPTION = {
'href': 'http://docs.openstack.org/',
'rel': 'describedby',
'type': 'text/html',
}
V2_EXPECTED_RESPONSE = {
'id': 'v2',
'links': [
{
'rel': 'self',
'href': 'http://localhost/v2',
},
V2_HTML_DESCRIPTION
],
'media-types': V2_MEDIA_TYPES,
'status': 'stable',
'updated': '2013-02-13T00:00:00Z',
}
V2_VERSION_RESPONSE = {
"version": V2_EXPECTED_RESPONSE
}
VERSIONS_RESPONSE = {
"versions": {
"values": [
V2_EXPECTED_RESPONSE
]
}
}
class TestVersions(api.FunctionalTest):
def test_versions(self):
data = self.get_json('/')
self.assertEqual(VERSIONS_RESPONSE, data)

View File

@@ -1,20 +0,0 @@
#
# Copyright 2012 New Dream Network, LLC (DreamHost)
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from aodh.tests.functional import api
class FunctionalTest(api.FunctionalTest):
PATH_PREFIX = '/v2'

View File

@@ -1,7 +0,0 @@
{
"context_is_admin": "role:admin",
"segregation": "rule:context_is_admin",
"admin_or_owner": "rule:context_is_admin or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"telemetry:get_alarms": "role:admin"
}
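# Sketch of how a policy file like the one above is typically consumed via
# oslo.policy (assumed wiring, not part of this repo): rules are loaded from
# policy.json and checked against the caller's credentials.
from oslo_config import cfg
from oslo_policy import policy
enforcer = policy.Enforcer(cfg.CONF, policy_file='policy.json')
enforcer.enforce('telemetry:get_alarms', {}, {'roles': ['admin']},
                 do_raise=True)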

File diff suppressed because it is too large

View File

@@ -1,174 +0,0 @@
#
# Copyright 2013 IBM Corp.
# Copyright 2013 Julien Danjou
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test basic aodh-api app
"""
import json
import mock
import six
import wsme
from aodh import i18n
from aodh.tests.functional.api import v2
class TestApiMiddleware(v2.FunctionalTest):
no_lang_translated_error = 'No lang translated error'
en_US_translated_error = 'en-US translated error'
def _fake_translate(self, message, user_locale):
if user_locale is None:
return self.no_lang_translated_error
else:
return self.en_US_translated_error
def test_json_parsable_error_middleware_404(self):
response = self.get_json('/invalid_path',
expect_errors=True,
headers={"Accept":
"application/json"}
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/json", response.content_type)
self.assertTrue(response.json['error_message'])
response = self.get_json('/invalid_path',
expect_errors=True,
headers={"Accept":
"application/json,application/xml"}
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/json", response.content_type)
self.assertTrue(response.json['error_message'])
response = self.get_json('/invalid_path',
expect_errors=True,
headers={"Accept":
"application/xml;q=0.8, \
application/json"}
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/json", response.content_type)
self.assertTrue(response.json['error_message'])
response = self.get_json('/invalid_path',
expect_errors=True
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/json", response.content_type)
self.assertTrue(response.json['error_message'])
response = self.get_json('/invalid_path',
expect_errors=True,
headers={"Accept":
"text/html,*/*"}
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/json", response.content_type)
self.assertTrue(response.json['error_message'])
def test_json_parsable_error_middleware_translation_400(self):
# Ensure translated messages get placed properly into json faults
with mock.patch.object(i18n, 'translate',
side_effect=self._fake_translate):
response = self.post_json('/alarms', params={'name': 'foobar',
'type': 'threshold'},
expect_errors=True,
headers={"Accept":
"application/json"}
)
self.assertEqual(400, response.status_int)
self.assertEqual("application/json", response.content_type)
self.assertTrue(response.json['error_message'])
self.assertEqual(self.no_lang_translated_error,
response.json['error_message']['faultstring'])
def test_xml_parsable_error_middleware_404(self):
response = self.get_json('/invalid_path',
expect_errors=True,
headers={"Accept":
"application/xml,*/*"}
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/xml", response.content_type)
self.assertEqual('error_message', response.xml.tag)
response = self.get_json('/invalid_path',
expect_errors=True,
headers={"Accept":
"application/json;q=0.8 \
,application/xml"}
)
self.assertEqual(404, response.status_int)
self.assertEqual("application/xml", response.content_type)
self.assertEqual('error_message', response.xml.tag)
def test_xml_parsable_error_middleware_translation_400(self):
# Ensure translated messages get placed properly into xml faults
with mock.patch.object(i18n, 'translate',
side_effect=self._fake_translate):
response = self.post_json('/alarms', params={'name': 'foobar',
'type': 'threshold'},
expect_errors=True,
headers={"Accept":
"application/xml,*/*"}
)
self.assertEqual(400, response.status_int)
self.assertEqual("application/xml", response.content_type)
self.assertEqual('error_message', response.xml.tag)
fault = response.xml.findall('./error/faultstring')
for fault_string in fault:
self.assertEqual(self.no_lang_translated_error, fault_string.text)
def test_best_match_language(self):
# Ensure that we are actually invoking language negotiation
with mock.patch.object(i18n, 'translate',
side_effect=self._fake_translate):
response = self.post_json('/alarms', params={'name': 'foobar',
'type': 'threshold'},
expect_errors=True,
headers={"Accept":
"application/xml,*/*",
"Accept-Language":
"en-US"}
)
self.assertEqual(400, response.status_int)
self.assertEqual("application/xml", response.content_type)
self.assertEqual('error_message', response.xml.tag)
fault = response.xml.findall('./error/faultstring')
for fault_string in fault:
self.assertEqual(self.en_US_translated_error, fault_string.text)
def test_translated_then_untranslated_error(self):
resp = self.get_json('/alarms/alarm-id-3', expect_errors=True)
self.assertEqual(404, resp.status_code)
body = resp.body
if six.PY3:
body = body.decode('utf-8')
self.assertEqual("Alarm alarm-id-3 not found",
json.loads(body)['error_message']
['faultstring'])
with mock.patch('aodh.api.controllers.'
'v2.base.AlarmNotFound') as CustomErrorClass:
CustomErrorClass.return_value = wsme.exc.ClientSideError(
"untranslated_error", status_code=404)
resp = self.get_json('/alarms/alarm-id-5', expect_errors=True)
self.assertEqual(404, resp.status_code)
body = resp.body
if six.PY3:
body = body.decode('utf-8')
self.assertEqual("untranslated_error",
json.loads(body)['error_message']
['faultstring'])

View File

@@ -1,30 +0,0 @@
#
# Copyright Ericsson AB 2014. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from aodh.tests.functional.api import v2 as tests_api
class TestCapabilitiesController(tests_api.FunctionalTest):
def setUp(self):
super(TestCapabilitiesController, self).setUp()
self.url = '/capabilities'
def test_capabilities(self):
data = self.get_json(self.url)
        # check that capabilities data contains both 'api' and
        # 'alarm_storage' fields
self.assertIsNotNone(data)
self.assertNotEqual({}, data)
self.assertIn('api', data)
self.assertIn('alarm_storage', data)

Some files were not shown because too many files have changed in this diff