Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README that notes where to find
ongoing work and how to recover the repo if needed at some
future point (as described in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I46b1089c7d0693bf92632438be4de70da83eedb1
Tony Breeds
2017-09-12 15:36:45 -06:00
parent fdfa516a67
commit 1181b2adfb
425 changed files with 14 additions and 79094 deletions


@@ -1,12 +0,0 @@
[run]
branch = True
omit = etc/*,setup.py,*egg*,.tox/*,barbican/tests/*,
functionaltests/*,
barbican/model/migration/alembic_migrations/versions/*,
barbican/plugin/dogtag.py, barbican/plugin/symantec.py
[report]
ignore_errors = True
exclude_lines =
pragma: no cover
@abc.abstractmethod

.gitignore (vendored, 82 lines)

@@ -1,82 +0,0 @@
.venv
.python-version
*.sqlite
*.py[cod]
# C extensions
*.so
# Packages
*.egg
.eggs
*.egg-info
dist
build
eggs
parts
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
*.err.log
# Unit test / coverage reports
cover
.testrepository
.coverage
.coverage.*
.tox
nosetests.xml
coverage.xml
flake8.log
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Pycharm
.idea
*.iml
# Sqlite databases
*.sqlite3
*.db
# Misc. generated files
versiononly.txt
*.orig
myapp.profile
*.out.myapp
AUTHORS
ChangeLog
# Editors
*~
.*.swp
# Mac OS
.DS_Store
# Rope
.ropeproject
# files created by oslo-config-generator
etc/barbican/barbican.conf
etc/barbican/barbican.conf.sample
# File created by oslopolicy-sample-generator
etc/barbican/policy.yaml.sample
# Files created by releasenotes build
releasenotes/build
# Files created by API build
api-ref/build/


@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/barbican.git


@@ -1,9 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
<douglas.mendizabal@rackspace.com> <dougmendizabal@gmail.com>
<jarret.raim@rackspace.com> <jarito@gmail.com>
John Wood <john.wood@rackspace.com> <jfwood@ubuntu.(none)>
Malini K. Bhandaru <malini.k.bhandaru@intel.com> <malini@ubuntu.(none)>
Malini K. Bhandaru <malini.k.bhandaru@intel.com> Malini Bhandaru <malini.k.bhandaru@intel.com>


@@ -1,9 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -s ${OS_TEST_PATH:-./barbican/tests/} -t . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
group_regex=([^\.]+\.)+


@@ -1,74 +0,0 @@
Barbican Style Commandments
============================
- Step 1: Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/
- Step 2: Read on
Barbican Specific Commandments
-------------------------------
- [B310] Check for improper use of logging format arguments.
- [B311] Use assertIsNone(...) instead of assertEqual(None, ...).
- [B312] Use assertTrue(...) rather than assertEqual(True, ...).
- [B314] str() and unicode() cannot be used on an exception. Remove or use six.text_type().
- [B317] 'oslo_' should be used instead of 'oslo.'
- [B318] Must use a dict comprehension instead of a dict constructor
with a sequence of key-value pairs.
- [B319] Ensure to not use xrange().
- [B320] Do not use LOG.warn as it's deprecated.
- [B321] Use assertIsNotNone(...) rather than assertNotEqual(None, ...) or
assertIsNot(None, ...).
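The assertion-related checks above can be illustrated with a short unittest sketch (a hypothetical example for illustration, not part of the barbican test suite):

```python
import unittest


class TestStyleChecks(unittest.TestCase):
    """Hypothetical tests showing the preferred assertion forms."""

    def test_preferred_assertions(self):
        value = None
        # B311: use assertIsNone(...) instead of assertEqual(None, ...)
        self.assertIsNone(value)
        # B312: use assertTrue(...) rather than assertEqual(True, ...)
        self.assertTrue(isinstance([], list))
        # B321: use assertIsNotNone(...) rather than assertNotEqual(None, ...)
        self.assertIsNotNone("secret")

    def test_dict_comprehension(self):
        pairs = [("a", 1), ("b", 2)]
        # B318: prefer a dict comprehension over dict() with key-value pairs
        result = {key: val for key, val in pairs}
        self.assertEqual({"a": 1, "b": 2}, result)
```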
Creating Unit Tests
-------------------
For every new feature, unit tests should be created that both test and
(implicitly) document the usage of said feature. If submitting a patch for a
bug that had no unit test, a new passing unit test should be added. If a
submitted bug fix does have a unit test, be sure to add a new one that fails
without the patch and passes with the patch.
Running Tests
-------------
The testing system is based on a combination of tox and testr. If you just
want to run the whole suite, run `tox` and all will be fine. However, if
you'd like to dig in a bit more, you might want to learn some things about
testr itself. A basic walkthrough for OpenStack can be found at
http://wiki.openstack.org/testr
OpenStack Trademark
-------------------
OpenStack is a registered trademark of OpenStack, LLC, and uses the
following capitalization:
OpenStack
Commit Messages
---------------
Using a common format for commit messages will help keep our git history
readable. Follow these guidelines:
First, provide a brief summary (it is recommended to keep the commit title
under 50 chars).
The first line of the commit message should provide an accurate
description of the change, not just a reference to a bug or
blueprint. It must be followed by a single blank line.
Following your brief summary, provide a more detailed description of
the patch, manually wrapping the text at 72 characters. This
description should provide enough detail that one does not have to
refer to external resources to determine its high-level functionality.
Once you use 'git review', two lines will be appended to the commit
message: a blank line followed by a 'Change-Id'. This is important
to correlate this commit with a specific review in Gerrit, and it
should not be modified.
For further information on constructing high quality commit messages,
and how to split up commits into a series of changes, consult the
project wiki:
http://wiki.openstack.org/GitCommitMessages

LICENSE (201 lines)

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2013, Rackspace (http://www.rackspace.com)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

README (new file, 14 lines)

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.


@@ -1,99 +0,0 @@
Team and repository tags
========================
[![Team and repository tags](http://governance.openstack.org/badges/barbican.svg)](http://governance.openstack.org/reference/tags/index.html)
<!-- Change things from this point on -->
# Barbican
Barbican is a REST API designed for the secure storage, provisioning and
management of secrets. It is aimed at being useful for all environments,
including large ephemeral Clouds.
Barbican is an OpenStack project developed by the [Barbican Project Team
](https://wiki.openstack.org/wiki/Barbican) with support from
[Rackspace Hosting](http://www.rackspace.com/), EMC, Ericsson,
Johns Hopkins University, HP, Red Hat, Cisco Systems, and many more.
The full documentation can be found on the [Barbican Developer Documentation
Site](http://docs.openstack.org/developer/barbican/).
If you have a technical question, you can ask it at [Ask OpenStack](
https://ask.openstack.org/en/questions/) with the `barbican` tag, or you can
send an email to the [OpenStack General mailing list](
http://lists.openstack.org/pipermail/openstack/) at
`openstack@lists.openstack.org` with the prefix `[barbican]` in the
subject.
To file a bug, use our bug tracker on [Launchpad](
https://bugs.launchpad.net/barbican/).
For development questions or discussion, hop on the [OpenStack-dev mailing list
](http://lists.openstack.org/pipermail/openstack-dev/)
at `openstack-dev@lists.openstack.org` and let us know what you think, just add
`[barbican]` to the subject. You can also join our IRC channel
`#openstack-barbican` on Freenode.
Barbican began as part of a set of applications that make up the CloudKeep
ecosystem. The other systems are:
* [Postern](https://github.com/cloudkeep/postern) - Go based agent that
provides access to secrets from the Barbican API.
* [Palisade](https://github.com/cloudkeep/palisade) - AngularJS based web ui
for the Barbican API.
* [Python-barbicanclient](https://github.com/openstack/python-barbicanclient) -
A convenient Python-based library to interact with the Barbican API.
## Getting Started
Please visit our [Users, Developers and Operators documentation
](https://docs.openstack.org/developer/barbican/) for details.
## Why Should You Use Barbican?
The current state of key management is atrocious. While Windows does have some
decent options through the use of the Data Protection API (DPAPI) and Active
Directory, Linux lacks a cohesive story around how to manage keys for
application use.
Barbican was designed to solve this problem. The system was motivated by
internal Rackspace needs, requirements from
[OpenStack](http://www.openstack.org/) and a realization that the current state
of the art could use some help.
Barbican will handle many types of secrets, including:
* **Symmetric Keys** - Used to perform reversible encryption of data at rest,
typically using the AES algorithm set. This type of key is required to enable
features like [encrypted Swift containers and Cinder
volumes](http://www.openstack.org/software/openstack-storage/), [encrypted
Cloud Backups](http://www.rackspace.com/cloud/backup/), etc.
* **Asymmetric Keys** - Asymmetric key pairs (sometimes referred to as [public
/ private keys](http://en.wikipedia.org/wiki/Public-key_cryptography)) are
used in many scenarios where communication between untrusted parties is
desired. The most common case is SSL/TLS certificates, but they are also used
for SSH keys, S/MIME (mail) encryption and digital signatures.
* **Raw Secrets** - Barbican stores secrets as a base64 encoded block of data
(encrypted, naturally). Clients can use the API to store any secrets in any
format they desire. The [Postern](https://github.com/cloudkeep/postern) agent
is capable of presenting these secrets in various formats to ease
integration.
For the symmetric and asymmetric key types, Barbican supports full life cycle
management including provisioning, expiration, reporting, etc. A plugin system
allows for multiple certificate authority support (including public and private
CAs).
## Design Goals
1. Provide a central secret-store capable of distributing secret / keying
material to all types of deployments including ephemeral Cloud instances.
2. Support reasonable compliance regimes through reporting and auditability.
3. Application adoption costs should be minimal or non-existent.
4. Build a community and ecosystem by being open-source and extensible.
5. Improve security through sane defaults and centralized management of
[policies for all
secrets](https://github.com/cloudkeep/barbican/wiki/Policies).
6. Provide an out of band communication mechanism to notify and protect sensitive
assets.


@@ -1,364 +0,0 @@
******************
ACL API User Guide
******************
By default barbican manages access to its resources (secrets, containers) on a per project
level, whereby a user is allowed access to project resources based on the roles a user has
in that project.
Some barbican use cases prefer a more fine-grained access control for secrets and containers,
such as at the user level. The Access Control List (ACL) feature supports this more restrictive
access.
This guide assumes you are running a local development environment of barbican.
If you need assistance getting set up, please reference the
`development guide <http://docs.openstack.org/developer/barbican/setup/dev.html>`__
.. warning::
This ACL documentation is a work in progress and may change in the near future.
ACL Definition
##############
ACL defines a set of attributes which are used in policy-based authorization to determine
access to a target resource. ACL definition is operation specific and is defined per
secret or per container.
Currently only the 'read' operation is defined. This supports allowing users on the ACL for a
secret to retrieve its metadata or to decrypt its payload. This also allows users on the ACL
for a container to retrieve its list of secret references.
ACLs allow a secret or a container to be marked private. A private secret/container means
that only the user who created it can extract the secret; users with the necessary roles on
the secret/container's project will not have access. To allow access to other users, their
user IDs need to be added to the related ACL users list.
An operation-specific ACL definition has the following attributes:
* `users`: Whitelist of users who are allowed access to the target resource. In this context
a user means a Keystone user ID.
* `project-access`: Flag to mark a secret or a container private for an operation. Pass `false` to
mark it private.
To accomplish the behavior described above for a secret/container resource, having ACL data
populated alone is not sufficient.
The following ACL rules are defined and combined with `OR` in the resource access policy:
* ACL-based access is allowed when the token user is present in the secret/container
operation-specific ACL user list, e.g. the token user is present in the `read` users list.
* When a secret/container resource is marked private, project-level RBAC access is not
allowed.
.. note::
Currently the barbican default policy makes use of `read` ACL data only, so only **GET**
calls for secret and container resources use ACL data. Other request methods on
secret and container resources still use project-level RBAC checks in policy.
As per the default policy rules, a user with the `admin` role in the secret/container project,
or the user who created the secret/container, can manage its ACL.
.. _default_implicit_acl:
Default ACL
-----------
By default, when no ACL is explicitly set on a secret or a container, clients with the
necessary roles on the secret's or container's project can access it. This default access
pattern translates to `project-access` being true with no `users` in the ACL settings, which
is why every secret and container has the following implicit ACL by default.
.. code-block:: json
{
"read":{
"project-access": true
}
}
The default ACL above is also returned on a **GET** on the secret/container **acl**
resource when no explicit ACL is set on it.
.. _set_acl:
How to Set/Replace ACL
######################
The ACL for an existing secret or container can be modified via a **PUT** to the **acl** resource.
This update completely replaces the existing ACL settings for the secret or container.
To set/replace an ACL for a secret:
.. code-block:: bash
Request:
curl -X PUT -H 'content-type:application/json' \
-H 'X-Auth-Token:b06183778aa64b17beb6215e60686a60' \
-d '
{
"read":{
"users":[
"2d0ee7c681cc4549b6d76769c320d91f",
"721e27b8505b499e8ab3b38154705b9e",
"c1d20e4b7e7d4917aee6f0832152269b"
],
"project-access":false
}
}' \
http://localhost:9311/v1/secrets/15621a1b-efdf-41d8-92dc-356cec8e9da9/acl
Response (includes secret ACL reference):
HTTP/1.1 201 Created
{"acl_ref": "http://localhost:9311/v1/secrets/15621a1b-efdf-41d8-92dc-356cec8e9da9/acl"}
To set/replace an ACL for a container:
.. code-block:: bash
Request:
curl -X PUT -H 'content-type:application/json' \
-H 'X-Auth-Token:b06183778aa64b17beb6215e60686a60' \
-d '
{
"read":{
"users":[
"2d0ee7c681cc4549b6d76769c320d91f",
"721e27b8505b499e8ab3b38154705b9e",
"c1d20e4b7e7d4917aee6f0832152269b"
],
"project-access":false
}
}' \
http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl
Response (includes container ACL reference):
HTTP/1.1 201 Created
{"acl_ref": "http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl"}
To get more details on the create API you can reference the
`Set Secret ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#put-v1-secrets-uuid-acl>`__
or `Set Container ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#put-v1-containers-uuid-acl>`__
documentation.
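The same PUT request can be sketched in Python using only the standard library (a minimal illustration mirroring the curl example above; the endpoint, token, and UUIDs are the placeholder values from that example, not real credentials):

```python
import json
import urllib.request

# Placeholder values copied from the curl example above.
BARBICAN = "http://localhost:9311"
SECRET_ID = "15621a1b-efdf-41d8-92dc-356cec8e9da9"
TOKEN = "b06183778aa64b17beb6215e60686a60"

# ACL payload: three whitelisted users, secret marked private.
acl = {
    "read": {
        "users": [
            "2d0ee7c681cc4549b6d76769c320d91f",
            "721e27b8505b499e8ab3b38154705b9e",
            "c1d20e4b7e7d4917aee6f0832152269b",
        ],
        "project-access": False,
    }
}

request = urllib.request.Request(
    url="%s/v1/secrets/%s/acl" % (BARBICAN, SECRET_ID),
    data=json.dumps(acl).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Auth-Token": TOKEN,
    },
    method="PUT",
)

# Sending the request requires a running barbican instance:
# with urllib.request.urlopen(request) as response:
#     print(response.status, json.load(response))
```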
.. _update_acl:
How to Update ACL
#################
An existing ACL can be updated via the **PUT** or **PATCH** methods on a given secret/container.
A **PUT** replaces the existing ACL with the provided ACL data, whereas a **PATCH**
applies the provided changes on top of the existing ACL of the secret or container.
To replace an existing ACL for a container:
.. code-block:: bash
Request:
curl -X PUT -H 'content-type:application/json' \
-H 'X-Auth-Token:e1f540bc6def456dbb0f8c11f21a74ae' \
-d '
{
"read":{
"users":[
"2d0ee7c681cc4549b6d76769c320d91f",
"721e27b8505b499e8ab3b38154705b9e"
],
"project-access":true
}
}' \
http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl
Response (includes container ACL reference):
HTTP/1.1 200 OK
{"acl_ref": "http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl"}
To remove all users from an existing ACL for a container (pass an empty list for `users`):
.. code-block:: bash
Request:
curl -X PUT -H 'content-type:application/json' \
-H 'X-Auth-Token:e1f540bc6def456dbb0f8c11f21a74ae' \
-d '
{
"read":{
"users":[],
"project-access":true
}
}' \
http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl
Response (includes container ACL reference):
HTTP/1.1 200 OK
{"acl_ref": "http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl"}
To update only the `project-access` flag for container ACL (use PATCH):
.. code-block:: bash
Request:
curl -X PATCH -H 'content-type:application/json' \
-H 'X-Auth-Token:e1f540bc6def456dbb0f8c11f21a74ae' \
-d '
{
"read":{
"project-access":false
}
}' \
http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl
Response:
HTTP/1.1 200 OK
{"acl_ref": "http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl"}
To update only the users list for secret ACL (use PATCH):
.. code-block:: bash
Request:
curl -X PATCH -H 'content-type:application/json' \
-H 'X-Auth-Token:e1f540bc6def456dbb0f8c11f21a74ae' \
-d '
{
"read":{
"users":[
"2d0ee7c681cc4549b6d76769c320d91f",
"c1d20e4b7e7d4917aee6f0832152269b"
]
}
}' \
http://localhost:9311/v1/secrets/15621a1b-efdf-41d8-92dc-356cec8e9da9/acl
Response:
HTTP/1.1 200 OK
{"acl_ref": "http://localhost:9311/v1/secrets/15621a1b-efdf-41d8-92dc-356cec8e9da9/acl"}
Container and secret ACL update operations are similar, except that the `containers` resource
is used instead of the `secrets` resource in the URI. To get more details on the ACL update APIs, you can reference
the `Update Secret ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#put-secret-acl>`__
, `Update Container ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#put-container-acl>`__
, `Partial Update Secret ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#patch-secret-acl>`__
or `Partial Update Container ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#patch-container-acl>`__
documentation.
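The PUT-replace versus PATCH-merge semantics can be sketched in plain Python (an illustration of the client-visible behavior, not the server's actual implementation):

```python
import copy


def put_acl(existing, provided):
    """PUT: the provided ACL completely replaces the existing one."""
    return copy.deepcopy(provided)


def patch_acl(existing, provided):
    """PATCH: only the provided fields are applied on top of the
    existing ACL; everything else is preserved."""
    merged = copy.deepcopy(existing)
    for operation, fields in provided.items():
        merged.setdefault(operation, {}).update(fields)
    return merged


existing = {
    "read": {
        "users": ["2d0ee7c681cc4549b6d76769c320d91f"],
        "project-access": True,
    }
}

# PATCH with only project-access keeps the existing users list ...
patched = patch_acl(existing, {"read": {"project-access": False}})
# ... while PUT with the same body would drop it.
replaced = put_acl(existing, {"read": {"project-access": False}})
```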
.. _retrieve_acl:
How to Retrieve ACL
###################
The ACL defined for a secret or container can be retrieved via a **GET** operation on the
respective **acl** resource.
The returned response contains the ACL data.
To get secret ACL data:
.. code-block:: bash
Request:
curl -X GET -H 'X-Auth-Token:b44636bff48c41bbb80f459df69c11aa' \
http://localhost:9311/v1/secrets/15621a1b-efdf-41d8-92dc-356cec8e9da9/acl
Response:
HTTP/1.1 200 OK
{
"read":{
"updated":"2015-05-12T20:08:47.644264",
"created":"2015-05-12T19:23:44.019168",
"users":[
"c1d20e4b7e7d4917aee6f0832152269b",
"2d0ee7c681cc4549b6d76769c320d91f"
],
"project-access":false
}
}
To get container ACL data:
.. code-block:: bash
Request:
curl -X GET -H 'X-Auth-Token:b44636bff48c41bbb80f459df69c11aa' \
http://localhost:9311/v1/containers/8c077991-d524-4e15-8eaf-bc0c3bb225f2/acl
Response:
HTTP/1.1 200 OK
{
"read":{
"updated":"2015-05-12T20:05:17.214948",
"created":"2015-05-12T19:47:20.018657",
"users":[
"721e27b8505b499e8ab3b38154705b9e",
"c1d20e4b7e7d4917aee6f0832152269b",
"2d0ee7c681cc4549b6d76769c320d91f"
],
"project-access":false
}
}
To get more details on ACL lookup APIs you can reference the
`Get Secret ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#get-secret-acl>`__
, `Get Container ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#get-container-acl>`__
documentation.
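The returned ACL JSON can be inspected with a few lines of Python (the response body below is copied from the secret example above):

```python
import json

# Response body from the secret ACL GET example above.
response_body = """
{
    "read": {
        "updated": "2015-05-12T20:08:47.644264",
        "created": "2015-05-12T19:23:44.019168",
        "users": [
            "c1d20e4b7e7d4917aee6f0832152269b",
            "2d0ee7c681cc4549b6d76769c320d91f"
        ],
        "project-access": false
    }
}
"""

acl = json.loads(response_body)
read_acl = acl["read"]
# The secret is private: only the listed users (and the creator) may read it.
is_private = not read_acl["project-access"]
allowed_users = read_acl["users"]
```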
.. _delete_acl:
How to Delete ACL
#################
The ACL defined for a secret or a container can be deleted by using the **DELETE**
operation on its respective `acl` resource. No response content is
returned on successful deletion.
The delete operation removes the existing ACL from a secret or a container, if
one is present. It can be treated as resetting the secret or container to the
`Default ACL <http://docs.openstack.org/developer/barbican/api/userguide/acls.html#default-implicit-acl>`__
setting, which is why invoking delete multiple times on this resource will not
result in an error.
.. code-block:: bash
Request:
curl -X DELETE -H 'X-Auth-Token:b06183778aa64b17beb6215e60686a60' \
http://localhost:9311/v1/secrets/50f5ed8e-004e-433a-939c-fa73c7fc81fd/acl
Response:
200 OK
To get more details on ACL delete APIs, you can reference the
`Delete Secret ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#delete-secret-acl>`__
, `Delete Container ACL <http://docs.openstack.org/developer/barbican/api/reference/acls.html#delete-container-acl>`__
documentation.


@@ -1,280 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Key Manager API documentation build configuration file
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ['openstackdocstheme']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Key Manager API Guide'
bug_tag = u'api-guide'
repository_name = 'openstack/barbican'
bug_project = 'barbican'
# Must set this variable to include year, month, day, hours, and minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
copyright = u'2016, OpenStack contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from barbican.version import version_info
# The short X.Y version.
version = version_info.version_string()
# The full version, including alpha/beta/rc tags.
release = version_info.release_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'keymanager-api-guide'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'KeyManagerAPI.tex', u'Key Manager API Documentation',
u'OpenStack contributors', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'keymanagerapi', u'Key Manager API Documentation',
[u'OpenStack contributors'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'KeyManagerAPIGuide', u'Key Manager API Guide',
u'OpenStack contributors', 'APIGuide',
'This guide teaches OpenStack Key Manager service users concepts about '
'managing keys in an OpenStack cloud with the Key Manager API.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']
# -- Options for PDF output --------------------------------------------------
pdf_documents = [
('index', u'KeyManagerAPIGuide', u'Key Manager API Guide', u'OpenStack '
'contributors')
]
@@ -1,156 +0,0 @@
**************************
Consumers API - User Guide
**************************
This guide assumes you will be using a local development environment of barbican.
If you need assistance with getting set up, please reference the
`development guide <http://docs.openstack.org/developer/barbican/setup/dev.html>`__.
What is a Consumer?
###################
A consumer is a way to register as an interested party for a container. All of the
registered consumers can be viewed by performing a GET on {container_ref}/consumers.
The idea is that all consumers should be notified before a container is deleted.
.. _create_consumer:
How to Create a Consumer
########################
.. code-block:: bash
curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
-d '{"name": "consumername", "URL": "consumerURL"}' \
http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9/consumers
This will return the following response:
.. code-block:: json
{
"status": "ACTIVE",
"updated": "2015-10-15T21:06:33.121113",
"name": "container name",
"consumers": [
{
"URL": "consumerurl",
"name": "consumername"
}
],
"created": "2015-10-15T17:55:44.380002",
"container_ref":
"http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9",
"creator_id": "b17c815d80f946ea8505c34347a2aeba",
"secret_refs": [
{
"secret_ref": "http://localhost:9311/v1/secrets/b61613fc-be53-4696-ac01-c3a789e87973",
"name": "private_key"
}
],
"type": "generic"
}
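The request body for registering a consumer is small enough to build with a helper; a minimal sketch (the name and URL values are the placeholders from the example above):

```python
import json

def consumer_payload(name, url):
    """Body for POST {container_ref}/consumers, per the example above.

    Note the URL key is upper-case in the API.
    """
    return json.dumps({"name": name, "URL": url})

body = consumer_payload("consumername", "consumerURL")
```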
.. _retrieve_consumer:
How to Retrieve a Consumer
##########################
To retrieve the consumers for a container, perform a GET on
{container_ref}/consumers. This will return all consumers for the container.
You can optionally add limit and offset query parameters.
.. code-block:: bash
curl -H "X-Auth-Token: $TOKEN" \
http://192.168.99.100:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9/consumers
This will return the following response:
.. code-block:: json
{
"total": 1,
"consumers": [
{
"status": "ACTIVE",
"URL": "consumerurl",
"updated": "2015-10-15T21:06:33.123878",
"name": "consumername",
"created": "2015-10-15T21:06:33.123872"
}
]
}
The returned value is a list of all consumers for the specified container.
Each consumer is listed with its metadata.
If the offset and limit parameters are specified, the response includes
previous and next references that let you page through all of
the consumers for this container.
.. code-block:: bash
curl -H "X-Auth-Token: $TOKEN" \
http://192.168.99.100:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9/consumers?limit=1\&offset=1
This will return the following response:
.. code-block:: json
{
"total": 3,
"next": "http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9/consumers?limit=1&offset=2",
"consumers": [
{
"status": "ACTIVE",
"URL": "consumerURL2",
"updated": "2015-10-15T21:17:08.092416",
"name": "consumername2",
"created": "2015-10-15T21:17:08.092408"
}
],
"previous": "http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9/consumers?limit=1&offset=0"
}
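The limit and offset parameters shown above can be assembled with a small helper; a sketch, assuming only the query parameter names documented here:

```python
from urllib.parse import urlencode

def consumers_url(container_ref, limit=None, offset=None):
    """URL for listing a container's consumers, optionally paged."""
    params = {}
    if limit is not None:
        params["limit"] = limit
    if offset is not None:
        params["offset"] = offset
    query = urlencode(params)
    return container_ref + "/consumers" + ("?" + query if query else "")

url = consumers_url(
    "http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9",
    limit=1, offset=1)
```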
.. _delete_consumer:
How to Delete a Consumer
########################
To delete a consumer for a container you must provide the consumer name and
URL which were used when the consumer was created.
.. code-block:: bash
curl -X DELETE -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
-d '{"name": "consumername", "URL": "consumerURL"}' \
http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9/consumers
This will return the following response:
.. code-block:: json
{
"status": "ACTIVE",
"updated": "2015-10-15T17:56:18.626724",
"name": "container name",
"consumers": [],
"created": "2015-10-15T17:55:44.380002",
"container_ref": "http://localhost:9311/v1/containers/74bbd3fd-9ba8-42ee-b87e-2eecf10e47b9",
"creator_id": "b17c815d80f946ea8505c34347a2aeba",
"secret_refs": [
{
"secret_ref": "http://localhost:9311/v1/secrets/b61613fc-be53-4696-ac01-c3a789e87973",
"name": "private_key"
}
],
"type": "generic"
}
A successful delete returns an HTTP 200 OK. The response content is the
container plus the consumer list, minus the consumer that was just deleted.
@@ -1,324 +0,0 @@
****************************
Containers API - User Guide
****************************
The containers resource is the organizational centerpiece of barbican. It
creates a logical object that can be used to hold secret references. This is helpful
when you need to track, and control access to, hundreds of secrets.
Barbican supports 3 types of containers:
* :ref:`Generic <generic_containers>`
* :ref:`Certificate <certificate_containers>`
* :ref:`RSA <rsa_containers>`
Each of these types has explicit restrictions on the types of secrets it should
hold. These are broken down in their respective sections.
This guide assumes you will be using a local development environment of barbican.
If you need assistance with getting set up, please reference the
`development guide <http://docs.openstack.org/developer/barbican/setup/dev.html>`__.
.. _generic_containers:
Generic Containers
##################
A generic container can be used for any purpose a user may wish.
There are no restrictions on the type or number of secrets that can be held within a container.
An example of a use case for a generic container would be having multiple passwords stored
in the same container reference:
.. code-block:: json
{
"type": "generic",
"status": "ACTIVE",
"name": "Test Environment User Passwords",
"consumers": [],
"container_ref": "https://{barbican_host}/v1/containers/{uuid}",
"secret_refs": [
{
"name": "test_admin_user",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
},
{
"name": "test_audit_user",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
}
],
"created": "2015-03-30T21:10:45.417835",
"updated": "2015-03-30T21:10:45.417835"
}
For more information on creating a generic container, reference the
:ref:`Creating a Generic Container <create_generic_container>` section.
.. _certificate_containers:
Certificate Containers
######################
A certificate container is used for storing the following secrets that are relevant to
certificates:
* certificate
* private_key (optional)
* private_key_passphrase (optional)
* intermediates (optional)
.. code-block:: json
{
"type": "certificate",
"status": "ACTIVE",
"name": "Example.com Certificates",
"consumers": [],
"container_ref": "https://{barbican_host}/v1/containers/{uuid}",
"secret_refs": [
{
"name": "certificate",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
},
{
"name": "private_key",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
},
{
"name": "private_key_passphrase",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
},
{
"name": "intermediates",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
}
],
"created": "2015-03-30T21:10:45.417835",
"updated": "2015-03-30T21:10:45.417835"
}
The payload for the secret referenced as the "certificate" is expected to be a
PEM formatted x509 certificate.
The payload for the secret referenced as the "intermediates" is expected to be a
PEM formatted PKCS7 certificate chain.
For more information on creating a certificate container, reference the
:ref:`Creating a Certificate Container <create_certificate_container>` section.
.. _rsa_containers:
RSA Containers
##############
An RSA container is used for storing RSA public keys, private keys, and private
key pass phrases.
.. code-block:: json
{
"type": "rsa",
"status": "ACTIVE",
"name": "John Smith RSA",
"consumers": [],
"container_ref": "https://{barbican_host}/v1/containers/{uuid}",
"secret_refs": [
{
"name": "private_key",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
},
{
"name": "private_key_passphrase",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
},
{
"name": "public_key",
"secret_ref": "https://{barbican_host}/v1/secrets/{uuid}"
}
],
"created": "2015-03-30T21:10:45.417835",
"updated": "2015-03-30T21:10:45.417835"
}
For more information on creating an RSA container, reference the
:ref:`Creating an RSA Container <create_rsa_container>` section.
.. _create_container:
How to Create a Container
#########################
In order to create a container, we must first have secrets. If you are unfamiliar
with creating secrets, please take some time to refer to the
:doc:`Secret User Guide <secrets>` before moving forward.
.. _create_generic_container:
Creating a Generic Container
****************************
To create a generic container we must have a secret to store as well.
.. code-block:: bash
curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type:application/json" -d '{
"type": "generic",
"name": "generic name",
"secret_refs": [
{
"name": "a secret",
"secret_ref": "http://localhost:9311/v1/secrets/feac9896-49e9-49e0-9484-1a6153c9498b"
}
]
}' http://localhost:9311/v1/containers
This should provide a response as follows:
.. code-block:: bash
{"container_ref": "http://localhost:9311/v1/containers/0fecaec4-7cd7-4e70-a760-cc7eaf5c3afb"}
This is our container reference. We will need this in order to retrieve the container.
Jump ahead to :ref:`How To Retrieve a Container <retrieve_container>` to make sure our
container was stored as expected.
.. _create_certificate_container:
Creating a Certificate Container
********************************
To create a certificate container we must first have secrets to store. As we mentioned
in the :ref:`Certificate Containers section <certificate_containers>`, you are required
to provide a secret named certificate but may also include the optional secrets
named private_key, private_key_passphrase, and intermediates.
.. code-block:: bash
curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type:application/json" -d '{
"type": "certificate",
"name": "certificate container",
"secret_refs": [
{
"name": "certificate",
"secret_ref": "http://localhost:9311/v1/secrets/f91b84ac-fb19-416b-87dc-e7e41b7f6039"
},
{
"name": "private_key",
"secret_ref": "http://localhost:9311/v1/secrets/feac9896-49e9-49e0-9484-1a6153c9498b"
},
{
"name": "private_key_passphrase",
"secret_ref": "http://localhost:9311/v1/secrets/f1106c5b-0347-4197-8947-d9e392bf74a3"
},
{
"name": "intermediates",
"secret_ref": "http://localhost:9311/v1/secrets/2e86c661-28e8-46f1-8e91-f1d95062695d"
}
]
}' http://localhost:9311/v1/containers
This should provide a response as follows:
.. code-block:: bash
{"container_ref": "http://localhost:9311/v1/containers/0fecaec4-7cd7-4e70-a760-cc7eaf5c3afb"}
This is our container reference. We will need this in order to retrieve the container.
Jump ahead to :ref:`How To Retrieve a Container <retrieve_container>` to make sure our
container was stored as expected.
.. _create_rsa_container:
Creating an RSA Container
*************************
To create an RSA container we must first have secrets to store. As we mentioned
in the :ref:`RSA Containers section <rsa_containers>`, you are required
to provide secrets named public_key, private_key, and private_key_passphrase.
.. code-block:: bash
curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type:application/json" -d '{
"type": "rsa",
"name": "rsa container",
"secret_refs": [
{
"name": "public_key",
"secret_ref": "http://localhost:9311/v1/secrets/f91b84ac-fb19-416b-87dc-e7e41b7f6039"
},
{
"name": "private_key",
"secret_ref": "http://localhost:9311/v1/secrets/feac9896-49e9-49e0-9484-1a6153c9498b"
},
{
"name": "private_key_passphrase",
"secret_ref": "http://localhost:9311/v1/secrets/f1106c5b-0347-4197-8947-d9e392bf74a3"
}
]
}' http://localhost:9311/v1/containers
This should provide a response as follows:
.. code-block:: bash
{"container_ref": "http://localhost:9311/v1/containers/0fecaec4-7cd7-4e70-a760-cc7eaf5c3afb"}
This is our container reference. We will need this in order to retrieve the container.
Jump ahead to :ref:`How To Retrieve a Container <retrieve_container>` to make sure our
container was stored as expected.
.. _retrieve_container:
How to Retrieve a Container
###########################
To retrieve a container we must have a container reference.
.. code-block:: bash
curl -X GET -H "X-Auth-Token: $TOKEN" http://localhost:9311/v1/containers/49d3c5e9-80bb-47ec-8787-968bb500d76e
This should provide a response as follows:
.. code-block:: bash
{
"status": "ACTIVE",
"updated": "2015-03-31T21:21:34.126042",
"name": "container name",
"consumers": [],
"created": "2015-03-31T21:21:34.126042",
"container_ref": "http://localhost:9311/v1/containers/49d3c5e9-80bb-47ec-8787-968bb500d76e",
"secret_refs": [
{
"secret_ref": "http://localhost:9311/v1/secrets/feac9896-49e9-49e0-9484-1a6153c9498b",
"name": "a secret"
}
],
"type": "generic"
}
This is the metadata as well as the list of secret references that are stored within the container.
.. _delete_container:
How to Delete a Container
#########################
To delete a container we must have a container reference.
.. code-block:: bash
curl -X DELETE -H "X-Auth-Token: $TOKEN" http://localhost:9311/v1/containers/d1c23e06-476b-4684-be9f-8afbef42768d
No response body will be provided. This is expected behavior! If you do
receive a response body, something went wrong and you will have to address
that before moving forward.
@@ -1,200 +0,0 @@
**************************
Dogtag Setup - User Guide
**************************
Dogtag is the open source upstream community version of the Red Hat Certificate
System (RHCS), an enterprise certificate management system that has been deployed
in some of the largest PKI deployments worldwide. RHCS is FIPS 140-2 and Common
Criteria certified.
The Dogtag Certificate Authority (CA) subsystem issues, renews and revokes many different
kinds of certificates. It can be used as a private CA back-end to barbican, and interacts
with barbican through the Dogtag CA plugin.
The Dogtag KRA subsystem is used to securely store secrets after being encrypted by
storage keys that are stored either in a software NSS database or in an HSM. It
can serve as a secret store for barbican, and interacts with barbican core through
the Dogtag KRA plugin.
In this guide, we will provide instructions on how to set up a basic Dogtag instance
containing a CA and a KRA, and how to configure barbican to use this instance for a
secret store and a certificate plugin. Much more detail about Dogtag, its deployment
options and its administration are available in the `RHCS documentation
<https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System>`_.
**Note:** The code below is taken from the devstack Barbican-Dogtag gate job. You can
extract this code by looking at the Dogtag functions in contrib/devstack/lib/barbican.
Installing the Dogtag Packages
******************************
Dogtag packages are available for Fedora/RHEL/CentOS and Ubuntu/Debian distributions.
This guide includes instructions applicable to Fedora/RHEL/CentOS.
If installing on a Fedora platform, use at least Fedora 21.
To install the required packages:
.. code-block:: bash
yum install -y pki-ca pki-kra 389-ds-base
Creating the Directory Server Instance for the Dogtag Internal DB
*****************************************************************
The Dogtag CA and KRA subsystems use a 389 directory server as an internal database.
Configure one as follows:
.. code-block:: bash
mkdir -p /etc/389-ds
cat > /etc/389-ds/setup.inf <<EOF
[General]
FullMachineName= localhost.localdomain
SuiteSpotUserID= nobody
SuiteSpotGroup= nobody
[slapd]
ServerPort= 389
ServerIdentifier= pki-tomcat
Suffix= dc=example,dc=com
RootDN= cn=Directory Manager
RootDNPwd= PASSWORD
EOF
setup-ds.pl --silent --file=/etc/389-ds/setup.inf
Creating the Dogtag CA
**********************
The following bash code sets up a Dogtag CA using some reasonable defaults to run in
an Apache Tomcat instance on ports 8373 and 8370. Detailed version-specific documentation
is packaged and installed with the Dogtag packages as Linux man pages. For more
details on how to customize a Dogtag instance, see the man pages for *pkispawn* or
consult the `RHCS documentation <https://access.redhat.com/documentation/en-US/Red_Hat_Certificate_System>`_.
.. code-block:: bash
mkdir -p /etc/dogtag
cat > /etc/dogtag/ca.cfg <<EOF
[CA]
pki_admin_email=caadmin@example.com
pki_admin_name=caadmin
pki_admin_nickname=caadmin
pki_admin_password=PASSWORD
pki_admin_uid=caadmin
pki_backup_password=PASSWORD
pki_client_database_password=PASSWORD
pki_client_database_purge=False
pki_client_pkcs12_password=PASSWORD
pki_clone_pkcs12_password=PASSWORD
pki_ds_base_dn=dc=ca,dc=example,dc=com
pki_ds_database=ca
pki_ds_password=PASSWORD
pki_security_domain_name=EXAMPLE
pki_token_password=PASSWORD
pki_https_port=8373
pki_http_port=8370
pki_ajp_port=8379
pki_tomcat_server_port=8375
EOF
pkispawn -v -f /etc/dogtag/ca.cfg -s CA
Creating the Dogtag KRA
***********************
The following bash code sets up the Dogtag KRA in the same Apache Tomcat instance
as above. In this simple example, it is required to set up the CA even if only
the KRA is being used for a secret store.
Note that the actual hostname of the machine should be used in the script (rather
than localhost) because the hostname is used in the subject name for the SSL
server certificate for the KRA.
.. code-block:: bash
mkdir -p /etc/dogtag
hostname=$(hostname)
cat > /etc/dogtag/kra.cfg <<EOF
[KRA]
pki_admin_cert_file=/root/.dogtag/pki-tomcat/ca_admin.cert
pki_admin_email=kraadmin@example.com
pki_admin_name=kraadmin
pki_admin_nickname=kraadmin
pki_admin_password=PASSWORD
pki_admin_uid=kraadmin
pki_backup_password=PASSWORD
pki_client_database_password=PASSWORD
pki_client_database_purge=False
pki_client_pkcs12_password=PASSWORD
pki_clone_pkcs12_password=PASSWORD
pki_ds_base_dn=dc=kra,dc=example,dc=com
pki_ds_database=kra
pki_ds_password=PASSWORD
pki_security_domain_name=EXAMPLE
pki_security_domain_user=caadmin
pki_security_domain_password=PASSWORD
pki_token_password=PASSWORD
pki_https_port=8373
pki_http_port=8370
pki_ajp_port=8379
pki_tomcat_server_port=8375
pki_security_domain_hostname=$hostname
pki_security_domain_https_port=8373
EOF
pkispawn -v -f /etc/dogtag/kra.cfg -s KRA
Configuring Barbican to Communicate with the Dogtag CA and KRA
**************************************************************
In order for barbican to interact with the Dogtag CA and KRA, a PEM file must be
created with trusted agent credentials.
.. code-block:: bash
PASSWORD=password
USER=barbican
BARBICAN_CONF_DIR=/etc/barbican
openssl pkcs12 -in /root/.dogtag/pki-tomcat/ca_admin_cert.p12 -passin pass:PASSWORD \
-out $BARBICAN_CONF_DIR/kra_admin_cert.pem -nodes
chown $USER $BARBICAN_CONF_DIR/kra_admin_cert.pem
The barbican config file (/etc/barbican/barbican.conf) needs to be modified.
The modifications below set the Dogtag plugins as the only enabled secret store and
certificate plugins. Be sure to restart barbican once these changes are made.
Note that the actual hostname of the machine should be used in the script (rather
than localhost) because the hostname is used in the subject name for the SSL
server certificate for the CA.
.. code-block:: ini
[dogtag_plugin]
pem_path = '/etc/barbican/kra_admin_cert.pem'
dogtag_host = $(hostname)
dogtag_port = 8373
nss_db_path = '/etc/barbican/alias'
nss_db_path_ca = '/etc/barbican/alias-ca'
nss_password = 'password'
simple_cmc_profile = 'caOtherCert'
approved_profile_list = 'caServerCert'
[secretstore]
namespace = barbican.secretstore.plugin
enabled_secretstore_plugins = dogtag_crypto
[certificate]
namespace = barbican.certificate.plugin
enabled_certificate_plugins = dogtag
Testing the Setup
*****************
TODO
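As a starting point, one smoke test is to ask the CA for its status; a sketch, assuming the pki_http_port value from the ca.cfg above and the standard Dogtag getStatus servlet path (adjust both for your deployment):

```python
from urllib import request

# pki_http_port from ca.cfg above; the getStatus path is assumed from
# standard Dogtag deployments, so adjust it for yours.
STATUS_URL = "http://localhost:8370/ca/admin/ca/getStatus"

def build_status_check(url=STATUS_URL):
    """Build the GET request used to confirm the CA answers."""
    return request.Request(url)

req = build_status_check()
# urllib.request.urlopen(req) should return an XML status document
# once the CA is up and running.
```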
@@ -1,17 +0,0 @@
Contents
========
.. toctree::
:maxdepth: 2
secrets
secret_metadata
containers
acls
pkcs11keygeneration
dogtag_setup
quotas
consumers
orders
@@ -1,167 +0,0 @@
***********************
Orders API - User Guide
***********************
The orders resource allows the user to request barbican to generate a secret.
This is also very helpful for requesting the creation of public/private key pairs.
The orders resource supports the following types:
* symmetric keys
* asymmetric keys
This user guide provides high level examples of the orders resource.
It assumes you will be using a local development environment of barbican.
If you need assistance with getting set up, please reference the
`development guide <http://docs.openstack.org/developer/barbican/setup/dev.html>`__.
.. _create_order:
Creating an Order
#################
When you want barbican to generate a secret you need to create an order.
For an order to be processed correctly, the mode,
bit_length, and algorithm parameters must be valid; otherwise the order will
fail and the secret will not be generated. The example below shows a valid
order for generating a symmetric key. You can find a more detailed explanation
of the parameters in the
`Orders API <http://docs.openstack.org/developer/barbican/api/reference/orders.html>`__
documentation.
.. code-block:: bash
curl -X POST -H "X-Auth-Token: $TOKEN" -H "content-type:application/json" -d '{
"type":"key", "meta": { "name": "secretname", "algorithm": "aes",
"bit_length": 256, "mode": "cbc", "payload_content_type": "application/octet-stream"}
}' http://localhost:9311/v1/orders
You should receive an order reference after placing your order with barbican.
.. code-block:: bash
{"order_ref": "http://localhost:9311/v1/orders/3a5c6748-44de-4c1c-9e54-085c3f79e942"}
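The meta values can be validated client-side before the order is submitted; a minimal sketch, where the AES key-length check is our own illustration rather than barbican's validation:

```python
import json

# AES accepts 128-, 192-, or 256-bit keys; this client-side check only
# catches an invalid bit_length before the order is submitted.
AES_BIT_LENGTHS = (128, 192, 256)

def key_order(name, algorithm="aes", bit_length=256, mode="cbc"):
    """Body for POST /v1/orders requesting a symmetric key."""
    if algorithm == "aes" and bit_length not in AES_BIT_LENGTHS:
        raise ValueError("invalid AES bit_length: %d" % bit_length)
    return json.dumps({
        "type": "key",
        "meta": {"name": name, "algorithm": algorithm,
                 "bit_length": bit_length, "mode": mode,
                 "payload_content_type": "application/octet-stream"},
    })

body = key_order("secretname")
```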
The order reference is used to retrieve the metadata for the order you placed
which can then be used to retrieve your secret.
.. _retrieve_order:
Retrieving an Order
###################
To retrieve an order we use the reference returned during
the initial creation. (See :ref:`Creating an Order <create_order>`.)
.. code-block:: bash
curl -H "X-Auth-Token: $TOKEN" -H 'Accept:application/json' \
http://localhost:9311/v1/orders/3a5c6748-44de-4c1c-9e54-085c3f79e942
The typical response is below:
.. code-block:: json
{
"created": "2015-10-15T18:15:10",
"creator_id": "40540f978fbd45c1af18910e3e02b63f",
"meta": {
"algorithm": "AES",
"bit_length": 256,
"expiration": null,
"mode": "cbc",
"name": "secretname",
"payload_content_type": "application/octet-stream"
},
"order_ref": "http://localhost:9311/v1/orders/3a5c6748-44de-4c1c-9e54-085c3f79e942",
"secret_ref": "http://localhost:9311/v1/secrets/bcd1b853-edeb-4509-9f12-019b8c8dfb5f",
"status": "ACTIVE",
"sub_status": "Unknown",
"sub_status_message": "Unknown",
"type": "key",
"updated": "2015-10-15T18:15:10"
}
This is the metadata associated with the order.
To retrieve the secret generated by the order, refer to the :doc:`Secrets User Guide <secrets>`.
The order metadata is very useful for determining if your order was processed
correctly. Since orders are processed asynchronously, you can use the metadata
returned for the order to verify a successful secret creation.
The parameters of the response are explained in more detail
`here <http://docs.openstack.org/developer/barbican/api/reference/orders.html#get-unique-order-response-attributes>`__.
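Since the order may still be processing when you first poll it, a small helper can decide when the generated secret is ready; a sketch based on the status and secret_ref fields shown above:

```python
def order_secret_ref(order):
    """Return the generated secret's ref once the order is done, else None.

    ``order`` is the decoded JSON of a GET on the order_ref, as shown
    above; an order that is still processing has no usable secret yet.
    """
    if order.get("status") == "ACTIVE" and order.get("secret_ref"):
        return order["secret_ref"]
    return None

done = {"status": "ACTIVE",
        "secret_ref": "http://localhost:9311/v1/secrets/bcd1b853-edeb-4509-9f12-019b8c8dfb5f"}
pending = {"status": "PENDING", "secret_ref": None}
```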
.. _retrieve_order_list:
Retrieving All Orders
#####################
It is also possible to retrieve all orders for a project.
.. code-block:: bash
curl -H "X-Auth-Token: $TOKEN" -H 'Accept:application/json' http://localhost:9311/v1/orders
.. code-block:: json
{
"orders": [
{
"created": "2015-10-15T18:15:10",
"creator_id": "40540f978fbd45c1af18910e3e02b63f",
"meta": {
"algorithm": "AES",
"bit_length": 256,
"expiration": null,
"mode": "cbc",
"name": "secretname",
"payload_content_type": "application/octet-stream"
},
"order_ref": "http://localhost:9311/v1/orders/3a5c6748-44de-4c1c-9e54-085c3f79e942",
"secret_ref": "http://localhost:9311/v1/secrets/bcd1b853-edeb-4509-9f12-019b8c8dfb5f",
"status": "ACTIVE",
"sub_status": "Unknown",
"sub_status_message": "Unknown",
"type": "key",
"updated": "2015-10-15T18:15:10"
},
{
"created": "2015-10-15T18:51:35",
"creator_id": "40540f978fbd45c1af18910e3e02b63f",
"meta": {
"algorithm": "AES",
"bit_length": 256,
"mode": "cbc",
"expiration": null,
"name": null
},
"order_ref": "http://localhost:9311/v1/orders/d99ced51-ea7a-4c14-8e11-0dda0f49c5be",
"secret_ref": "http://localhost:9311/v1/secrets/abadd306-8235-4f6b-984a-cc48ad039def",
"status": "ACTIVE",
"sub_status": "Unknown",
"sub_status_message": "Unknown",
"type": "key",
"updated": "2015-10-15T18:51:35"
}
],
"total": 2
}
You can refer to the
`orders parameters <http://docs.openstack.org/developer/barbican/api/reference/orders.html#get-order-parameters>`__
section of the
`Orders API <http://docs.openstack.org/developer/barbican/api/reference/orders.html>`__
documentation to refine your search among orders.
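Large order lists can be paged; a small helper for building such URLs (the ``limit`` and ``offset`` parameter names follow the Orders API documentation linked above):

```python
from urllib.parse import urlencode

def orders_url(base="http://localhost:9311", limit=None, offset=None):
    # Build a GET /v1/orders URL, appending paging parameters when given.
    params = [(k, v) for k, v in (("limit", limit), ("offset", offset))
              if v is not None]
    query = urlencode(params)
    return base + "/v1/orders" + ("?" + query if query else "")
```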
.. _delete_order:
Deleting an Order
#################
It is also possible to delete an order from barbican.
.. code-block:: bash
curl -X DELETE -H "X-Auth-Token: $TOKEN" -H 'Accept:application/json' http://localhost:9311/v1/orders/fbdd845f-4a5e-43e3-8f68-64e8f106c486
A successful delete returns no content. If a response body is returned,
an error most likely occurred while deleting the order.
@@ -1,89 +0,0 @@
***********************************
PKCS11 Key Generation - User Guide
***********************************
The key generation script was written with the deployer in mind. It allows the
deployer to create an MKEK and an HMAC signing key for their HSM setup. This
script is intended for initial setup or for key rotation scenarios.
Setup
#####
Initially, the deployer will need to examine the settings in their
`barbican.conf` file under the "Crypto plugin" settings section. Set these
values to whichever defaults you need. This will be used for both the script
and your usage of barbican.
The following items are required to use the PKCS11 plugin:
* Library Path
* Login Passphrase (Password to HSM)
* Slot ID (on HSM)
The following will need to be provided to generate the HMAC and MKEK:
* MKEK Label
* MKEK Length
* HMAC Label
Usage
#####
Viewing the help page gives an overview of the script's structure and will
reflect any changes to its options.
.. code-block:: bash
$ pkcs11-key-generation --help
usage: pkcs11-key-generation [-h] [--library-path LIBRARY_PATH]
[--passphrase PASSPHRASE] [--slot-id SLOT_ID]
{mkek,hmac} ...
Barbican MKEK & HMAC Generator
optional arguments:
-h, --help show this help message and exit
--library-path LIBRARY_PATH
Path to vendor PKCS11 library
--passphrase PASSPHRASE
Password to login to PKCS11 session
--slot-id SLOT_ID HSM Slot id (Should correspond to a configured PKCS11
slot)
subcommands:
Action to perform
{mkek,hmac}
mkek Generates a new MKEK.
hmac Generates a new HMAC.
**Note:** The password can be passed in as an option; if the flag is omitted,
the user is prompted for the password when the command is submitted.
Generating an MKEK
******************
To generate an MKEK, the user must provide a length and a label for the MKEK.
.. code-block:: bash
$ pkcs11-key-generation --library-path {library_path here} \
--passphrase {HSM password here} --slot-id {HSM slot here} mkek --length 32 \
--label 'MKEKLabelHere'
MKEK successfully generated!
Generating an HMAC
******************
To generate an HMAC, the user must provide a label for the HMAC.
.. code-block:: bash
$ pkcs11-key-generation --library-path {library_path here} \
--passphrase {HSM password here} --slot-id {HSM slot here} hmac \
--label 'HMACLabelHere'
HMAC successfully generated!
@@ -1,240 +0,0 @@
************************
Quotas API - User Guide
************************
Running with default settings, the barbican REST API doesn't impose an upper
limit on the number of resources that are allowed to be created. barbican's
backend depends on limited resources. These limited resources include database,
plugin, and Hardware Security Module (HSM) storage space. This
can be an issue in a multi-project or multi-user environment when one project
can exhaust available resources, impacting other projects.
The answer to this, on a per-project basis, is project quotas.
This user guide will show you how a user can look up their current effective
quotas and how a service admin can create, update, read, and delete project quota
configuration for all projects in the cloud.
This guide assumes you are using a locally running development environment of barbican.
If you need assistance with getting set up, please reference the
`development guide <http://docs.openstack.org/developer/barbican/setup/dev.html>`__.
.. _user_project_quotas_overview:
Project Quotas Overview
#######################
All users authenticated with barbican are able to read the effective quota values
that apply to their project. Barbican can derive the project that a user belongs
to by reading the project scope from the authentication token.
Service administrators can read, set, and delete quota configurations for each
project known to barbican. The service administrator is recognized by their
authenticated role. The service administrator's role is defined in barbican's policy.json file.
The default role for a service admin is "key-manager:service-admin".
Quotas can be enforced for the following barbican resources: secrets, containers,
orders, consumers, and CAs. The configured quota value can be None (use the default),
-1 (unlimited), 0 (disabled), or a positive integer defining the maximum number
allowed for a project.
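The value semantics above can be captured in a small helper. This is an illustrative model, not barbican's actual enforcement code:

```python
def quota_allows(configured, default, current_count):
    # Per the guide: None -> fall back to the default, -1 -> unlimited,
    # 0 -> disabled, and a positive integer -> maximum allowed per project.
    effective = default if configured is None else configured
    if effective == -1:
        return True
    return current_count < effective
```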
.. _default_project_quotas:
Default Quotas
--------------
When no project quotas have been set for a project, the default
project quotas are enforced for that project. Default quotas are specified
in the barbican configuration file (barbican.conf). The defaults provided
in the standard configuration file are as follows.
.. code-block:: none
# default number of secrets allowed per project
quota_secrets = -1
# default number of orders allowed per project
quota_orders = -1
# default number of containers allowed per project
quota_containers = -1
# default number of consumers allowed per project
quota_consumers = -1
# default number of CAs allowed per project
quota_cas = -1
The default quotas are returned via a **GET** on the **quotas** resource when no
explicit project quotas have been set for the current project.
.. _user_get_quotas:
How to Read Effective Quotas
############################
The current effective quotas for a project can be read via a **GET** to the **quotas** resource.
Barbican determines the current project ID from the scope of the authentication token sent
with the request.
.. code-block:: bash
Request:
curl -i -X GET -H "X-Auth-Token:$TOKEN" \
-H "Accept:application/json" \
http://localhost:9311/v1/quotas
Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
{"quotas":
{"secrets": -1,
"orders": -1,
"containers": -1,
"consumers": -1,
"cas": -1
}
}
To get more details on the quota lookup API, you can reference the
`Get Quotas <http://docs.openstack.org/developer/barbican/api/reference/quotas.html#get-quotas-request>`__
documentation.
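Once decoded, the response is easy to inspect. For example, to keep only the resources with a finite limit (-1 entries are unlimited, as described above; the quota values below are hypothetical):

```python
def limited_resources(quotas):
    # Drop unlimited (-1) entries, keeping only finite limits.
    return {name: limit for name, limit in quotas.items() if limit != -1}

response = {"quotas": {"secrets": -1, "orders": 100, "containers": -1,
                       "consumers": 100, "cas": 50}}
finite = limited_resources(response["quotas"])
```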
.. _user_put_project_quotas:
How to Set or Replace Project Quotas
####################################
The quotas for a project can be modified via a **PUT** to the **project-quotas** resource.
This request completely replaces existing quota settings for a project. The project
ID is passed in the URI of the request.
To set or replace the quotas for the project with the ID 1234:
.. code-block:: bash
Request:
curl -i -X PUT -H "content-type:application/json" \
-H "X-Auth-Token:$TOKEN" \
-d '{"project_quotas": {"secrets": 500,
"orders": 100, "containers": -1, "consumers": 100,
"cas": 50}}' \
http://localhost:9311/v1/project-quotas/1234
Response:
HTTP/1.1 204 No Content
To get more details on the project quota setting API you can reference the
`Set Project Quotas <http://docs.openstack.org/developer/barbican/api/reference/quotas.html#put-project-quotas>`__
documentation.
.. _user_get_project_quotas:
How to Retrieve Configured Project Quotas
#########################################
The project quota information defined for a project can be retrieved by using
a **GET** operation on the respective **project-quotas** resource. The project
ID is passed in the URI of the request. The returned response contains project
quota data.
To get project quota information for a single project:
.. code-block:: bash
Request:
curl -i -X GET -H "X-Auth-Token:$TOKEN" \
-H "Accept:application/json" \
http://localhost:9311/v1/project-quotas/1234
Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
{"project_quotas":
{"secrets": 500,
"orders": 100,
"containers": -1,
"consumers": 100,
"cas": 50}}
The project quota information defined for all projects can be retrieved by using
a **GET** operation on the **project-quotas** resource.
The returned response contains a list with all project quota data.
.. code-block:: bash
Request:
curl -i -X GET -H "X-Auth-Token:$TOKEN" \
-H "Accept:application/json" \
http://localhost:9311/v1/project-quotas
Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
{"project_quotas":
[{"project_id": "1234",
"project_quotas":
{"secrets": 500,
"orders": 100,
"containers": -1,
"consumers": 100,
"cas": 50}},
{"project_id": "5678",
"project_quotas":
{"secrets": 500,
"orders": 100,
"containers": -1,
"consumers": 100,
"cas": 50}}]}
To get more details on project quota lookup APIs you can reference
the
`Get Project Quota <http://docs.openstack.org/developer/barbican/api/reference/quotas.html#get-project-quotas-uuid>`__
and
`Get Project Quota List <http://docs.openstack.org/developer/barbican/api/reference/quotas.html#get-project-quotas>`__
documentation.
.. _user_delete_project_quotas:
How to Delete Configured Project Quotas
#######################################
Quotas defined for a project can be deleted by using the **DELETE** operation
on the respective **project-quotas** resource. Once the quota configuration
for a project is deleted, the default quotas will then apply to that project.
There is no response content returned on successful deletion.
.. code-block:: bash
Request:
curl -i -X DELETE -H "X-Auth-Token:$TOKEN" \
http://localhost:9311/v1/project-quotas/1234
Response:
HTTP/1.1 204 No Content
To get more details on project quota delete APIs, you can reference the
`Delete Project Quotas <http://docs.openstack.org/developer/barbican/api/reference/quotas.html#delete-project-quotas>`__
documentation.
@@ -1,150 +0,0 @@
********************************
Secret Metadata API - User Guide
********************************
The Secret Metadata resource is an additional resource associated with Secrets.
It allows a user to associate various key/value pairs with a Secret.
.. _create_secret_metadata:
How to Create/Update Secret Metadata
####################################
To create/update the secret metadata for a specific secret, we will need to know
the secret reference of the secret we wish to add user metadata to. Any metadata
that was previously set will be deleted and replaced with this metadata.
For more information on creating/updating secret metadata, you can view the
`PUT /v1/secrets/{uuid}/metadata <http://docs.openstack.org/developer/barbican/api/reference/secret_metadata.html#put-secret-metadata>`__
section.
.. code-block:: bash
curl -X PUT -H "content-type:application/json" -H "X-Auth-Token: $TOKEN" \
-d '{ "metadata": {
"description": "contains the AES key",
"geolocation": "12.3456, -98.7654"
}
}' \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata
This should provide a response as follows:
.. code-block:: bash
{"metadata_ref": "http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata"}
.. _retrieve_secret_metadata:
How to Retrieve Secret Metadata
###############################
To retrieve all of the metadata for a secret, we will need to
know the secret reference of the secret whose user metadata we wish to see.
If there is no metadata for a particular secret, then an empty metadata object
will be returned.
.. code-block:: bash
curl -H "X-Auth-Token: $TOKEN" \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata/
This should provide a response as follows:
.. code-block:: bash
{
"metadata": {
"description": "contains the AES key",
"geolocation": "12.3456, -98.7654"
}
}
.. _create_secret_metadatum:
How to Create Individual Secret Metadata
########################################
To create the secret metadata for a single key/value pair, we will need to know
the secret reference. This will create a new key/value pair. In order to update
an already existing key, please see the update section below.
.. code-block:: bash
curl -X POST -H "content-type:application/json" -H "X-Auth-Token: $TOKEN" \
-d '{ "key": "access-limit", "value": "11" }' \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata
This should provide a response as follows:
.. code-block:: bash
Secret Metadata Location: http://example.com:9311/v1/secrets/{uuid}/metadata/access-limit
{
"key": "access-limit",
"value": 11
}
.. _update_secret_metadatum:
How to Update an Individual Secret Metadata
###########################################
To update the secret metadata for a single key/value pair, we will need to know
the secret reference as well as the name of the key.
.. code-block:: bash
curl -X PUT -H "content-type:application/json" -H "X-Auth-Token: $TOKEN" \
-d '{ "key": "access-limit", "value": "0" }' \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata/access-limit
This should provide a response as follows:
.. code-block:: bash
{
"key": "access-limit",
"value": 0
}
.. _retrieve_secret_metadatum:
How to Retrieve an Individual Secret Metadata
#############################################
To retrieve the secret metadata for a specific key/value pair, we will need to
know the secret reference as well as the name of the metadata key.
.. code-block:: bash
curl -H "X-Auth-Token: $TOKEN" \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata/access-limit
This should provide a response as follows:
.. code-block:: bash
{
"key": "access-limit",
"value": 0
}
.. _remove_secret_metadatum:
How to Delete an Individual Secret Metadata
###########################################
To delete a single secret metadata key/value, we will need to know the secret
reference as well as the name of the metadata key to delete. In order to
delete all metadata for a secret, please see the create/update section at the
top of this page.
.. code-block:: bash
curl -X DELETE -H "X-Auth-Token: $TOKEN" \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/metadata/access-limit
No response will be provided. This is expected behavior! If you do receive a
response, something went wrong and you will have to address that before
moving forward.
@@ -1,152 +0,0 @@
*************************
Secrets API - User Guide
*************************
The secrets resource is the heart of the barbican service. It provides access
to the secret / keying material stored in the system.
Barbican supports the secure storage of data for various content types.
This guide assumes you are using a locally running development environment of barbican.
If you need assistance with getting set up, please reference the
`development guide <http://docs.openstack.org/developer/barbican/setup/dev.html>`__.
What is a Secret?
#################
A secret is a singular item that is stored within barbican. A secret is
anything you want it to be; however, the formal use case is a key that you wish
to store away from prying eyes.
Some examples of a secret may include:
* Private Key
* Certificate
* Password
* SSH Keys
For the purpose of this user guide, we will use a simple plaintext
secret. If you would like to learn more in detail about
`secret parameters <http://docs.openstack.org/developer/barbican/api/reference/secrets.html#secret-parameters>`__,
`responses <http://docs.openstack.org/developer/barbican/api/reference/secrets.html#secret_response_attributes>`__,
and `status codes <http://docs.openstack.org/developer/barbican/api/reference/secrets.html#secret_status_codes>`__
you can reference the
`secret reference <http://docs.openstack.org/developer/barbican/api/reference/secrets.html>`__
documentation.
.. _create_secret:
How to Create a Secret
######################
Single Step Secret Creation
***************************
The first secret we will create is a single step secret. Using a single step,
barbican expects the user to provide the payload to be stored within the secret
itself. Once the secret has been created with a payload, it cannot be updated. In
this example we will provide a plain text secret. For more information on creating
secrets you can view the
`POST /v1/secrets <http://docs.openstack.org/developer/barbican/api/reference/secrets.html#post-secrets>`__
section.
.. code-block:: bash
curl -X POST -H "content-type:application/json" -H "X-Auth-Token: $TOKEN" \
-d '{"payload": "my-secret-here", "payload_content_type": "text/plain"}' \
http://localhost:9311/v1/secrets
This should provide a response as follows:
.. code-block:: bash
{"secret_ref": "http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79"}
This is our secret reference. We will need this in order to retrieve the secret in the following steps.
Jump ahead to :ref:`How to Retrieve a Secret <retrieve_secret>` to make sure our secret is
stored as expected.
.. _two_step_secret_create:
Two Step Secret Creation
************************
The second secret we will create is a two-step secret. A two-step secret will
allow the user to create a secret reference initially, but upload the secret
data after the fact. In this example we will not provide a payload.
.. code-block:: bash
curl -X POST -H "content-type:application/json" -H "X-Auth-Token: $TOKEN" \
-d '{}' http://localhost:9311/v1/secrets
This should provide a response as follows:
.. code-block:: bash
{"secret_ref": "http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79"}
Now that we have a secret reference available, we can update the secret data.
.. _update_secret:
How to Update a Secret
######################
To update the secret data we will need to know the secret reference provided
via the initial creation. (See :ref:`Two Step Secret Creation <two_step_secret_create>`
for more information.) In the example below, the secret ref is used from the
previous example. You will have to substitute the uuid after /secrets/ with
your own in order to update the secret.
.. code-block:: bash
curl -X PUT -H "content-type:text/plain" -H "X-Auth-Token: $TOKEN" \
-d 'my-secret-here' \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79
No response will be provided. This is expected behavior! If you do receive a
response, something went wrong and you will have to address that before
moving forward. (For more information visit
`PUT /v1/secrets/{uuid} <http://docs.openstack.org/developer/barbican/api/reference/secrets.html#put-secrets>`__
.)
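The guide above tells you to substitute the UUID from your own secret_ref; extracting it can be scripted (the helper name is ours, not part of barbican):

```python
def secret_uuid(secret_ref):
    # Return the trailing UUID component of a secret_ref URL.
    return secret_ref.rstrip("/").rsplit("/", 1)[-1]

ref = "http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79"
```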
.. _retrieve_secret:
How to Retrieve a Secret
########################
To retrieve the secret we have created, we will need to know the secret reference
provided via the initial creation (See :ref:`How to Create a Secret <create_secret>`.)
.. code-block:: bash
curl -H "Accept: text/plain" -H "X-Auth-Token: $TOKEN" \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79/payload
This should provide a response as follows:
.. code-block:: bash
my-secret-here
This is the plain text data we provided upon initial creation of the secret.
How to Delete a Secret
######################
To delete a secret we will need to know the secret reference provided via
the initial creation (See :ref:`How to Create a Secret <create_secret>`.)
.. code-block:: bash
curl -X DELETE -H "X-Auth-Token: $TOKEN" \
http://localhost:9311/v1/secrets/2a549393-0710-444b-8aa5-84cf0f85ea79
No response will be provided. This is expected behavior! If you do receive a
response, something went wrong and you will have to address that before
moving forward. (For more information visit
`DELETE /v1/secrets/{uuid} <http://docs.openstack.org/developer/barbican/api/reference/secrets.html#delete-secrets>`__
.)
@@ -1,312 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# barbican documentation build configuration file, created by
# sphinx-quickstart on Sat May 7 13:35:27 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
html_theme = 'openstackdocs'
html_theme_options = {
"sidebar_mode": "toc",
}
extensions = [
'os_api_ref',
'openstackdocstheme'
]
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('./'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Key Manager API Reference'
copyright = u'OpenStack Foundation'
repository_name = 'openstack/barbican'
bug_project = 'barbican'
bug_tag = 'api-ref'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
from barbican.version import version_info
# The full version, including alpha/beta/rc tags.
release = version_info.release_string()
# The short X.Y version.
version = version_info.version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
# html_title = u'Shared File Systems API Reference v2'
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or
# 32x32 pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' user can custom change `jieba` dictionary path.
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'barbicandoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
# Latex figure (float) alignment
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'barbican.tex',
u'OpenStack Key Manager API Documentation',
u'OpenStack Foundation', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'barbican', u'OpenStack Key Manager API Documentation',
u'OpenStack Foundation', 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'Barbican', u'OpenStack Key Manager API Documentation',
u'OpenStack Foundation', 'Barbican', 'OpenStack Key Manager',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
@@ -1,16 +0,0 @@
==================================
OpenStack Key Manager Service APIs
==================================
.. rest_expand_all::
.. include:: acls.inc
.. include:: cas.inc
.. include:: consumers.inc
.. include:: containers.inc
.. include:: orders.inc
.. include:: quotas.inc
.. include:: secret_metadata.inc
.. include:: secrets.inc
.. include:: secretstores.inc
.. include:: transportkeys.inc
@@ -1,53 +0,0 @@
HOST: https://dfw.barbican.api.rackspacecloud.com/v1/
--- Barbican API v1 ---
---
Barbican is a REST-based key management service. It is built with
[OpenStack](http://www.openstack.org/) in mind, but can be used outside
an OpenStack implementation.
More information can be found on [GitHub](https://github.com/cloudkeep/barbican).
---
--
Secrets Resource
The following is a description of the resources dealing with generic secrets.
These can be encryption keys or anything else a user wants to store in a secure,
auditable manner.
--
Allows a user to list all secrets in a tenant. Note: the actual secret
is not returned here; a user must make a separate call to get the
secret details in order to view the secret.
GET /secrets
< 200
< Content-Type: application/json
{
    "name": "AES key",
    "algorithm": "AES",
    "cypher_type": "CDC",
    "bit_length": 256,
    "content_types": {
        "default": "text/plain"
    },
    "expiration": "2013-05-08T16:21:38.134160",
    "id": "2eb5a8d8-2202-4f46-b64d-89e26eb25487",
    "mime_type": "text/plain"
}
Allows a user to create a new secret. This call expects the user to
provide a secret. To have the API generate a secret, see the provisioning
API.
POST /secrets
> Content-Type: application/json
{ "name": "AES key", "algorithm": "AES", "cypher_type": "CDC", "bit_length": 256, "mime_type": "text/plain" }
< 201
< Content-Type: application/json
{ "status": "created", "url": "/secrets/2eb5a8d8-2202-4f46-b64d-89e26eb25487" }

View File

@@ -1 +0,0 @@
[python: **.py]



@@ -1,147 +0,0 @@
# Copyright (c) 2013-2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
API handler for Barbican
"""
import pkgutil
from oslo_policy import policy
from oslo_serialization import jsonutils as json
import pecan
from barbican.common import config
from barbican.common import exception
from barbican.common import utils
from barbican import i18n as u
LOG = utils.getLogger(__name__)
CONF = config.CONF
class ApiResource(object):
"""Base class for API resources."""
pass
def load_body(req, resp=None, validator=None):
"""Helper function for loading an HTTP request body from JSON.
This body is placed into a Python dictionary.
:param req: The HTTP request instance to load the body from.
:param resp: The HTTP response instance.
:param validator: The JSON validator to enforce.
:return: A dict of values from the JSON request.
"""
try:
body = req.body_file.read(CONF.max_allowed_request_size_in_bytes)
req.body_file.seek(0)
except IOError:
LOG.exception("Problem reading request JSON stream.")
pecan.abort(500, u._('Read Error'))
try:
# TODO(jwood): Investigate how to get UTF8 format via openstack
# jsonutils:
# parsed_body = json.loads(raw_json, 'utf-8')
parsed_body = json.loads(body)
strip_whitespace(parsed_body)
except ValueError:
LOG.exception("Problem loading request JSON.")
pecan.abort(400, u._('Malformed JSON'))
if validator:
try:
parsed_body = validator.validate(parsed_body)
except exception.BarbicanHTTPException as e:
LOG.exception(e.message)
pecan.abort(e.status_code, e.client_message)
return parsed_body
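The bounded-read-then-parse flow of `load_body()` can be sketched standalone; this is a hedged approximation in which `MAX_BYTES` stands in for the configured `CONF.max_allowed_request_size_in_bytes` and a plain `ValueError` stands in for the pecan abort:

```python
import io
import json

# Stand-in for CONF.max_allowed_request_size_in_bytes (an assumption).
MAX_BYTES = 1_000_000

def load_json(stream, max_bytes=MAX_BYTES):
    raw = stream.read(max_bytes)  # cap the read to bound memory use
    stream.seek(0)                # rewind for any later readers
    try:
        return json.loads(raw)
    except ValueError:
        # The real handler aborts with HTTP 400 'Malformed JSON' here.
        raise ValueError('Malformed JSON')

body = io.BytesIO(b'{"name": "AES key"}')
print(load_json(body))  # {'name': 'AES key'}
```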
def generate_safe_exception_message(operation_name, excep):
"""Generates an exception message that is 'safe' for clients to consume.
A 'safe' message is one that doesn't contain sensitive information that
could be used for (say) cryptographic attacks on Barbican. That
generally means that exceptions such as em.CryptoXxxx should be caught
here, with a simple message created on their behalf.
:param operation_name: Name of attempted operation, with a 'Verb noun'
format (e.g. 'Create Secret').
:param excep: The Exception instance that halted the operation.
:return: (status, message) where 'status' is one of the webob.exc.HTTP_xxx
codes, and 'message' is the sanitized message
associated with the error.
"""
message = None
reason = None
status = 500
try:
raise excep
except policy.PolicyNotAuthorized:
message = u._(
'{operation} attempt not allowed - '
'please review your '
'user/project privileges').format(operation=operation_name)
status = 403
except exception.BarbicanHTTPException as http_exception:
reason = http_exception.client_message
status = http_exception.status_code
except Exception:
message = u._('{operation} failure seen - please contact site '
'administrator.').format(operation=operation_name)
if reason:
message = u._('{operation} issue seen - {reason}.').format(
operation=operation_name, reason=reason)
return status, message
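The raise-and-catch dispatch above can be illustrated with stand-in exception classes; this is a sketch, and `NotAuthorized`/`HTTPException` here are hypothetical substitutes for `policy.PolicyNotAuthorized` and `exception.BarbicanHTTPException`:

```python
# Stand-in exception types (assumptions, not Barbican's real classes).
class NotAuthorized(Exception):
    pass

class HTTPException(Exception):
    status_code = 400
    client_message = 'bad input'

def safe_message(operation, excep):
    """Map an exception to a (status, sanitized message) pair."""
    status, message = 500, None
    try:
        raise excep  # re-raise so except clauses do the type dispatch
    except NotAuthorized:
        status = 403
        message = '%s attempt not allowed' % operation
    except HTTPException as e:
        status = e.status_code
        message = '%s issue seen - %s.' % (operation, e.client_message)
    except Exception:
        message = '%s failure seen.' % operation
    return status, message

print(safe_message('Create Secret', NotAuthorized()))
# (403, 'Create Secret attempt not allowed')
```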
@pkgutil.simplegeneric
def get_items(obj):
"""This is used to get items from either a list or a dictionary.
The while-False generator below yields nothing, so scalar objects
are treated as leaves of the recursion.
"""
while False:
yield None
@get_items.register(dict)
def _json_object(obj):
return obj.items()
@get_items.register(list)
def _json_array(obj):
return enumerate(obj)
def strip_whitespace(json_data):
"""Recursively trim values from the object passed in using get_items()."""
for key, value in get_items(json_data):
if hasattr(value, 'strip'):
json_data[key] = value.strip()
else:
strip_whitespace(value)
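Since `pkgutil.simplegeneric` was removed in Python 3, the same dispatch pattern can be sketched with `functools.singledispatch`; this is an approximation of the code above, not what Barbican shipped:

```python
import functools

@functools.singledispatch
def get_items(obj):
    # Scalars yield nothing, so the recursion stops at leaf values.
    return iter(())

@get_items.register(dict)
def _json_object(obj):
    return obj.items()

@get_items.register(list)
def _json_array(obj):
    return enumerate(obj)

def strip_whitespace(json_data):
    """Recursively trim string values in place."""
    for key, value in get_items(json_data):
        if hasattr(value, 'strip'):
            json_data[key] = value.strip()
        else:
            strip_whitespace(value)

data = {'name': '  AES key ', 'tags': [' kek ', {'note': ' x '}]}
strip_whitespace(data)
print(data)  # {'name': 'AES key', 'tags': ['kek', {'note': 'x'}]}
```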


@@ -1,110 +0,0 @@
# Copyright (c) 2013-2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
API application handler for Barbican
"""
import os
from paste import deploy
import pecan
try:
import newrelic.agent
newrelic_loaded = True
except ImportError:
newrelic_loaded = False
from oslo_log import log
from barbican.api.controllers import versions
from barbican.api import hooks
from barbican.common import config
from barbican.model import repositories
from barbican import queue
CONF = config.CONF
if newrelic_loaded:
newrelic.agent.initialize(
os.environ.get('NEW_RELIC_CONFIG_FILE', '/etc/newrelic/newrelic.ini'),
os.environ.get('NEW_RELIC_ENVIRONMENT')
)
def build_wsgi_app(controller=None, transactional=False):
"""WSGI application creation helper
:param controller: Overrides default application controller
:param transactional: Adds transaction hook for all requests
"""
request_hooks = [hooks.JSONErrorHook()]
if transactional:
request_hooks.append(hooks.BarbicanTransactionHook())
if newrelic_loaded:
request_hooks.insert(0, hooks.NewRelicHook())
# Create WSGI app
wsgi_app = pecan.Pecan(
controller or versions.AVAILABLE_VERSIONS[versions.DEFAULT_VERSION](),
hooks=request_hooks,
force_canonical=False
)
# clear the session created in controller initialization
repositories.clear()
return wsgi_app
def main_app(func):
def _wrapper(global_config, **local_conf):
# Queuing initialization
queue.init(CONF, is_server_side=False)
# Configure oslo logging and configuration services.
log.setup(CONF, 'barbican')
config.setup_remote_pydev_debug()
# Initializing the database engine and session factory before the app
# starts ensures we don't lose requests due to lazy initialization of
# db connections.
repositories.setup_database_engine_and_factory()
wsgi_app = func(global_config, **local_conf)
if newrelic_loaded:
wsgi_app = newrelic.agent.WSGIApplicationWrapper(wsgi_app)
LOG = log.getLogger(__name__)
LOG.info('Barbican app created and initialized')
return wsgi_app
return _wrapper
@main_app
def create_main_app(global_config, **local_conf):
"""uWSGI factory method for the Barbican-API application."""
# Setup app with transactional hook enabled
return build_wsgi_app(versions.V1Controller(), transactional=True)
def create_version_app(global_config, **local_conf):
wsgi_app = pecan.make_app(versions.VersionsController())
return wsgi_app
def get_api_wsgi_script():
conf = '/etc/barbican/barbican-api-paste.ini'
application = deploy.loadapp('config:%s' % conf)
return application


@@ -1,26 +0,0 @@
# -*- mode: python -*-
#
# Copyright 2016 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Use this file for deploying the API under mod_wsgi.
See http://pecan.readthedocs.org/en/latest/deployment.html for details.
NOTE(mtreinish): This wsgi script is deprecated since the wsgi app is now
exposed as an entrypoint via barbican-wsgi-api
"""
from barbican.api import app
application = app.get_api_wsgi_script()


@@ -1,223 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
from oslo_policy import policy
import pecan
from webob import exc
from barbican import api
from barbican.common import utils
from barbican import i18n as u
LOG = utils.getLogger(__name__)
def is_json_request_accept(req):
"""Test if the HTTP request 'accept' header is configured for a JSON response.
:param req: HTTP request
:return: True if need to return JSON response.
"""
return (not req.accept
or req.accept.header_value == 'application/json'
or req.accept.header_value == '*/*')
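The Accept-header test reduces to a small predicate; a sketch with plain strings in place of the webob accept object:

```python
# JSON is assumed when the Accept header is missing, explicitly
# 'application/json', or the wildcard '*/*'.
def is_json_accept(accept_header):
    return accept_header in (None, '', 'application/json', '*/*')

print(is_json_accept('*/*'))                       # True
print(is_json_accept('application/octet-stream'))  # False
```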
def _get_barbican_context(req):
if 'barbican.context' in req.environ:
return req.environ['barbican.context']
else:
return None
def _do_enforce_rbac(inst, req, action_name, ctx, **kwargs):
"""Enforce RBAC based on 'request' information."""
if action_name and ctx:
# Prepare credentials information.
credentials = {
'roles': ctx.roles,
'user': ctx.user,
'project': ctx.project
}
# Enforce special case: secret GET decryption
if 'secret:get' == action_name and not is_json_request_accept(req):
action_name = 'secret:decrypt' # Override to perform special rules
target_name, target_data = inst.get_acl_tuple(req, **kwargs)
policy_dict = {}
if target_name and target_data:
policy_dict['target'] = {target_name: target_data}
policy_dict.update(kwargs)
# Enforce access controls.
if ctx.policy_enforcer:
ctx.policy_enforcer.enforce(action_name, flatten(policy_dict),
credentials, do_raise=True)
def enforce_rbac(action_name='default'):
"""Decorator handling RBAC enforcement on behalf of REST verb methods."""
def rbac_decorator(fn):
def enforcer(inst, *args, **kwargs):
# Enforce RBAC rules.
# context placed here by context.py
# middleware
ctx = _get_barbican_context(pecan.request)
external_project_id = None
if ctx:
external_project_id = ctx.project
_do_enforce_rbac(inst, pecan.request, action_name, ctx, **kwargs)
# insert external_project_id as the first arg to the guarded method
args = list(args)
args.insert(0, external_project_id)
# Execute guarded method now.
return fn(inst, *args, **kwargs)
return enforcer
return rbac_decorator
def handle_exceptions(operation_name=u._('System')):
"""Decorator handling generic exceptions from REST methods."""
def exceptions_decorator(fn):
def handler(inst, *args, **kwargs):
try:
return fn(inst, *args, **kwargs)
except exc.HTTPError:
LOG.exception('Webob error seen')
raise # Already converted to Webob exception, just reraise
# In case PolicyNotAuthorized, we do not want to expose payload by
# logging exception, so just LOG.error
except policy.PolicyNotAuthorized as pna:
status, message = api.generate_safe_exception_message(
operation_name, pna)
LOG.error(message)
pecan.abort(status, message)
except Exception as e:
# In case intervening modules have disabled logging.
LOG.logger.disabled = False
status, message = api.generate_safe_exception_message(
operation_name, e)
LOG.exception(message)
pecan.abort(status, message)
return handler
return exceptions_decorator
def _do_enforce_content_types(pecan_req, valid_content_types):
"""Content type enforcement
Check to see that content type in the request is one of the valid
types passed in by our caller.
"""
if pecan_req.content_type not in valid_content_types:
m = u._(
"Unexpected content type. Expected content types "
"are: {expected}"
).format(
expected=valid_content_types
)
pecan.abort(415, m)
def enforce_content_types(valid_content_types=[]):
"""Decorator handling content type enforcement on behalf of REST verbs."""
def content_types_decorator(fn):
def content_types_enforcer(inst, *args, **kwargs):
_do_enforce_content_types(pecan.request, valid_content_types)
return fn(inst, *args, **kwargs)
return content_types_enforcer
return content_types_decorator
def flatten(d, parent_key=''):
"""Flatten a nested dictionary
Converts a dictionary with nested values to a single level flat
dictionary, with dotted notation for each key.
"""
items = []
for k, v in d.items():
new_key = parent_key + '.' + k if parent_key else k
if isinstance(v, collections.MutableMapping):
items.extend(flatten(v, new_key).items())
else:
items.append((new_key, v))
return dict(items)
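The flattening described above can be exercised on its own; in this sketch the `collections.MutableMapping` check is simplified to plain `dict`:

```python
def flatten(d, parent_key=''):
    """Collapse nested dicts into a single level with dotted keys."""
    items = []
    for k, v in d.items():
        new_key = parent_key + '.' + k if parent_key else k
        if isinstance(v, dict):
            items.extend(flatten(v, new_key).items())
        else:
            items.append((new_key, v))
    return dict(items)

target = {'target': {'secret': {'creator_id': 'u1'}}, 'secret_id': 's1'}
print(flatten(target))
# {'target.secret.creator_id': 'u1', 'secret_id': 's1'}
```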
class ACLMixin(object):
def get_acl_tuple(self, req, **kwargs):
return None, None
def get_acl_dict_for_user(self, req, acl_list):
"""Get acl operation found for token user in acl list.
The token user is looked up in the users list present for each ACL
operation. If there is a match, the ACL data is applicable to policy
logic. Policy logic requires the data as a dictionary, so this method
captures the ACL's operation and project_access data in that format.
For the operation value, the matching ACL record's operation is stored
in the dict as both key and value.
The project_access flag is intended to make a secret/container private
for a given operation. It doesn't require a user match, so it's
captured in dict format, where the key is prefixed with the related
operation and the flag is used as its value.
Then, for ACL-related policy logic, this ACL dict data is combined
with the target entity's (secret or container) creator_id and project
id. The whole dict serves as the target in the policy enforcement
logic, i.e. the right-hand side of the policy rule.
The following is a sample outcome where the secret or container has an
ACL defined and the token user is among the ACL users defined for the
'read' and 'list' operations.
{'read': 'read', 'list': 'list', 'read_project_access': True,
'list_project_access': True }
It's possible that ACLs are defined without any users; they just have
the project_access flag set. This means only the creator can read or
list ACL entities. In that case, the dictionary output can be as
follows.
{'read_project_access': False, 'list_project_access': False }
"""
ctxt = _get_barbican_context(req)
if not ctxt:
return {}
acl_dict = {acl.operation: acl.operation for acl in acl_list
if ctxt.user in acl.to_dict_fields().get('users', [])}
co_dict = {'%s_project_access' % acl.operation: acl.project_access for
acl in acl_list if acl.project_access is not None}
acl_dict.update(co_dict)
return acl_dict
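The docstring's sample outcomes can be reproduced with stand-in ACL records; this is a hedged sketch in which `Acl` is a hypothetical tuple, not the real model class:

```python
from collections import namedtuple

# Hypothetical stand-in for an ACL model record.
Acl = namedtuple('Acl', ['operation', 'project_access', 'users'])

def acl_dict_for_user(user, acl_list):
    # Operation stored as both key and value when the user matches.
    acl_dict = {a.operation: a.operation
                for a in acl_list if user in a.users}
    # project_access flags keyed as '<operation>_project_access'.
    acl_dict.update({'%s_project_access' % a.operation: a.project_access
                     for a in acl_list if a.project_access is not None})
    return acl_dict

acls = [Acl('read', True, ['alice']), Acl('list', True, ['alice', 'bob'])]
print(acl_dict_for_user('alice', acls))
# {'read': 'read', 'list': 'list',
#  'read_project_access': True, 'list_project_access': True}
```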


@@ -1,374 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
import six
from barbican import api
from barbican.api import controllers
from barbican.common import hrefs
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
LOG = utils.getLogger(__name__)
def _convert_acl_to_response_format(acl, acls_dict):
fields = acl.to_dict_fields()
operation = fields['operation']
acl_data = {} # dict for each acl operation data
acl_data['project-access'] = fields['project_access']
acl_data['users'] = fields.get('users', [])
acl_data['created'] = fields['created']
acl_data['updated'] = fields['updated']
acls_dict[operation] = acl_data
DEFAULT_ACL = {'read': {'project-access': True}}
class SecretACLsController(controllers.ACLMixin):
"""Handles SecretACL requests by a given secret id."""
def __init__(self, secret):
self.secret = secret
self.secret_project_id = self.secret.project.external_id
self.acl_repo = repo.get_secret_acl_repository()
self.validator = validators.ACLValidator()
def get_acl_tuple(self, req, **kwargs):
d = {'project_id': self.secret_project_id,
'creator_id': self.secret.creator_id}
return 'secret', d
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('SecretACL(s) retrieval'))
@controllers.enforce_rbac('secret_acls:get')
def on_get(self, external_project_id, **kw):
LOG.debug('Start secret ACL on_get '
'for secret-ID %s:', self.secret.id)
return self._return_acl_list_response(self.secret.id)
@index.when(method='PATCH', template='json')
@controllers.handle_exceptions(u._('SecretACL(s) Update'))
@controllers.enforce_rbac('secret_acls:put_patch')
@controllers.enforce_content_types(['application/json'])
def on_patch(self, external_project_id, **kwargs):
"""Handles update of existing secret acl requests.
At least one secret ACL needs to exist for update to proceed.
In an update, ACL payloads for multiple operations can be specified,
as in the sample below. A specific ACL can be updated by its own id
via a SecretACLController patch request.
{
"read":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a",
"20b63d71f90848cf827ee48074f213b7",
"c7753f8da8dc4fbea75730ab0b6e0ef4"
]
},
"write":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a"
],
"project-access":true
}
}
"""
data = api.load_body(pecan.request, validator=self.validator)
LOG.debug('Start on_patch...%s', data)
existing_acls_map = {acl.operation: acl for acl in
self.secret.secret_acls}
for operation in six.moves.filter(lambda x: data.get(x),
validators.ACL_OPERATIONS):
project_access = data[operation].get('project-access')
user_ids = data[operation].get('users')
s_acl = None
if operation in existing_acls_map: # update if matching acl exists
s_acl = existing_acls_map[operation]
if project_access is not None:
s_acl.project_access = project_access
else:
s_acl = models.SecretACL(self.secret.id, operation=operation,
project_access=project_access)
self.acl_repo.create_or_replace_from(self.secret, secret_acl=s_acl,
user_ids=user_ids)
acl_ref = '{0}/acl'.format(
hrefs.convert_secret_to_href(self.secret.id))
return {'acl_ref': acl_ref}
@index.when(method='PUT', template='json')
@controllers.handle_exceptions(u._('SecretACL(s) Update'))
@controllers.enforce_rbac('secret_acls:put_patch')
@controllers.enforce_content_types(['application/json'])
def on_put(self, external_project_id, **kwargs):
"""Handles update of existing secret acl requests.
Replaces existing secret ACL(s) with the input ACL(s) data. Existing
ACL operations not specified in the input are removed as part of the
update. If project-access is missing from an ACL, true is used as the
default.
In an update, ACL payloads for multiple operations can be specified,
as in the sample below. A specific ACL can be updated by its own id
via a SecretACLController patch request.
{
"read":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a",
"20b63d71f90848cf827ee48074f213b7",
"c7753f8da8dc4fbea75730ab0b6e0ef4"
]
},
"write":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a"
],
"project-access":false
}
}
Every secret, by default, has an implicit ACL in case the client has
not defined an explicit one. That default ACL definition, DEFAULT_ACL,
signifies that a secret by default has project-based access, i.e. a
client with the necessary roles on the secret's project can access the
secret. That's why adding an ACL to a secret always returns 200 (and
not 201), indicating the existence of an implicit ACL on the secret.
"""
data = api.load_body(pecan.request, validator=self.validator)
LOG.debug('Start on_put...%s', data)
existing_acls_map = {acl.operation: acl for acl in
self.secret.secret_acls}
for operation in six.moves.filter(lambda x: data.get(x),
validators.ACL_OPERATIONS):
project_access = data[operation].get('project-access', True)
user_ids = data[operation].get('users', [])
s_acl = None
if operation in existing_acls_map: # update if matching acl exists
s_acl = existing_acls_map.pop(operation)
s_acl.project_access = project_access
else:
s_acl = models.SecretACL(self.secret.id, operation=operation,
project_access=project_access)
self.acl_repo.create_or_replace_from(self.secret, secret_acl=s_acl,
user_ids=user_ids)
# delete remaining existing acls as they are not present in input.
for acl in existing_acls_map.values():
self.acl_repo.delete_entity_by_id(entity_id=acl.id,
external_project_id=None)
acl_ref = '{0}/acl'.format(
hrefs.convert_secret_to_href(self.secret.id))
return {'acl_ref': acl_ref}
@index.when(method='DELETE', template='json')
@controllers.handle_exceptions(u._('SecretACL(s) deletion'))
@controllers.enforce_rbac('secret_acls:delete')
def on_delete(self, external_project_id, **kwargs):
count = self.acl_repo.get_count(self.secret.id)
if count > 0:
self.acl_repo.delete_acls_for_secret(self.secret)
def _return_acl_list_response(self, secret_id):
result = self.acl_repo.get_by_secret_id(secret_id)
acls_data = {}
if result:
for acl in result:
_convert_acl_to_response_format(acl, acls_data)
if not acls_data:
acls_data = DEFAULT_ACL.copy()
return acls_data
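The PUT-replaces / PATCH-merges distinction above comes down to what happens to operations missing from the input; this is a minimal sketch of the PUT path with plain dicts standing in for the repository calls:

```python
# Sketch of PUT replace semantics: operations present in the request
# are created or updated; operations absent from it are deleted.
def replace_acls(existing, incoming):
    remaining = dict(existing)    # operation -> stored ACL record
    result = {}
    for op, acl in incoming.items():
        remaining.pop(op, None)   # updated in place in the real code
        result[op] = acl
    deleted = sorted(remaining)   # leftovers get removed
    return result, deleted

result, deleted = replace_acls(
    {'read': 'old-read', 'write': 'old-write'},
    {'read': {'project-access': True}})
print(deleted)  # ['write']
```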
class ContainerACLsController(controllers.ACLMixin):
"""Handles ContainerACL requests by a given container id."""
def __init__(self, container):
self.container = container
self.container_id = container.id
self.acl_repo = repo.get_container_acl_repository()
self.container_repo = repo.get_container_repository()
self.validator = validators.ACLValidator()
self.container_project_id = container.project.external_id
def get_acl_tuple(self, req, **kwargs):
d = {'project_id': self.container_project_id,
'creator_id': self.container.creator_id}
return 'container', d
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('ContainerACL(s) retrieval'))
@controllers.enforce_rbac('container_acls:get')
def on_get(self, external_project_id, **kw):
LOG.debug('Start container ACL on_get '
'for container-ID %s:', self.container_id)
return self._return_acl_list_response(self.container.id)
@index.when(method='PATCH', template='json')
@controllers.handle_exceptions(u._('ContainerACL(s) Update'))
@controllers.enforce_rbac('container_acls:put_patch')
@controllers.enforce_content_types(['application/json'])
def on_patch(self, external_project_id, **kwargs):
"""Handles update of existing container acl requests.
At least one container ACL needs to exist for update to proceed.
In an update, ACL payloads for multiple operations can be specified,
as in the sample below. A specific ACL can be updated by its own id
via a ContainerACLController patch request.
{
"read":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a",
"20b63d71f90848cf827ee48074f213b7",
"c7753f8da8dc4fbea75730ab0b6e0ef4"
]
},
"write":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a"
],
"project-access":false
}
}
"""
data = api.load_body(pecan.request, validator=self.validator)
LOG.debug('Start ContainerACLsController on_patch...%s', data)
existing_acls_map = {acl.operation: acl for acl in
self.container.container_acls}
for operation in six.moves.filter(lambda x: data.get(x),
validators.ACL_OPERATIONS):
project_access = data[operation].get('project-access')
user_ids = data[operation].get('users')
if operation in existing_acls_map: # update if matching acl exists
c_acl = existing_acls_map[operation]
if project_access is not None:
c_acl.project_access = project_access
else:
c_acl = models.ContainerACL(self.container.id,
operation=operation,
project_access=project_access)
self.acl_repo.create_or_replace_from(self.container,
container_acl=c_acl,
user_ids=user_ids)
acl_ref = '{0}/acl'.format(
hrefs.convert_container_to_href(self.container.id))
return {'acl_ref': acl_ref}
@index.when(method='PUT', template='json')
@controllers.handle_exceptions(u._('ContainerACL(s) Update'))
@controllers.enforce_rbac('container_acls:put_patch')
@controllers.enforce_content_types(['application/json'])
def on_put(self, external_project_id, **kwargs):
"""Handles update of existing container acl requests.
Replaces existing container ACL(s) with the input ACL(s) data. Existing
ACL operations not specified in the input are removed as part of the
update. If project-access is missing from an ACL, true is used as the
default.
In an update, ACL payloads for multiple operations can be specified,
as in the sample below. A specific ACL can be updated by its own id
via a ContainerACLController patch request.
{
"read":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a",
"20b63d71f90848cf827ee48074f213b7",
"c7753f8da8dc4fbea75730ab0b6e0ef4"
]
},
"write":{
"users":[
"5ecb18f341894e94baca9e8c7b6a824a"
],
"project-access":false
}
}
Every container, by default, has an implicit ACL in case the client
has not defined an explicit one. That default ACL definition,
DEFAULT_ACL, signifies that a container by default has project-based
access, i.e. a client with the necessary roles on the container's
project can access the container. That's why adding an ACL to a
container always returns 200 (and not 201), indicating the existence
of an implicit ACL on the container.
"""
data = api.load_body(pecan.request, validator=self.validator)
LOG.debug('Start ContainerACLsController on_put...%s', data)
existing_acls_map = {acl.operation: acl for acl in
self.container.container_acls}
for operation in six.moves.filter(lambda x: data.get(x),
validators.ACL_OPERATIONS):
project_access = data[operation].get('project-access', True)
user_ids = data[operation].get('users', [])
if operation in existing_acls_map: # update if matching acl exists
c_acl = existing_acls_map.pop(operation)
c_acl.project_access = project_access
else:
c_acl = models.ContainerACL(self.container.id,
operation=operation,
project_access=project_access)
self.acl_repo.create_or_replace_from(self.container,
container_acl=c_acl,
user_ids=user_ids)
# delete remaining existing acls as they are not present in input.
for acl in existing_acls_map.values():
self.acl_repo.delete_entity_by_id(entity_id=acl.id,
external_project_id=None)
acl_ref = '{0}/acl'.format(
hrefs.convert_container_to_href(self.container.id))
return {'acl_ref': acl_ref}
@index.when(method='DELETE', template='json')
@controllers.handle_exceptions(u._('ContainerACL(s) deletion'))
@controllers.enforce_rbac('container_acls:delete')
def on_delete(self, external_project_id, **kwargs):
count = self.acl_repo.get_count(self.container_id)
if count > 0:
self.acl_repo.delete_acls_for_container(self.container)
def _return_acl_list_response(self, container_id):
result = self.acl_repo.get_by_container_id(container_id)
acls_data = {}
if result:
for acl in result:
_convert_acl_to_response_format(acl, acls_data)
if not acls_data:
acls_data = DEFAULT_ACL.copy()
return acls_data


@@ -1,499 +0,0 @@
# Copyright (c) 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import versionutils
import pecan
from six.moves.urllib import parse
from barbican import api
from barbican.api import controllers
from barbican.common import exception as excep
from barbican.common import hrefs
from barbican.common import quota
from barbican.common import resources as res
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
from barbican.tasks import certificate_resources as cert_resources
LOG = utils.getLogger(__name__)
_DEPRECATION_MSG = '%s has been deprecated in the Newton release. ' \
'It will be removed in the Pike release.'
def _certificate_authority_not_found():
"""Throw exception indicating certificate authority not found."""
pecan.abort(404, u._('Not Found. CA not found.'))
def _certificate_authority_attribute_not_found():
"""Throw exception indicating CA attribute was not found."""
pecan.abort(404, u._('Not Found. CA attribute not found.'))
def _ca_not_in_project():
"""Throw exception certificate authority is not in project."""
pecan.abort(404, u._('Not Found. CA not in project.'))
def _requested_preferred_ca_not_a_project_ca():
"""Throw exception indicating that preferred CA is not a project CA."""
pecan.abort(
400,
u._('Cannot set CA as a preferred CA as it is not a project CA.')
)
def _cant_remove_preferred_ca_from_project():
pecan.abort(
409,
u._('Please change the preferred CA to a different project CA '
'before removing it.')
)
class CertificateAuthorityController(controllers.ACLMixin):
"""Handles certificate authority retrieval requests."""
def __init__(self, ca):
LOG.debug('=== Creating CertificateAuthorityController ===')
msg = _DEPRECATION_MSG % "Certificate Authorities API"
versionutils.report_deprecated_feature(LOG, msg)
self.ca = ca
self.ca_repo = repo.get_ca_repository()
self.project_ca_repo = repo.get_project_ca_repository()
self.preferred_ca_repo = repo.get_preferred_ca_repository()
self.project_repo = repo.get_project_repository()
def __getattr__(self, name):
route_table = {
'add-to-project': self.add_to_project,
'remove-from-project': self.remove_from_project,
'set-global-preferred': self.set_global_preferred,
'set-preferred': self.set_preferred,
}
if name in route_table:
return route_table[name]
raise AttributeError
@pecan.expose()
def _lookup(self, attribute, *remainder):
_certificate_authority_attribute_not_found()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Certificate Authority retrieval'))
@controllers.enforce_rbac('certificate_authority:get')
def on_get(self, external_project_id):
LOG.debug("== Getting certificate authority for %s", self.ca.id)
return self.ca.to_dict_fields()
@pecan.expose()
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('CA Signing Cert retrieval'))
@controllers.enforce_rbac('certificate_authority:get_cacert')
def cacert(self, external_project_id):
LOG.debug("== Getting signing cert for %s", self.ca.id)
cacert = self.ca.ca_meta['ca_signing_certificate'].value
return cacert
@pecan.expose()
@controllers.handle_exceptions(u._('CA Cert Chain retrieval'))
@controllers.enforce_rbac('certificate_authority:get_ca_cert_chain')
def intermediates(self, external_project_id):
LOG.debug("== Getting CA Cert Chain for %s", self.ca.id)
cert_chain = self.ca.ca_meta['intermediates'].value
return cert_chain
@pecan.expose(template='json')
@controllers.handle_exceptions(u._('CA projects retrieval'))
@controllers.enforce_rbac('certificate_authority:get_projects')
def projects(self, external_project_id):
LOG.debug("== Getting Projects for %s", self.ca.id)
project_cas = self.ca.project_cas
if not project_cas:
ca_projects_resp = {'projects': []}
else:
project_list = []
for p in project_cas:
project_list.append(p.project.external_id)
ca_projects_resp = {'projects': project_list}
return ca_projects_resp
@pecan.expose()
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Add CA to project'))
@controllers.enforce_rbac('certificate_authority:add_to_project')
def add_to_project(self, external_project_id):
if pecan.request.method != 'POST':
pecan.abort(405)
LOG.debug("== Saving CA %s to external_project_id %s",
self.ca.id, external_project_id)
project_model = res.get_or_create_project(external_project_id)
# CA must be a base CA or a subCA owned by this project
if (self.ca.project_id is not None and
self.ca.project_id != project_model.id):
raise excep.UnauthorizedSubCA()
project_cas = project_model.cas
num_cas = len(project_cas)
for project_ca in project_cas:
if project_ca.ca_id == self.ca.id:
# project already added
return
project_ca = models.ProjectCertificateAuthority(
project_model.id, self.ca.id)
self.project_ca_repo.create_from(project_ca)
if num_cas == 0:
# set first project CA to be the preferred one
preferred_ca = models.PreferredCertificateAuthority(
project_model.id, self.ca.id)
self.preferred_ca_repo.create_from(preferred_ca)
@pecan.expose()
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Remove CA from project'))
@controllers.enforce_rbac('certificate_authority:remove_from_project')
def remove_from_project(self, external_project_id):
if pecan.request.method != 'POST':
pecan.abort(405)
LOG.debug("== Removing CA %s from project_external_id %s",
self.ca.id, external_project_id)
project_model = res.get_or_create_project(external_project_id)
(project_ca, __offset, __limit, __total) = (
self.project_ca_repo.get_by_create_date(
project_id=project_model.id,
ca_id=self.ca.id,
suppress_exception=True))
if project_ca:
self._do_remove_from_project(project_ca[0])
else:
_ca_not_in_project()
def _do_remove_from_project(self, project_ca):
project_id = project_ca.project_id
ca_id = project_ca.ca_id
preferred_ca = self.preferred_ca_repo.get_project_entities(
project_id)[0]
if cert_resources.is_last_project_ca(project_id):
self.preferred_ca_repo.delete_entity_by_id(preferred_ca.id, None)
else:
self._assert_is_not_preferred_ca(preferred_ca.ca_id, ca_id)
self.project_ca_repo.delete_entity_by_id(project_ca.id, None)
def _assert_is_not_preferred_ca(self, preferred_ca_id, ca_id):
if preferred_ca_id == ca_id:
_cant_remove_preferred_ca_from_project()
@pecan.expose()
@controllers.handle_exceptions(u._('Set preferred project CA'))
@controllers.enforce_rbac('certificate_authority:set_preferred')
def set_preferred(self, external_project_id):
if pecan.request.method != 'POST':
pecan.abort(405)
LOG.debug("== Setting preferred CA %s for project %s",
self.ca.id, external_project_id)
project_model = res.get_or_create_project(external_project_id)
(project_ca, __offset, __limit, __total) = (
self.project_ca_repo.get_by_create_date(
project_id=project_model.id,
ca_id=self.ca.id,
suppress_exception=True))
if not project_ca:
_requested_preferred_ca_not_a_project_ca()
self.preferred_ca_repo.create_or_update_by_project_id(
project_model.id, self.ca.id)
@pecan.expose()
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Set global preferred CA'))
@controllers.enforce_rbac('certificate_authority:set_global_preferred')
def set_global_preferred(self, external_project_id):
if pecan.request.method != 'POST':
pecan.abort(405)
LOG.debug("== Set global preferred CA %s", self.ca.id)
project = res.get_or_create_global_preferred_project()
self.preferred_ca_repo.create_or_update_by_project_id(
project.id, self.ca.id)
@index.when(method='DELETE')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('CA deletion'))
@controllers.enforce_rbac('certificate_authority:delete')
def on_delete(self, external_project_id, **kwargs):
cert_resources.delete_subordinate_ca(external_project_id, self.ca)
LOG.info('Deleted CA for project: %s', external_project_id)
class CertificateAuthoritiesController(controllers.ACLMixin):
"""Handles certificate authority list requests."""
def __init__(self):
LOG.debug('Creating CertificateAuthoritiesController')
msg = _DEPRECATION_MSG % "Certificate Authorities API"
versionutils.report_deprecated_feature(LOG, msg)
self.ca_repo = repo.get_ca_repository()
self.project_ca_repo = repo.get_project_ca_repository()
self.preferred_ca_repo = repo.get_preferred_ca_repository()
self.project_repo = repo.get_project_repository()
self.validator = validators.NewCAValidator()
self.quota_enforcer = quota.QuotaEnforcer('cas', self.ca_repo)
# Populate the CA table at start up
cert_resources.refresh_certificate_resources()
def __getattr__(self, name):
route_table = {
'all': self.get_all,
'global-preferred': self.get_global_preferred,
'preferred': self.preferred,
'unset-global-preferred': self.unset_global_preferred,
}
if name in route_table:
return route_table[name]
        raise AttributeError(name)
@pecan.expose()
def _lookup(self, ca_id, *remainder):
ca = self.ca_repo.get(entity_id=ca_id, suppress_exception=True)
if not ca:
_certificate_authority_not_found()
return CertificateAuthorityController(ca), remainder
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(
u._('Certificate Authorities retrieval (limited)'))
@controllers.enforce_rbac('certificate_authorities:get_limited')
def on_get(self, external_project_id, **kw):
LOG.debug('Start certificate_authorities on_get (limited)')
plugin_name = kw.get('plugin_name')
if plugin_name is not None:
plugin_name = parse.unquote_plus(plugin_name)
plugin_ca_id = kw.get('plugin_ca_id', None)
if plugin_ca_id is not None:
plugin_ca_id = parse.unquote_plus(plugin_ca_id)
# refresh CA table, in case plugin entries have expired
cert_resources.refresh_certificate_resources()
project_model = res.get_or_create_project(external_project_id)
if self._project_cas_defined(project_model.id):
cas, offset, limit, total = self._get_subcas_and_project_cas(
offset=kw.get('offset', 0),
limit=kw.get('limit', None),
plugin_name=plugin_name,
plugin_ca_id=plugin_ca_id,
project_id=project_model.id)
else:
cas, offset, limit, total = self._get_subcas_and_root_cas(
offset=kw.get('offset', 0),
limit=kw.get('limit', None),
plugin_name=plugin_name,
plugin_ca_id=plugin_ca_id,
project_id=project_model.id)
return self._display_cas(cas, offset, limit, total)
@pecan.expose(generic=True, template='json')
@controllers.handle_exceptions(u._('Certificate Authorities retrieval'))
@controllers.enforce_rbac('certificate_authorities:get')
def get_all(self, external_project_id, **kw):
LOG.debug('Start certificate_authorities on_get')
plugin_name = kw.get('plugin_name')
if plugin_name is not None:
plugin_name = parse.unquote_plus(plugin_name)
plugin_ca_id = kw.get('plugin_ca_id', None)
if plugin_ca_id is not None:
plugin_ca_id = parse.unquote_plus(plugin_ca_id)
# refresh CA table, in case plugin entries have expired
cert_resources.refresh_certificate_resources()
project_model = res.get_or_create_project(external_project_id)
cas, offset, limit, total = self._get_subcas_and_root_cas(
offset=kw.get('offset', 0),
limit=kw.get('limit', None),
plugin_name=plugin_name,
plugin_ca_id=plugin_ca_id,
project_id=project_model.id)
return self._display_cas(cas, offset, limit, total)
def _get_project_cas(self, project_id, query_filters):
cas, offset, limit, total = self.project_ca_repo.get_by_create_date(
offset_arg=query_filters.get('offset', 0),
limit_arg=query_filters.get('limit', None),
project_id=project_id,
suppress_exception=True
)
return cas, offset, limit, total
def _project_cas_defined(self, project_id):
_cas, _offset, _limit, total = self._get_project_cas(project_id, {})
return total > 0
def _get_subcas_and_project_cas(self, offset, limit, plugin_name,
plugin_ca_id, project_id):
return self.ca_repo.get_by_create_date(
offset_arg=offset,
limit_arg=limit,
plugin_name=plugin_name,
plugin_ca_id=plugin_ca_id,
project_id=project_id,
restrict_to_project_cas=True,
suppress_exception=True)
def _get_subcas_and_root_cas(self, offset, limit, plugin_name,
plugin_ca_id, project_id):
return self.ca_repo.get_by_create_date(
offset_arg=offset,
limit_arg=limit,
plugin_name=plugin_name,
plugin_ca_id=plugin_ca_id,
project_id=project_id,
restrict_to_project_cas=False,
suppress_exception=True)
def _display_cas(self, cas, offset, limit, total):
if not cas:
cas_resp_overall = {'cas': [],
'total': total}
else:
cas_resp = [
hrefs.convert_certificate_authority_to_href(ca.id)
for ca in cas]
cas_resp_overall = hrefs.add_nav_hrefs('cas', offset, limit, total,
{'cas': cas_resp})
cas_resp_overall.update({'total': total})
return cas_resp_overall
@pecan.expose(generic=True, template='json')
@controllers.handle_exceptions(u._('Retrieve global preferred CA'))
@controllers.enforce_rbac(
'certificate_authorities:get_global_preferred_ca')
def get_global_preferred(self, external_project_id, **kw):
LOG.debug('Start certificate_authorities get_global_preferred CA')
pref_ca = cert_resources.get_global_preferred_ca()
if not pref_ca:
pecan.abort(404, u._("No global preferred CA defined"))
return {
'ca_ref':
hrefs.convert_certificate_authority_to_href(pref_ca.ca_id)
}
@pecan.expose()
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Unset global preferred CA'))
@controllers.enforce_rbac('certificate_authorities:unset_global_preferred')
def unset_global_preferred(self, external_project_id):
if pecan.request.method != 'POST':
pecan.abort(405)
LOG.debug("== Unsetting global preferred CA")
self._remove_global_preferred_ca(external_project_id)
def _remove_global_preferred_ca(self, external_project_id):
global_preferred_ca = cert_resources.get_global_preferred_ca()
if global_preferred_ca:
self.preferred_ca_repo.delete_entity_by_id(
global_preferred_ca.id,
external_project_id)
@pecan.expose(generic=True, template='json')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Retrieve project preferred CA'))
@controllers.enforce_rbac('certificate_authorities:get_preferred_ca')
def preferred(self, external_project_id, **kw):
LOG.debug('Start certificate_authorities get'
' project preferred CA')
project = res.get_or_create_project(external_project_id)
pref_ca_id = cert_resources.get_project_preferred_ca_id(project.id)
if not pref_ca_id:
pecan.abort(404, u._("No preferred CA defined for this project"))
return {
'ca_ref':
hrefs.convert_certificate_authority_to_href(pref_ca_id)
}
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('CA creation'))
@controllers.enforce_rbac('certificate_authorities:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
LOG.debug('Start on_post for project-ID %s:...',
external_project_id)
data = api.load_body(pecan.request, validator=self.validator)
project = res.get_or_create_project(external_project_id)
        ctxt = controllers._get_barbican_context(pecan.request)
        creator_id = None
        if ctxt:  # in authenticated pipeline case, always use auth token user
            creator_id = ctxt.user
self.quota_enforcer.enforce(project)
new_ca = cert_resources.create_subordinate_ca(
project_model=project,
name=data.get('name'),
description=data.get('description'),
subject_dn=data.get('subject_dn'),
parent_ca_ref=data.get('parent_ca_ref'),
creator_id=creator_id
)
url = hrefs.convert_certificate_authority_to_href(new_ca.id)
LOG.debug('URI to sub-CA is %s', url)
pecan.response.status = 201
pecan.response.headers['Location'] = url
LOG.info('Created a sub CA for project: %s',
external_project_id)
return {'ca_ref': url}
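The `__getattr__` route tables in the two controllers above exist because routes such as `add-to-project` contain hyphens and therefore cannot be ordinary method names; Pecan's attribute-based lookup falls through to `__getattr__`, which maps the hyphenated segment onto the real handler. A minimal, self-contained sketch of that dispatch pattern (class and method names here are illustrative, not Barbican's):

```python
# Sketch of the hyphenated-route dispatch used by the CA controllers:
# __getattr__ is only invoked when normal attribute lookup fails, which
# is exactly the case for names like 'add-to-project'.
class RouteDispatcher:
    def add_to_project(self):
        return 'added'

    def set_preferred(self):
        return 'preferred'

    def __getattr__(self, name):
        route_table = {
            'add-to-project': self.add_to_project,
            'set-preferred': self.set_preferred,
        }
        if name in route_table:
            return route_table[name]
        raise AttributeError(name)


d = RouteDispatcher()
print(getattr(d, 'add-to-project')())  # dispatches the hyphenated route
```

Note that `self.add_to_project` inside `__getattr__` resolves through normal class lookup, so there is no recursion; an unknown route raises `AttributeError`, which Pecan translates into a routing failure.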


@@ -1,225 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from barbican import api
from barbican.api import controllers
from barbican.common import exception
from barbican.common import hrefs
from barbican.common import quota
from barbican.common import resources as res
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
LOG = utils.getLogger(__name__)
def _consumer_not_found():
"""Throw exception indicating consumer not found."""
pecan.abort(404, u._('Not Found. Sorry but your consumer is in '
'another castle.'))
def _consumer_ownership_mismatch():
"""Throw exception indicating the user does not own this consumer."""
pecan.abort(403, u._('Not Allowed. Sorry, only the creator of a consumer '
'can delete it.'))
def _invalid_consumer_id():
"""Throw exception indicating consumer id is invalid."""
pecan.abort(404, u._('Not Found. Provided consumer id is invalid.'))
class ContainerConsumerController(controllers.ACLMixin):
"""Handles Consumer entity retrieval and deletion requests."""
def __init__(self, consumer_id):
self.consumer_id = consumer_id
self.consumer_repo = repo.get_container_consumer_repository()
self.validator = validators.ContainerConsumerValidator()
@pecan.expose(generic=True)
def index(self):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('ContainerConsumer retrieval'))
@controllers.enforce_rbac('consumer:get')
def on_get(self, external_project_id):
consumer = self.consumer_repo.get(
entity_id=self.consumer_id,
suppress_exception=True)
if not consumer:
_consumer_not_found()
dict_fields = consumer.to_dict_fields()
LOG.info('Retrieved a consumer for project: %s',
external_project_id)
return hrefs.convert_to_hrefs(
hrefs.convert_to_hrefs(dict_fields)
)
class ContainerConsumersController(controllers.ACLMixin):
"""Handles Consumer creation requests."""
def __init__(self, container_id):
self.container_id = container_id
self.consumer_repo = repo.get_container_consumer_repository()
self.container_repo = repo.get_container_repository()
self.project_repo = repo.get_project_repository()
self.validator = validators.ContainerConsumerValidator()
self.quota_enforcer = quota.QuotaEnforcer('consumers',
self.consumer_repo)
@pecan.expose()
def _lookup(self, consumer_id, *remainder):
if not utils.validate_id_is_uuid(consumer_id):
            _invalid_consumer_id()
return ContainerConsumerController(consumer_id), remainder
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
    @controllers.handle_exceptions(u._('ContainerConsumer(s) retrieval'))
@controllers.enforce_rbac('consumers:get')
def on_get(self, external_project_id, **kw):
LOG.debug('Start consumers on_get '
'for container-ID %s:', self.container_id)
result = self.consumer_repo.get_by_container_id(
self.container_id,
offset_arg=kw.get('offset', 0),
limit_arg=kw.get('limit'),
suppress_exception=True
)
consumers, offset, limit, total = result
if not consumers:
resp_ctrs_overall = {'consumers': [], 'total': total}
else:
resp_ctrs = [
hrefs.convert_to_hrefs(c.to_dict_fields())
for c in consumers
]
consumer_path = "containers/{container_id}/consumers".format(
container_id=self.container_id)
resp_ctrs_overall = hrefs.add_nav_hrefs(
consumer_path,
offset,
limit,
total,
{'consumers': resp_ctrs}
)
resp_ctrs_overall.update({'total': total})
LOG.info('Retrieved a consumer list for project: %s',
external_project_id)
return resp_ctrs_overall
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('ContainerConsumer creation'))
@controllers.enforce_rbac('consumers:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
project = res.get_or_create_project(external_project_id)
data = api.load_body(pecan.request, validator=self.validator)
LOG.debug('Start on_post...%s', data)
container = self._get_container(self.container_id)
self.quota_enforcer.enforce(project)
new_consumer = models.ContainerConsumerMetadatum(self.container_id,
project.id,
data)
self.consumer_repo.create_or_update_from(new_consumer, container)
url = hrefs.convert_consumer_to_href(new_consumer.container_id)
pecan.response.headers['Location'] = url
LOG.info('Created a consumer for project: %s',
external_project_id)
return self._return_container_data(self.container_id)
@index.when(method='DELETE', template='json')
@controllers.handle_exceptions(u._('ContainerConsumer deletion'))
@controllers.enforce_rbac('consumers:delete')
@controllers.enforce_content_types(['application/json'])
def on_delete(self, external_project_id, **kwargs):
data = api.load_body(pecan.request, validator=self.validator)
LOG.debug('Start on_delete...%s', data)
project = self.project_repo.find_by_external_project_id(
external_project_id, suppress_exception=True)
if not project:
_consumer_not_found()
consumer = self.consumer_repo.get_by_values(
self.container_id,
data["name"],
data["URL"],
suppress_exception=True
)
if not consumer:
_consumer_not_found()
LOG.debug("Found consumer: %s", consumer)
container = self._get_container(self.container_id)
owner_of_consumer = consumer.project_id == project.id
owner_of_container = container.project.external_id \
== external_project_id
if not owner_of_consumer and not owner_of_container:
_consumer_ownership_mismatch()
try:
self.consumer_repo.delete_entity_by_id(consumer.id,
external_project_id)
except exception.NotFound:
LOG.exception('Problem deleting consumer')
_consumer_not_found()
ret_data = self._return_container_data(self.container_id)
LOG.info('Deleted a consumer for project: %s',
external_project_id)
return ret_data
def _get_container(self, container_id):
container = self.container_repo.get_container_by_id(
container_id, suppress_exception=True)
if not container:
controllers.containers.container_not_found()
return container
def _return_container_data(self, container_id):
container = self._get_container(container_id)
dict_fields = container.to_dict_fields()
for secret_ref in dict_fields['secret_refs']:
hrefs.convert_to_hrefs(secret_ref)
# TODO(john-wood-w) Why two calls to convert_to_hrefs()?
return hrefs.convert_to_hrefs(
hrefs.convert_to_hrefs(dict_fields)
)


@@ -1,329 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from barbican import api
from barbican.api import controllers
from barbican.api.controllers import acls
from barbican.api.controllers import consumers
from barbican.common import exception
from barbican.common import hrefs
from barbican.common import quota
from barbican.common import resources as res
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
LOG = utils.getLogger(__name__)
CONTAINER_GET = 'container:get'
def container_not_found():
"""Throw exception indicating container not found."""
pecan.abort(404, u._('Not Found. Sorry but your container is in '
'another castle.'))
def invalid_container_id():
"""Throw exception indicating container id is invalid."""
pecan.abort(404, u._('Not Found. Provided container id is invalid.'))
class ContainerController(controllers.ACLMixin):
"""Handles Container entity retrieval and deletion requests."""
def __init__(self, container):
self.container = container
self.container_id = container.id
self.consumer_repo = repo.get_container_consumer_repository()
self.container_repo = repo.get_container_repository()
self.validator = validators.ContainerValidator()
self.consumers = consumers.ContainerConsumersController(
self.container_id)
self.acl = acls.ContainerACLsController(self.container)
def get_acl_tuple(self, req, **kwargs):
d = self.get_acl_dict_for_user(req, self.container.container_acls)
d['project_id'] = self.container.project.external_id
d['creator_id'] = self.container.creator_id
return 'container', d
@pecan.expose(generic=True, template='json')
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Container retrieval'))
@controllers.enforce_rbac(CONTAINER_GET)
def on_get(self, external_project_id):
dict_fields = self.container.to_dict_fields()
for secret_ref in dict_fields['secret_refs']:
hrefs.convert_to_hrefs(secret_ref)
LOG.info('Retrieved container for project: %s',
external_project_id)
return hrefs.convert_to_hrefs(
hrefs.convert_to_hrefs(dict_fields)
)
@index.when(method='DELETE')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Container deletion'))
@controllers.enforce_rbac('container:delete')
def on_delete(self, external_project_id, **kwargs):
container_consumers = self.consumer_repo.get_by_container_id(
self.container_id,
suppress_exception=True
)
try:
self.container_repo.delete_entity_by_id(
entity_id=self.container_id,
external_project_id=external_project_id
)
except exception.NotFound:
LOG.exception('Problem deleting container')
container_not_found()
LOG.info('Deleted container for project: %s',
external_project_id)
for consumer in container_consumers[0]:
try:
self.consumer_repo.delete_entity_by_id(
consumer.id, external_project_id)
except exception.NotFound: # nosec
pass
class ContainersController(controllers.ACLMixin):
"""Handles Container creation requests."""
def __init__(self):
self.consumer_repo = repo.get_container_consumer_repository()
self.container_repo = repo.get_container_repository()
self.secret_repo = repo.get_secret_repository()
self.validator = validators.ContainerValidator()
self.quota_enforcer = quota.QuotaEnforcer('containers',
self.container_repo)
@pecan.expose()
def _lookup(self, container_id, *remainder):
if not utils.validate_id_is_uuid(container_id):
invalid_container_id()
container = self.container_repo.get_container_by_id(
entity_id=container_id, suppress_exception=True)
if not container:
container_not_found()
if len(remainder) > 0 and remainder[0] == 'secrets':
return ContainersSecretsController(container), ()
return ContainerController(container), remainder
@pecan.expose(generic=True, template='json')
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
    @controllers.handle_exceptions(u._('Container(s) retrieval'))
@controllers.enforce_rbac('containers:get')
def on_get(self, project_id, **kw):
LOG.debug('Start containers on_get for project-ID %s:', project_id)
result = self.container_repo.get_by_create_date(
project_id,
offset_arg=kw.get('offset', 0),
limit_arg=kw.get('limit', None),
name_arg=kw.get('name', None),
suppress_exception=True
)
containers, offset, limit, total = result
if not containers:
resp_ctrs_overall = {'containers': [], 'total': total}
else:
resp_ctrs = [
hrefs.convert_to_hrefs(c.to_dict_fields())
for c in containers
]
for ctr in resp_ctrs:
for secret_ref in ctr.get('secret_refs', []):
hrefs.convert_to_hrefs(secret_ref)
resp_ctrs_overall = hrefs.add_nav_hrefs(
'containers',
offset,
limit,
total,
{'containers': resp_ctrs}
)
resp_ctrs_overall.update({'total': total})
LOG.info('Retrieved container list for project: %s', project_id)
return resp_ctrs_overall
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Container creation'))
@controllers.enforce_rbac('containers:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
project = res.get_or_create_project(external_project_id)
data = api.load_body(pecan.request, validator=self.validator)
ctxt = controllers._get_barbican_context(pecan.request)
        if ctxt:  # in authenticated pipeline case, always use auth token user
data['creator_id'] = ctxt.user
self.quota_enforcer.enforce(project)
LOG.debug('Start on_post...%s', data)
new_container = models.Container(data)
new_container.project_id = project.id
# TODO(hgedikli): performance optimizations
for secret_ref in new_container.container_secrets:
secret = self.secret_repo.get(
entity_id=secret_ref.secret_id,
external_project_id=external_project_id,
suppress_exception=True)
if not secret:
# This only partially localizes the error message and
# doesn't localize secret_ref.name.
pecan.abort(
404,
u._("Secret provided for '{secret_name}' doesn't "
"exist.").format(secret_name=secret_ref.name)
)
self.container_repo.create_from(new_container)
url = hrefs.convert_container_to_href(new_container.id)
pecan.response.status = 201
pecan.response.headers['Location'] = url
LOG.info('Created a container for project: %s',
external_project_id)
return {'container_ref': url}
class ContainersSecretsController(controllers.ACLMixin):
"""Handles ContainerSecret creation and deletion requests."""
def __init__(self, container):
LOG.debug('=== Creating ContainerSecretsController ===')
self.container = container
self.container_secret_repo = repo.get_container_secret_repository()
self.secret_repo = repo.get_secret_repository()
self.validator = validators.ContainerSecretValidator()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Container Secret creation'))
@controllers.enforce_rbac('container_secret:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
"""Handles adding an existing secret to an existing container."""
if self.container.type != 'generic':
pecan.abort(400, u._("Only 'generic' containers can be modified."))
data = api.load_body(pecan.request, validator=self.validator)
name = data.get('name')
secret_ref = data.get('secret_ref')
secret_id = hrefs.get_secret_id_from_ref(secret_ref)
secret = self.secret_repo.get(
entity_id=secret_id,
external_project_id=external_project_id,
suppress_exception=True)
if not secret:
pecan.abort(404, u._("Secret provided doesn't exist."))
found_container_secrets = list(
filter(lambda cs: cs.secret_id == secret_id and cs.name == name,
self.container.container_secrets)
)
if found_container_secrets:
pecan.abort(409, u._('Conflict. A secret with that name and ID is '
'already stored in this container. The same '
'secret can exist in a container as long as '
'the name is unique.'))
LOG.debug('Start container secret on_post...%s', secret_ref)
new_container_secret = models.ContainerSecret()
new_container_secret.container_id = self.container.id
new_container_secret.name = name
new_container_secret.secret_id = secret_id
self.container_secret_repo.save(new_container_secret)
url = hrefs.convert_container_to_href(self.container.id)
LOG.debug('URI to container is %s', url)
pecan.response.status = 201
pecan.response.headers['Location'] = url
LOG.info('Created a container secret for project: %s',
external_project_id)
return {'container_ref': url}
@index.when(method='DELETE')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Container Secret deletion'))
@controllers.enforce_rbac('container_secret:delete')
def on_delete(self, external_project_id, **kwargs):
"""Handles removing a secret reference from an existing container."""
data = api.load_body(pecan.request, validator=self.validator)
name = data.get('name')
secret_ref = data.get('secret_ref')
secret_id = hrefs.get_secret_id_from_ref(secret_ref)
secret = self.secret_repo.get(
entity_id=secret_id,
external_project_id=external_project_id,
suppress_exception=True)
if not secret:
pecan.abort(404, u._("Secret '{secret_name}' with reference "
"'{secret_ref}' doesn't exist.").format(
secret_name=name, secret_ref=secret_ref))
found_container_secrets = list(
filter(lambda cs: cs.secret_id == secret_id and cs.name == name,
self.container.container_secrets)
)
if not found_container_secrets:
pecan.abort(404, u._('Secret provided is not in the container'))
for container_secret in found_container_secrets:
self.container_secret_repo.delete_entity_by_id(
container_secret.id, external_project_id)
pecan.response.status = 204
LOG.info('Deleted container secret for project: %s',
external_project_id)
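Both `on_post` and `on_delete` in `ContainersSecretsController` above filter `container_secrets` on the `(secret_id, name)` pair: the same secret may live in a container more than once, but only under distinct names. A self-contained sketch of that uniqueness check (the `namedtuple` stand-in for the model class is an assumption for illustration):

```python
# Sketch of the duplicate check in ContainersSecretsController: a
# (secret_id, name) pair must be unique within one container, so the
# same secret can be added again only under a different name.
from collections import namedtuple

ContainerSecret = namedtuple('ContainerSecret', ['secret_id', 'name'])


def is_duplicate(container_secrets, secret_id, name):
    return any(cs.secret_id == secret_id and cs.name == name
               for cs in container_secrets)


existing = [ContainerSecret('s1', 'tls-cert'),
            ContainerSecret('s1', 'backup')]
print(is_duplicate(existing, 's1', 'tls-cert'))  # same id and name: conflict
print(is_duplicate(existing, 's1', 'rotated'))   # new name: allowed
```

In the controller, the first case maps to the HTTP 409 abort and the second to a successful 201 creation.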


@@ -1,257 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import versionutils
import pecan
from barbican import api
from barbican.api import controllers
from barbican.common import hrefs
from barbican.common import quota
from barbican.common import resources as res
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
from barbican.queue import client as async_client
LOG = utils.getLogger(__name__)
_DEPRECATION_MSG = '%s has been deprecated in the Newton release. ' \
'It will be removed in the Pike release.'
def _order_not_found():
"""Throw exception indicating order not found."""
pecan.abort(404, u._('Not Found. Sorry but your order is in '
'another castle.'))
def _secret_not_in_order():
"""Throw exception that secret info is not available in the order."""
pecan.abort(400, u._("Secret metadata expected but not received."))
def _order_update_not_supported():
"""Throw exception that PUT operation is not supported for orders."""
pecan.abort(405, u._("Order update is not supported."))
def _order_update_not_supported_for_type(order_type):
"""Throw exception that update is not supported."""
pecan.abort(400, u._("Updates are not supported for order type "
"{0}.").format(order_type))
def _order_cannot_be_updated_if_not_pending(order_status):
"""Throw exception that order cannot be updated if not PENDING."""
    pecan.abort(400, u._("Only PENDING orders can be updated. Order is "
                         "in the {0} state.").format(order_status))
def order_cannot_modify_order_type():
"""Throw exception that order type cannot be modified."""
pecan.abort(400, u._("Cannot modify order type."))
class OrderController(controllers.ACLMixin):
"""Handles Order retrieval and deletion requests."""
def __init__(self, order, queue_resource=None):
self.order = order
self.order_repo = repo.get_order_repository()
self.queue = queue_resource or async_client.TaskClient()
self.type_order_validator = validators.TypeOrderValidator()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Order retrieval'))
@controllers.enforce_rbac('order:get')
def on_get(self, external_project_id):
return hrefs.convert_to_hrefs(self.order.to_dict_fields())
@index.when(method='PUT')
@controllers.handle_exceptions(u._('Order update'))
@controllers.enforce_rbac('order:put')
@controllers.enforce_content_types(['application/json'])
def on_put(self, external_project_id, **kwargs):
body = api.load_body(pecan.request,
validator=self.type_order_validator)
project = res.get_or_create_project(external_project_id)
order_type = body.get('type')
request_id = None
ctxt = controllers._get_barbican_context(pecan.request)
if ctxt and ctxt.request_id:
request_id = ctxt.request_id
if self.order.type != order_type:
order_cannot_modify_order_type()
if models.OrderType.CERTIFICATE != self.order.type:
_order_update_not_supported_for_type(order_type)
if models.States.PENDING != self.order.status:
_order_cannot_be_updated_if_not_pending(self.order.status)
updated_meta = body.get('meta')
validators.validate_ca_id(project.id, updated_meta)
# TODO(chellygel): Put 'meta' into a separate order association
# entity.
self.queue.update_order(order_id=self.order.id,
project_id=external_project_id,
updated_meta=updated_meta,
request_id=request_id)
@index.when(method='DELETE')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Order deletion'))
@controllers.enforce_rbac('order:delete')
def on_delete(self, external_project_id, **kwargs):
self.order_repo.delete_entity_by_id(
entity_id=self.order.id,
external_project_id=external_project_id)
class OrdersController(controllers.ACLMixin):
"""Handles Order requests for Secret creation."""
def __init__(self, queue_resource=None):
LOG.debug('Creating OrdersController')
self.order_repo = repo.get_order_repository()
self.queue = queue_resource or async_client.TaskClient()
self.type_order_validator = validators.TypeOrderValidator()
self.quota_enforcer = quota.QuotaEnforcer('orders', self.order_repo)
@pecan.expose()
def _lookup(self, order_id, *remainder):
# NOTE(jaosorior): It's worth noting that even though this section
# actually does a lookup in the database regardless of the RBAC policy
# check, the execution only gets here if authentication of the user was
# previously successful.
ctx = controllers._get_barbican_context(pecan.request)
order = self.order_repo.get(entity_id=order_id,
external_project_id=ctx.project,
suppress_exception=True)
if not order:
_order_not_found()
return OrderController(order, self.order_repo), remainder
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Order(s) retrieval'))
@controllers.enforce_rbac('orders:get')
def on_get(self, external_project_id, **kw):
LOG.debug('Start orders on_get '
'for project-ID %s:', external_project_id)
result = self.order_repo.get_by_create_date(
external_project_id, offset_arg=kw.get('offset', 0),
limit_arg=kw.get('limit', None), meta_arg=kw.get('meta', None),
suppress_exception=True)
orders, offset, limit, total = result
if not orders:
orders_resp_overall = {'orders': [],
'total': total}
else:
orders_resp = [
hrefs.convert_to_hrefs(o.to_dict_fields())
for o in orders
]
orders_resp_overall = hrefs.add_nav_hrefs('orders',
offset, limit, total,
{'orders': orders_resp})
orders_resp_overall.update({'total': total})
return orders_resp_overall
@index.when(method='PUT', template='json')
@controllers.handle_exceptions(u._('Order update'))
@controllers.enforce_rbac('orders:put')
def on_put(self, external_project_id, **kwargs):
_order_update_not_supported()
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Order creation'))
@controllers.enforce_rbac('orders:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
project = res.get_or_create_project(external_project_id)
body = api.load_body(pecan.request,
validator=self.type_order_validator)
order_type = body.get('type')
order_meta = body.get('meta')
request_type = order_meta.get('request_type')
LOG.debug('Processing order type %(order_type)s,'
' request type %(request_type)s' %
{'order_type': order_type,
'request_type': request_type})
if order_type == models.OrderType.CERTIFICATE:
msg = _DEPRECATION_MSG % "Certificate Order Resource"
versionutils.report_deprecated_feature(LOG, msg)
validators.validate_ca_id(project.id, body.get('meta'))
if request_type == 'stored-key':
container_ref = order_meta.get('container_ref')
validators.validate_stored_key_rsa_container(
external_project_id,
container_ref, pecan.request)
self.quota_enforcer.enforce(project)
new_order = models.Order()
new_order.meta = body.get('meta')
new_order.type = order_type
new_order.project_id = project.id
request_id = None
ctxt = controllers._get_barbican_context(pecan.request)
if ctxt:
new_order.creator_id = ctxt.user
request_id = ctxt.request_id
self.order_repo.create_from(new_order)
# Grab our id before commit due to obj expiration from sqlalchemy
order_id = new_order.id
# Force commit to avoid async issues with the workers
repo.commit()
self.queue.process_type_order(order_id=order_id,
project_id=external_project_id,
request_id=request_id)
url = hrefs.convert_order_to_href(order_id)
pecan.response.status = 202
pecan.response.headers['Location'] = url
return {'order_ref': url}


@@ -1,136 +0,0 @@
# Copyright (c) 2015 Cisco Systems
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from barbican import api
from barbican.api import controllers
from barbican.common import exception
from barbican.common import quota
from barbican.common import resources as res
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
LOG = utils.getLogger(__name__)
def _project_quotas_not_found():
"""Throw exception indicating project quotas not found."""
pecan.abort(404, u._('Not Found. Sorry but your project quotas are in '
'another castle.'))
class QuotasController(controllers.ACLMixin):
"""Handles quota retrieval requests."""
def __init__(self):
LOG.debug('=== Creating QuotasController ===')
self.quota_driver = quota.QuotaDriver()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Quotas'))
@controllers.enforce_rbac('quotas:get')
def on_get(self, external_project_id, **kwargs):
LOG.debug('=== QuotasController GET ===')
# make sure project exists
res.get_or_create_project(external_project_id)
resp = self.quota_driver.get_quotas(external_project_id)
return resp
class ProjectQuotasController(controllers.ACLMixin):
"""Handles project quota requests."""
def __init__(self, project_id):
LOG.debug('=== Creating ProjectQuotasController ===')
self.passed_project_id = project_id
self.validator = validators.ProjectQuotaValidator()
self.quota_driver = quota.QuotaDriver()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Project Quotas'))
@controllers.enforce_rbac('project_quotas:get')
def on_get(self, external_project_id, **kwargs):
LOG.debug('=== ProjectQuotasController GET ===')
resp = self.quota_driver.get_project_quotas(self.passed_project_id)
if resp:
return resp
else:
_project_quotas_not_found()
@index.when(method='PUT', template='json')
@controllers.handle_exceptions(u._('Project Quotas'))
@controllers.enforce_rbac('project_quotas:put')
@controllers.enforce_content_types(['application/json'])
def on_put(self, external_project_id, **kwargs):
LOG.debug('=== ProjectQuotasController PUT ===')
if not pecan.request.body:
raise exception.NoDataToProcess()
api.load_body(pecan.request,
validator=self.validator)
self.quota_driver.set_project_quotas(self.passed_project_id,
kwargs['project_quotas'])
LOG.info('Put Project Quotas')
pecan.response.status = 204
@index.when(method='DELETE', template='json')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Project Quotas'))
@controllers.enforce_rbac('project_quotas:delete')
def on_delete(self, external_project_id, **kwargs):
LOG.debug('=== ProjectQuotasController DELETE ===')
try:
self.quota_driver.delete_project_quotas(self.passed_project_id)
except exception.NotFound:
LOG.info('Delete Project Quotas - Project not found')
_project_quotas_not_found()
else:
LOG.info('Delete Project Quotas')
pecan.response.status = 204
class ProjectsQuotasController(controllers.ACLMixin):
"""Handles projects quota retrieval requests."""
def __init__(self):
LOG.debug('=== Creating ProjectsQuotaController ===')
self.quota_driver = quota.QuotaDriver()
@pecan.expose()
def _lookup(self, project_id, *remainder):
return ProjectQuotasController(project_id), remainder
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Project Quotas'))
@controllers.enforce_rbac('project_quotas:get')
def on_get(self, external_project_id, **kwargs):
resp = self.quota_driver.get_project_quotas_list(
offset_arg=kwargs.get('offset', 0),
limit_arg=kwargs.get('limit', None)
)
return resp


@@ -1,180 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import pecan
from barbican import api
from barbican.api import controllers
from barbican.common import hrefs
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import repositories as repo
LOG = utils.getLogger(__name__)
def _secret_metadata_not_found():
"""Throw exception indicating secret metadata not found."""
pecan.abort(404, u._('Not Found. Sorry but your secret metadata is in '
'another castle.'))
class SecretMetadataController(controllers.ACLMixin):
"""Handles SecretMetadata requests by a given secret id."""
def __init__(self, secret):
LOG.debug('=== Creating SecretMetadataController ===')
self.secret = secret
self.secret_project_id = self.secret.project.external_id
self.secret_repo = repo.get_secret_repository()
self.user_meta_repo = repo.get_secret_user_meta_repository()
self.metadata_validator = validators.NewSecretMetadataValidator()
self.metadatum_validator = validators.NewSecretMetadatumValidator()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Secret metadata retrieval'))
@controllers.enforce_rbac('secret_meta:get')
def on_get(self, external_project_id, **kwargs):
"""Handles retrieval of existing secret metadata requests."""
LOG.debug('Start secret metadata on_get '
'for secret-ID %s:', self.secret.id)
resp = self.user_meta_repo.get_metadata_for_secret(self.secret.id)
pecan.response.status = 200
return {"metadata": resp}
@index.when(method='PUT', template='json')
@controllers.handle_exceptions(u._('Secret metadata creation'))
@controllers.enforce_rbac('secret_meta:put')
@controllers.enforce_content_types(['application/json'])
def on_put(self, external_project_id, **kwargs):
"""Handles creation/update of secret metadata."""
data = api.load_body(pecan.request, validator=self.metadata_validator)
LOG.debug('Start secret metadata on_put...%s', data)
self.user_meta_repo.create_replace_user_metadata(self.secret.id,
data)
url = hrefs.convert_user_meta_to_href(self.secret.id)
LOG.debug('URI to secret metadata is %s', url)
pecan.response.status = 201
return {'metadata_ref': url}
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Secret metadatum creation'))
@controllers.enforce_rbac('secret_meta:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
"""Handles creation of secret metadatum."""
data = api.load_body(pecan.request, validator=self.metadatum_validator)
key = data.get('key')
value = data.get('value')
metadata = self.user_meta_repo.get_metadata_for_secret(self.secret.id)
if key in metadata:
pecan.abort(409, u._('Conflict. Key in request is already in the '
'secret metadata'))
LOG.debug('Start secret metadatum on_post...%s', metadata)
self.user_meta_repo.create_replace_user_metadatum(self.secret.id,
key, value)
url = hrefs.convert_user_meta_to_href(self.secret.id)
LOG.debug('URI to secret metadata is %s', url)
pecan.response.status = 201
return {'metadata_ref': url + "/%s {key: %s, value:%s}" % (key,
key,
value)}
class SecretMetadatumController(controllers.ACLMixin):
def __init__(self, secret):
LOG.debug('=== Creating SecretMetadatumController ===')
self.user_meta_repo = repo.get_secret_user_meta_repository()
self.secret = secret
self.metadatum_validator = validators.NewSecretMetadatumValidator()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Secret metadatum retrieval'))
@controllers.enforce_rbac('secret_meta:get')
def on_get(self, external_project_id, remainder, **kwargs):
"""Handles retrieval of existing secret metadatum."""
LOG.debug('Start secret metadatum on_get '
'for secret-ID %s:', self.secret.id)
metadata = self.user_meta_repo.get_metadata_for_secret(self.secret.id)
if remainder in metadata:
pecan.response.status = 200
pair = {'key': remainder, 'value': metadata[remainder]}
return collections.OrderedDict(sorted(pair.items()))
else:
_secret_metadata_not_found()
@index.when(method='PUT', template='json')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Secret metadatum update'))
@controllers.enforce_rbac('secret_meta:put')
@controllers.enforce_content_types(['application/json'])
def on_put(self, external_project_id, remainder, **kwargs):
"""Handles update of existing secret metadatum."""
metadata = self.user_meta_repo.get_metadata_for_secret(self.secret.id)
data = api.load_body(pecan.request, validator=self.metadatum_validator)
key = data.get('key')
value = data.get('value')
if remainder not in metadata:
_secret_metadata_not_found()
elif remainder != key:
msg = ('Key in request data does not match key in the '
'request url.')
pecan.abort(409, msg)
else:
LOG.debug('Start secret metadatum on_put...%s', metadata)
self.user_meta_repo.create_replace_user_metadatum(self.secret.id,
key, value)
pecan.response.status = 200
pair = {'key': key, 'value': value}
return collections.OrderedDict(sorted(pair.items()))
@index.when(method='DELETE', template='json')
@controllers.handle_exceptions(u._('Secret metadatum removal'))
@controllers.enforce_rbac('secret_meta:delete')
def on_delete(self, external_project_id, remainder, **kwargs):
"""Handles removal of existing secret metadatum."""
self.user_meta_repo.delete_metadatum(self.secret.id,
remainder)
msg = 'Deleted secret metadatum: %s for secret %s' % (remainder,
self.secret.id)
pecan.response.status = 204
LOG.info(msg)
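The metadatum PUT handler above enforces two preconditions before updating: the key must already exist (else 404) and the key in the body must match the key in the URL (else 409). A standalone sketch of that decision flow (the `update_metadatum` helper is illustrative, not Barbican's API):

```python
def update_metadatum(metadata, url_key, body_key, value):
    """Mirror SecretMetadatumController.on_put's checks: 404 if the key
    is absent, 409 if body and URL keys disagree, else update and 200."""
    if url_key not in metadata:
        return 404, None
    if url_key != body_key:
        return 409, None
    metadata[url_key] = value
    return 200, {'key': url_key, 'value': value}

meta = {'color': 'blue'}
update_metadatum(meta, 'size', 'size', 'xl')       # -> (404, None)
update_metadatum(meta, 'color', 'colour', 'red')   # -> (409, None)
update_metadatum(meta, 'color', 'color', 'red')    # -> (200, {...})
```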


@@ -1,456 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import timeutils
import pecan
from six.moves.urllib import parse
from barbican import api
from barbican.api import controllers
from barbican.api.controllers import acls
from barbican.api.controllers import secretmeta
from barbican.common import exception
from barbican.common import hrefs
from barbican.common import quota
from barbican.common import resources as res
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
from barbican.plugin import resources as plugin
from barbican.plugin import util as putil
LOG = utils.getLogger(__name__)
def _secret_not_found():
"""Throw exception indicating secret not found."""
pecan.abort(404, u._('Not Found. Sorry but your secret is in '
'another castle.'))
def _invalid_secret_id():
"""Throw exception indicating secret id is invalid."""
pecan.abort(404, u._('Not Found. Provided secret id is invalid.'))
def _secret_payload_not_found():
"""Throw exception indicating secret's payload is not found."""
pecan.abort(404, u._('Not Found. Sorry but your secret has no payload.'))
def _secret_already_has_data():
"""Throw exception that the secret already has data."""
pecan.abort(409, u._("Secret already has data, cannot modify it."))
def _bad_query_string_parameters():
pecan.abort(400, u._("URI provided invalid query string parameters."))
def _request_has_twsk_but_no_transport_key_id():
"""Throw exception for bad wrapping parameters.
Throw exception if transport key wrapped session key has been provided,
but the transport key id has not.
"""
pecan.abort(400, u._('Transport key wrapped session key has been '
'provided to wrap secrets for retrieval, but the '
'transport key id has not been provided.'))
class SecretController(controllers.ACLMixin):
"""Handles Secret retrieval and deletion requests."""
def __init__(self, secret):
LOG.debug('=== Creating SecretController ===')
self.secret = secret
self.transport_key_repo = repo.get_transport_key_repository()
def get_acl_tuple(self, req, **kwargs):
d = self.get_acl_dict_for_user(req, self.secret.secret_acls)
d['project_id'] = self.secret.project.external_id
d['creator_id'] = self.secret.creator_id
return 'secret', d
@pecan.expose()
def _lookup(self, sub_resource, *remainder):
if sub_resource == 'acl':
return acls.SecretACLsController(self.secret), remainder
elif sub_resource == 'metadata':
if len(remainder) == 0 or remainder == ('',):
return secretmeta.SecretMetadataController(self.secret), \
remainder
else:
request_method = pecan.request.method
allowed_methods = ['GET', 'PUT', 'DELETE']
if request_method in allowed_methods:
return secretmeta.SecretMetadatumController(self.secret), \
remainder
else:
# methods cannot be handled at controller level
pecan.abort(405)
else:
# only 'acl' and 'metadata' as sub-resource is supported
pecan.abort(405)
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Secret retrieval'))
@controllers.enforce_rbac('secret:get')
def on_get(self, external_project_id, **kwargs):
if controllers.is_json_request_accept(pecan.request):
resp = self._on_get_secret_metadata(self.secret, **kwargs)
LOG.info('Retrieved secret metadata for project: %s',
external_project_id)
return resp
else:
LOG.warning('Decrypted secret %s requested using deprecated '
'API call.', self.secret.id)
return self._on_get_secret_payload(self.secret,
external_project_id,
**kwargs)
def _on_get_secret_metadata(self, secret, **kwargs):
"""GET Metadata-only for a secret."""
pecan.override_template('json', 'application/json')
secret_fields = putil.mime_types.augment_fields_with_content_types(
secret)
transport_key_id = self._get_transport_key_id_if_needed(
kwargs.get('transport_key_needed'), secret)
if transport_key_id:
secret_fields['transport_key_id'] = transport_key_id
return hrefs.convert_to_hrefs(secret_fields)
def _get_transport_key_id_if_needed(self, transport_key_needed, secret):
if transport_key_needed and transport_key_needed.lower() == 'true':
return plugin.get_transport_key_id_for_retrieval(secret)
return None
def _on_get_secret_payload(self, secret, external_project_id, **kwargs):
"""GET actual payload containing the secret."""
# With ACL support, the user token project does not have to be same as
# project associated with secret. The lookup project_id needs to be
# derived from the secret's data considering authorization is already
# done.
external_project_id = secret.project.external_id
project = res.get_or_create_project(external_project_id)
# default to application/octet-stream if there is no Accept header
accept_header = getattr(pecan.request.accept, 'header_value',
'application/octet-stream')
pecan.override_template('', accept_header)
# check if payload exists before proceeding
if not secret.encrypted_data and not secret.secret_store_metadata:
_secret_payload_not_found()
twsk = kwargs.get('trans_wrapped_session_key', None)
transport_key = None
if twsk:
transport_key = self._get_transport_key(
kwargs.get('transport_key_id', None))
return plugin.get_secret(accept_header,
secret,
project,
twsk,
transport_key)
def _get_transport_key(self, transport_key_id):
if transport_key_id is None:
_request_has_twsk_but_no_transport_key_id()
transport_key_model = self.transport_key_repo.get(
entity_id=transport_key_id,
suppress_exception=True)
return transport_key_model.transport_key
@pecan.expose()
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Secret payload retrieval'))
@controllers.enforce_rbac('secret:decrypt')
def payload(self, external_project_id, **kwargs):
if pecan.request.method != 'GET':
pecan.abort(405)
resp = self._on_get_secret_payload(self.secret,
external_project_id,
**kwargs)
LOG.info('Retrieved secret payload for project: %s',
external_project_id)
return resp
@index.when(method='PUT')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Secret update'))
@controllers.enforce_rbac('secret:put')
@controllers.enforce_content_types(['application/octet-stream',
'text/plain'])
def on_put(self, external_project_id, **kwargs):
if (not pecan.request.content_type or
pecan.request.content_type == 'application/json'):
pecan.abort(
415,
u._("Content-Type of '{content_type}' is not supported for "
"PUT.").format(content_type=pecan.request.content_type)
)
transport_key_id = kwargs.get('transport_key_id')
payload = pecan.request.body
if not payload:
raise exception.NoDataToProcess()
if validators.secret_too_big(payload):
raise exception.LimitExceeded()
if self.secret.encrypted_data or self.secret.secret_store_metadata:
_secret_already_has_data()
project_model = res.get_or_create_project(external_project_id)
content_type = pecan.request.content_type
content_encoding = pecan.request.headers.get('Content-Encoding')
plugin.store_secret(
unencrypted_raw=payload,
content_type_raw=content_type,
content_encoding=content_encoding,
secret_model=self.secret,
project_model=project_model,
transport_key_id=transport_key_id)
LOG.info('Updated secret for project: %s', external_project_id)
@index.when(method='DELETE')
@utils.allow_all_content_types
@controllers.handle_exceptions(u._('Secret deletion'))
@controllers.enforce_rbac('secret:delete')
def on_delete(self, external_project_id, **kwargs):
plugin.delete_secret(self.secret, external_project_id)
LOG.info('Deleted secret for project: %s', external_project_id)
class SecretsController(controllers.ACLMixin):
"""Handles Secret creation requests."""
def __init__(self):
LOG.debug('Creating SecretsController')
self.validator = validators.NewSecretValidator()
self.secret_repo = repo.get_secret_repository()
self.quota_enforcer = quota.QuotaEnforcer('secrets', self.secret_repo)
def _is_valid_date_filter(self, date_filter):
filters = date_filter.split(',')
sorted_filters = dict()
try:
for filter in filters:
if filter.startswith('gt:'):
if sorted_filters.get('gt') or sorted_filters.get('gte'):
return False
sorted_filters['gt'] = timeutils.parse_isotime(filter[3:])
elif filter.startswith('gte:'):
if sorted_filters.get('gt') or sorted_filters.get(
'gte') or sorted_filters.get('eq'):
return False
sorted_filters['gte'] = timeutils.parse_isotime(filter[4:])
elif filter.startswith('lt:'):
if sorted_filters.get('lt') or sorted_filters.get('lte'):
return False
sorted_filters['lt'] = timeutils.parse_isotime(filter[3:])
elif filter.startswith('lte:'):
if sorted_filters.get('lt') or sorted_filters.get(
'lte') or sorted_filters.get('eq'):
return False
sorted_filters['lte'] = timeutils.parse_isotime(filter[4:])
elif sorted_filters.get('eq') or sorted_filters.get(
'gte') or sorted_filters.get('lte'):
return False
else:
sorted_filters['eq'] = timeutils.parse_isotime(filter)
except ValueError:
return False
return True
def _is_valid_sorting(self, sorting):
allowed_keys = ['algorithm', 'bit_length', 'created',
'expiration', 'mode', 'name', 'secret_type', 'status',
'updated']
allowed_directions = ['asc', 'desc']
sorted_keys = dict()
for sort in sorting.split(','):
if ':' in sort:
try:
key, direction = sort.split(':')
except ValueError:
return False
else:
key, direction = sort, 'asc'
if key not in allowed_keys or direction not in allowed_directions:
return False
if sorted_keys.get(key):
return False
else:
sorted_keys[key] = direction
return True
@pecan.expose()
def _lookup(self, secret_id, *remainder):
# NOTE(jaosorior): It's worth noting that even though this section
# actually does a lookup in the database regardless of the RBAC policy
# check, the execution only gets here if authentication of the user was
# previously successful.
if not utils.validate_id_is_uuid(secret_id):
_invalid_secret_id()
secret = self.secret_repo.get_secret_by_id(
entity_id=secret_id, suppress_exception=True)
if not secret:
_secret_not_found()
return SecretController(secret), remainder
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Secret(s) retrieval'))
@controllers.enforce_rbac('secrets:get')
def on_get(self, external_project_id, **kw):
def secret_fields(field):
return putil.mime_types.augment_fields_with_content_types(field)
LOG.debug('Start secrets on_get '
'for project-ID %s:', external_project_id)
name = kw.get('name', '')
if name:
name = parse.unquote_plus(name)
bits = kw.get('bits', 0)
try:
bits = int(bits)
except ValueError:
# as per Github issue 171, if bits is invalid then
# the default should be used.
bits = 0
for date_filter in 'created', 'updated', 'expiration':
if kw.get(date_filter) and not self._is_valid_date_filter(
kw.get(date_filter)):
_bad_query_string_parameters()
if kw.get('sort') and not self._is_valid_sorting(kw.get('sort')):
_bad_query_string_parameters()
ctxt = controllers._get_barbican_context(pecan.request)
user_id = None
if ctxt:
user_id = ctxt.user
result = self.secret_repo.get_secret_list(
external_project_id,
offset_arg=kw.get('offset', 0),
limit_arg=kw.get('limit'),
name=name,
alg=kw.get('alg'),
mode=kw.get('mode'),
bits=bits,
secret_type=kw.get('secret_type'),
suppress_exception=True,
acl_only=kw.get('acl_only'),
user_id=user_id,
created=kw.get('created'),
updated=kw.get('updated'),
expiration=kw.get('expiration'),
sort=kw.get('sort')
)
secrets, offset, limit, total = result
if not secrets:
secrets_resp_overall = {'secrets': [],
'total': total}
else:
secrets_resp = [
hrefs.convert_to_hrefs(secret_fields(s))
for s in secrets
]
secrets_resp_overall = hrefs.add_nav_hrefs(
'secrets', offset, limit, total,
{'secrets': secrets_resp}
)
secrets_resp_overall.update({'total': total})
LOG.info('Retrieved secret list for project: %s',
external_project_id)
return secrets_resp_overall
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Secret creation'))
@controllers.enforce_rbac('secrets:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
LOG.debug('Start on_post for project-ID %s:...',
external_project_id)
data = api.load_body(pecan.request, validator=self.validator)
project = res.get_or_create_project(external_project_id)
self.quota_enforcer.enforce(project)
transport_key_needed = data.get('transport_key_needed',
'false').lower() == 'true'
ctxt = controllers._get_barbican_context(pecan.request)
if ctxt:  # in authenticated pipeline case, always use auth token user
data['creator_id'] = ctxt.user
secret_model = models.Secret(data)
new_secret, transport_key_model = plugin.store_secret(
unencrypted_raw=data.get('payload'),
content_type_raw=data.get('payload_content_type',
'application/octet-stream'),
content_encoding=data.get('payload_content_encoding'),
secret_model=secret_model,
project_model=project,
transport_key_needed=transport_key_needed,
transport_key_id=data.get('transport_key_id'))
url = hrefs.convert_secret_to_href(new_secret.id)
LOG.debug('URI to secret is %s', url)
pecan.response.status = 201
pecan.response.headers['Location'] = url
LOG.info('Created a secret for project: %s',
external_project_id)
if transport_key_model is not None:
tkey_url = hrefs.convert_transport_key_to_href(
transport_key_model.id)
return {'secret_ref': url, 'transport_key_ref': tkey_url}
else:
return {'secret_ref': url}
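The `sort` query parameter handled by `_is_valid_sorting` above accepts comma-separated `key:direction` pairs and rejects unknown keys, unknown directions, and duplicate keys. A minimal standalone sketch of the same rules, for illustration only:

```python
def is_valid_sorting(sorting):
    """Validate a comma-separated 'key:direction' sort string, following
    the rules of SecretsController._is_valid_sorting."""
    allowed_keys = {'algorithm', 'bit_length', 'created', 'expiration',
                    'mode', 'name', 'secret_type', 'status', 'updated'}
    allowed_directions = {'asc', 'desc'}
    seen = {}
    for sort in sorting.split(','):
        if ':' in sort:
            try:
                key, direction = sort.split(':')
            except ValueError:  # more than one ':' in a single field
                return False
        else:
            key, direction = sort, 'asc'  # direction defaults to asc
        if key not in allowed_keys or direction not in allowed_directions:
            return False
        if seen.get(key):  # each key may appear at most once
            return False
        seen[key] = direction
    return True

is_valid_sorting('name:asc,created:desc')  # -> True
is_valid_sorting('name:up')                # -> False (bad direction)
is_valid_sorting('name,name:desc')         # -> False (duplicate key)
```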


@@ -1,214 +0,0 @@
# (c) Copyright 2015-2016 Hewlett Packard Enterprise Development LP
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from barbican.api import controllers
from barbican.common import hrefs
from barbican.common import resources as res
from barbican.common import utils
from barbican import i18n as u
from barbican.model import repositories as repo
from barbican.plugin.util import multiple_backends
LOG = utils.getLogger(__name__)
def _secret_store_not_found():
"""Throw exception indicating secret store not found."""
pecan.abort(404, u._('Not Found. Secret store not found.'))
def _preferred_secret_store_not_found():
"""Throw exception indicating preferred secret store not found."""
pecan.abort(404, u._('Not Found. No preferred secret store defined for '
'this project.'))
def _multiple_backends_not_enabled():
"""Throw exception indicating multiple backends support is not enabled."""
pecan.abort(404, u._('Not Found. Multiple backends support is not enabled '
'in service configuration.'))
def convert_secret_store_to_response_format(secret_store):
data = secret_store.to_dict_fields()
data['secret_store_plugin'] = data.pop('store_plugin')
data['secret_store_ref'] = hrefs.convert_secret_stores_to_href(
data['secret_store_id'])
# no need to pass store id as secret_store_ref is returned
data.pop('secret_store_id', None)
return data
class PreferredSecretStoreController(controllers.ACLMixin):
"""Handles preferred secret store set/removal requests."""
def __init__(self, secret_store):
LOG.debug('=== Creating PreferredSecretStoreController ===')
self.secret_store = secret_store
self.proj_store_repo = repo.get_project_secret_store_repository()
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='DELETE', template='json')
@controllers.handle_exceptions(u._('Removing preferred secret store'))
@controllers.enforce_rbac('secretstore_preferred:delete')
def on_delete(self, external_project_id, **kw):
LOG.debug('Start: Remove project preferred secret-store for store'
' id %s', self.secret_store.id)
project = res.get_or_create_project(external_project_id)
project_store = self.proj_store_repo.get_secret_store_for_project(
project.id, None, suppress_exception=True)
if project_store is None:
_preferred_secret_store_not_found()
self.proj_store_repo.delete_entity_by_id(
entity_id=project_store.id,
external_project_id=external_project_id)
pecan.response.status = 204
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Setting preferred secret store'))
@controllers.enforce_rbac('secretstore_preferred:post')
def on_post(self, external_project_id, **kwargs):
LOG.debug('Start: Set project preferred secret-store for store '
'id %s', self.secret_store.id)
project = res.get_or_create_project(external_project_id)
self.proj_store_repo.create_or_update_for_project(project.id,
self.secret_store.id)
pecan.response.status = 204
class SecretStoreController(controllers.ACLMixin):
"""Handles secret store retrieval requests."""
def __init__(self, secret_store):
LOG.debug('=== Creating SecretStoreController ===')
self.secret_store = secret_store
@pecan.expose()
def _lookup(self, action, *remainder):
if (action == 'preferred'):
return PreferredSecretStoreController(self.secret_store), remainder
else:
pecan.abort(405)
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Secret store retrieval'))
@controllers.enforce_rbac('secretstore:get')
def on_get(self, external_project_id):
LOG.debug("== Getting secret store for %s", self.secret_store.id)
return convert_secret_store_to_response_format(self.secret_store)
class SecretStoresController(controllers.ACLMixin):
"""Handles secret-stores list requests."""
def __init__(self):
LOG.debug('Creating SecretStoresController')
self.secret_stores_repo = repo.get_secret_stores_repository()
self.proj_store_repo = repo.get_project_secret_store_repository()
def __getattr__(self, name):
route_table = {
'global-default': self.get_global_default,
'preferred': self.get_preferred,
}
if name in route_table:
return route_table[name]
raise AttributeError
@pecan.expose()
def _lookup(self, secret_store_id, *remainder):
if not utils.is_multiple_backends_enabled():
_multiple_backends_not_enabled()
secret_store = self.secret_stores_repo.get(entity_id=secret_store_id,
suppress_exception=True)
if not secret_store:
_secret_store_not_found()
return SecretStoreController(secret_store), remainder
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('List available secret stores'))
@controllers.enforce_rbac('secretstores:get')
def on_get(self, external_project_id, **kw):
LOG.debug('Start SecretStoresController on_get: listing secret '
'stores')
if not utils.is_multiple_backends_enabled():
_multiple_backends_not_enabled()
res.get_or_create_project(external_project_id)
secret_stores = self.secret_stores_repo.get_all()
resp_list = []
for store in secret_stores:
item = convert_secret_store_to_response_format(store)
resp_list.append(item)
resp = {'secret_stores': resp_list}
return resp
@pecan.expose(generic=True, template='json')
@controllers.handle_exceptions(u._('Retrieve global default secret store'))
@controllers.enforce_rbac('secretstores:get_global_default')
def get_global_default(self, external_project_id, **kw):
LOG.debug('Start secret-stores get global default secret store')
if not utils.is_multiple_backends_enabled():
_multiple_backends_not_enabled()
res.get_or_create_project(external_project_id)
store = multiple_backends.get_global_default_secret_store()
return convert_secret_store_to_response_format(store)
@pecan.expose(generic=True, template='json')
@controllers.handle_exceptions(u._('Retrieve project preferred store'))
@controllers.enforce_rbac('secretstores:get_preferred')
def get_preferred(self, external_project_id, **kw):
LOG.debug('Start secret-stores get preferred secret store')
if not utils.is_multiple_backends_enabled():
_multiple_backends_not_enabled()
project = res.get_or_create_project(external_project_id)
project_store = self.proj_store_repo.get_secret_store_for_project(
project.id, None, suppress_exception=True)
if project_store is None:
_preferred_secret_store_not_found()
return convert_secret_store_to_response_format(
project_store.secret_store)
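The `convert_secret_store_to_response_format` helper above can be sketched standalone; the href path below is an illustrative assumption, not the exact output of `barbican.common.hrefs`:

```python
# Standalone sketch of the response conversion above. The href layout is
# assumed for illustration; Barbican builds it via hrefs.convert_secret_stores_to_href.
def convert_secret_store_to_response_format(fields):
    data = dict(fields)
    # Rename the plugin field and replace the raw store id with a reference URL.
    data['secret_store_plugin'] = data.pop('store_plugin')
    data['secret_store_ref'] = '/v1/secret-stores/%s' % data['secret_store_id']
    data.pop('secret_store_id', None)
    return data
```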


@@ -1,161 +0,0 @@
# Copyright (c) 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from six.moves.urllib import parse
from barbican import api
from barbican.api import controllers
from barbican.common import exception
from barbican.common import hrefs
from barbican.common import utils
from barbican.common import validators
from barbican import i18n as u
from barbican.model import models
from barbican.model import repositories as repo
LOG = utils.getLogger(__name__)
def _transport_key_not_found():
"""Throw exception indicating transport key not found."""
pecan.abort(404, u._('Not Found. Transport Key not found.'))
def _invalid_transport_key_id():
"""Throw exception indicating transport key id is invalid."""
pecan.abort(404, u._('Not Found. Provided transport key id is invalid.'))
class TransportKeyController(controllers.ACLMixin):
"""Handles transport key retrieval requests."""
def __init__(self, transport_key_id, transport_key_repo=None):
LOG.debug('=== Creating TransportKeyController ===')
self.transport_key_id = transport_key_id
self.repo = transport_key_repo or repo.TransportKeyRepo()
@pecan.expose(generic=True)
def index(self, external_project_id, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET')
@controllers.handle_exceptions(u._('Transport Key retrieval'))
@controllers.enforce_rbac('transport_key:get')
def on_get(self, external_project_id):
LOG.debug("== Getting transport key for %s", external_project_id)
transport_key = self.repo.get(entity_id=self.transport_key_id)
if not transport_key:
_transport_key_not_found()
pecan.override_template('json', 'application/json')
return transport_key
@index.when(method='DELETE')
@controllers.handle_exceptions(u._('Transport Key deletion'))
@controllers.enforce_rbac('transport_key:delete')
def on_delete(self, external_project_id, **kwargs):
LOG.debug("== Deleting transport key ===")
try:
self.repo.delete_entity_by_id(
entity_id=self.transport_key_id,
external_project_id=external_project_id)
# TODO(alee) response should be 204 on success
# pecan.response.status = 204
except exception.NotFound:
LOG.exception('Problem deleting transport_key')
_transport_key_not_found()
class TransportKeysController(controllers.ACLMixin):
"""Handles transport key list requests."""
def __init__(self, transport_key_repo=None):
LOG.debug('Creating TransportKeysController')
self.repo = transport_key_repo or repo.TransportKeyRepo()
self.validator = validators.NewTransportKeyValidator()
@pecan.expose()
def _lookup(self, transport_key_id, *remainder):
if not utils.validate_id_is_uuid(transport_key_id):
_invalid_transport_key_id()
return TransportKeyController(transport_key_id, self.repo), remainder
@pecan.expose(generic=True)
def index(self, external_project_id, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@controllers.handle_exceptions(u._('Transport Key(s) retrieval'))
@controllers.enforce_rbac('transport_keys:get')
def on_get(self, external_project_id, **kw):
LOG.debug('Start transport_keys on_get')
plugin_name = kw.get('plugin_name', None)
if plugin_name is not None:
plugin_name = parse.unquote_plus(plugin_name)
result = self.repo.get_by_create_date(
plugin_name=plugin_name,
offset_arg=kw.get('offset', 0),
limit_arg=kw.get('limit', None),
suppress_exception=True
)
transport_keys, offset, limit, total = result
if not transport_keys:
transport_keys_resp_overall = {'transport_keys': [],
'total': total}
else:
transport_keys_resp = [
hrefs.convert_transport_key_to_href(s.id)
for s in transport_keys
]
transport_keys_resp_overall = hrefs.add_nav_hrefs(
'transport_keys',
offset,
limit,
total,
{'transport_keys': transport_keys_resp}
)
transport_keys_resp_overall.update({'total': total})
return transport_keys_resp_overall
@index.when(method='POST', template='json')
@controllers.handle_exceptions(u._('Transport Key Creation'))
@controllers.enforce_rbac('transport_keys:post')
@controllers.enforce_content_types(['application/json'])
def on_post(self, external_project_id, **kwargs):
LOG.debug('Start transport_keys on_post')
# TODO(alee) POST should determine the plugin name and call the
# relevant get_transport_key() call. We will implement this once
# we figure out how the plugins will be enumerated.
data = api.load_body(pecan.request, validator=self.validator)
new_key = models.TransportKey(data.get('plugin_name'),
data.get('transport_key'))
self.repo.create_from(new_key)
url = hrefs.convert_transport_key_to_href(new_key.id)
LOG.debug('URI to transport key is %s', url)
pecan.response.status = 201
pecan.response.headers['Location'] = url
return {'transport_key_ref': url}
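The paginated GET above relies on `hrefs.add_nav_hrefs`; a minimal stand-in (the URL layout is assumed for illustration) behaves like this:

```python
def add_nav_hrefs(resource, offset, limit, total, data):
    """Minimal stand-in for hrefs.add_nav_hrefs: attach previous/next
    page references when more results exist. URL layout is assumed."""
    if offset > 0:
        data['previous_ref'] = '/v1/%s?limit=%d&offset=%d' % (
            resource, limit, max(0, offset - limit))
    if offset + limit < total:
        data['next_ref'] = '/v1/%s?limit=%d&offset=%d' % (
            resource, limit, offset + limit)
    return data
```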


@@ -1,167 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pecan
from six.moves.urllib import parse
from barbican.api import controllers
from barbican.api.controllers import cas
from barbican.api.controllers import containers
from barbican.api.controllers import orders
from barbican.api.controllers import quotas
from barbican.api.controllers import secrets
from barbican.api.controllers import secretstores
from barbican.api.controllers import transportkeys
from barbican.common import utils
from barbican import i18n as u
from barbican import version
LOG = utils.getLogger(__name__)
MIME_TYPE_JSON = 'application/json'
MIME_TYPE_JSON_HOME = 'application/json-home'
MEDIA_TYPE_JSON = 'application/vnd.openstack.key-manager-%s+json'
def _version_not_found():
"""Throw exception indicating version not found."""
pecan.abort(404, u._("The version you requested wasn't found"))
def _get_versioned_url(version):
if version[-1] != '/':
version += '/'
# If host_href is not set in barbican conf, then derive it from request url
host_part = utils.get_base_url_from_request()
return parse.urljoin(host_part, version)
class BaseVersionController(object):
"""Base class for the version-specific controllers"""
@classmethod
def get_version_info(cls, request):
return {
'id': cls.version_id,
'status': 'stable',
'updated': cls.last_updated,
'links': [
{
'rel': 'self',
'href': _get_versioned_url(cls.version_string),
}, {
'rel': 'describedby',
'type': 'text/html',
'href': 'https://docs.openstack.org/'
}
],
'media-types': [
{
'base': MIME_TYPE_JSON,
'type': MEDIA_TYPE_JSON % cls.version_string
}
]
}
class V1Controller(BaseVersionController):
"""Root controller for the v1 API"""
version_string = 'v1'
# NOTE(jaosorior): We might start using decimals in the future, meanwhile
# this is the same as the version string.
version_id = 'v1'
last_updated = '2015-04-28T00:00:00Z'
def __init__(self):
LOG.debug('=== Creating V1Controller ===')
self.secrets = secrets.SecretsController()
self.orders = orders.OrdersController()
self.containers = containers.ContainersController()
self.transport_keys = transportkeys.TransportKeysController()
self.cas = cas.CertificateAuthoritiesController()
self.quotas = quotas.QuotasController()
setattr(self, 'project-quotas', quotas.ProjectsQuotasController())
setattr(self, 'secret-stores', secretstores.SecretStoresController())
@pecan.expose(generic=True)
def index(self):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@utils.allow_certain_content_types(MIME_TYPE_JSON, MIME_TYPE_JSON_HOME)
@controllers.handle_exceptions(u._('Version retrieval'))
def on_get(self):
pecan.core.override_template('json')
return {'version': self.get_version_info(pecan.request)}
AVAILABLE_VERSIONS = {
V1Controller.version_string: V1Controller,
}
DEFAULT_VERSION = V1Controller.version_string
class VersionsController(object):
def __init__(self):
LOG.debug('=== Creating VersionsController ===')
@pecan.expose(generic=True)
def index(self, **kwargs):
pecan.abort(405) # HTTP 405 Method Not Allowed as default
@index.when(method='GET', template='json')
@utils.allow_certain_content_types(MIME_TYPE_JSON, MIME_TYPE_JSON_HOME)
def on_get(self, **kwargs):
"""The list of versions is dependent on the context."""
self._redirect_to_default_json_home_if_needed(pecan.request)
if 'build' in kwargs:
return {'build': version.__version__}
versions_info = [version_class.get_version_info(pecan.request) for
version_class in AVAILABLE_VERSIONS.values()]
version_output = {
'versions': {
'values': versions_info
}
}
# Since we are returning all the versions available, the proper status
# code is Multiple Choices (300)
pecan.response.status = 300
return version_output
def _redirect_to_default_json_home_if_needed(self, request):
if self._mime_best_match(request.accept) == MIME_TYPE_JSON_HOME:
url = _get_versioned_url(DEFAULT_VERSION)
LOG.debug("Redirecting request to %s", url)
# NOTE(jaosorior): This issues an "external" redirect because of
# two reasons:
# * This module doesn't require authorization, and accessing
# specific version info needs that.
# * The resource is a separate app_factory and won't be found
# internally
pecan.redirect(url, request=request)
def _mime_best_match(self, accept):
if not accept:
return MIME_TYPE_JSON
SUPPORTED_TYPES = [MIME_TYPE_JSON, MIME_TYPE_JSON_HOME]
return accept.best_match(SUPPORTED_TYPES)
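The `_mime_best_match` fallback can be approximated without webob's Accept parsing; this simplified sketch ignores q-values and just takes the first supported type named in the header:

```python
MIME_TYPE_JSON = 'application/json'
MIME_TYPE_JSON_HOME = 'application/json-home'

def mime_best_match(accept_header,
                    supported=(MIME_TYPE_JSON, MIME_TYPE_JSON_HOME)):
    """Simplified stand-in for webob's accept.best_match: the first
    supported type in the header wins; no header (or no match) means JSON."""
    if not accept_header:
        return MIME_TYPE_JSON
    # Drop parameters such as ';q=0.9' and normalize whitespace.
    offered = [part.split(';')[0].strip()
               for part in accept_header.split(',')]
    for mime in offered:
        if mime in supported:
            return mime
    return MIME_TYPE_JSON
```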


@@ -1,56 +0,0 @@
# Copyright (c) 2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pecan
import webob
from oslo_serialization import jsonutils
try:
import newrelic.agent
newrelic_loaded = True
except ImportError:
newrelic_loaded = False
from barbican.model import repositories
class JSONErrorHook(pecan.hooks.PecanHook):
def on_error(self, state, exc):
if isinstance(exc, webob.exc.HTTPError):
exc.body = jsonutils.dump_as_bytes({
'code': exc.status_int,
'title': exc.title,
'description': exc.detail
})
state.response.content_type = "application/json"
return exc.body
class BarbicanTransactionHook(pecan.hooks.TransactionHook):
"""Custom hook for Barbican transactions."""
def __init__(self):
super(BarbicanTransactionHook, self).__init__(
start=repositories.start,
start_ro=repositories.start_read_only,
commit=repositories.commit,
rollback=repositories.rollback,
clear=repositories.clear
)
class NewRelicHook(pecan.hooks.PecanHook):
def on_error(self, state, exc):
if newrelic_loaded:
newrelic.agent.record_exception()
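The error body built by `JSONErrorHook` can be reproduced with the stdlib `json` module standing in for `oslo_serialization.jsonutils`:

```python
import json

def json_error_body(status_int, title, detail):
    """Serialize an HTTP error the same way JSONErrorHook does above,
    using stdlib json in place of oslo_serialization.jsonutils."""
    return json.dumps({
        'code': status_int,
        'title': title,
        'description': detail,
    }).encode('utf-8')
```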


@@ -1,103 +0,0 @@
# Copyright (c) 2013-2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Barbican middleware modules.
"""
import sys
import webob.dec
from barbican.common import utils
LOG = utils.getLogger(__name__)
class Middleware(object):
"""Base WSGI middleware wrapper.

These classes require an application to be initialized that will be
called next. By default the middleware will simply call its wrapped
app, or you can override __call__ to customize its behavior.
"""
def __init__(self, application):
self.application = application
@classmethod
def factory(cls, global_conf, **local_conf):
def filter(app):
return cls(app)
return filter
def process_request(self, req):
"""Called on each request.
If this returns None, the next application down the stack will be
executed. If it returns a response then that response will be returned
and execution will stop here.
"""
return None
def process_response(self, response):
"""Do whatever you'd like to the response."""
return response
@webob.dec.wsgify
def __call__(self, req):
response = self.process_request(req)
if response:
return response
response = req.get_response(self.application)
response.request = req
return self.process_response(response)
# Brought over from an OpenStack project
class Debug(Middleware):
"""Debug helper class
This class can be inserted into any WSGI application chain
to get information about the request and response.
"""
@webob.dec.wsgify
def __call__(self, req):
LOG.debug(("*" * 40) + " REQUEST ENVIRON")
for key, value in req.environ.items():
LOG.debug('%s=%s', key, value)
LOG.debug(' ')
resp = req.get_response(self.application)
LOG.debug(("*" * 40) + " RESPONSE HEADERS")
for (key, value) in resp.headers.items():
LOG.debug('%s=%s', key, value)
LOG.debug(' ')
resp.app_iter = self.print_generator(resp.app_iter)
return resp
@staticmethod
def print_generator(app_iter):
"""Iterator that prints the contents of a wrapper string iterator."""
LOG.debug(("*" * 40) + " BODY")
for part in app_iter:
sys.stdout.write(part)
sys.stdout.flush()
yield part
LOG.debug(' ')
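The wrap-and-delegate pattern of the `Middleware` base class can be shown with plain WSGI callables, without webob; the header name below is illustrative, not something Barbican sets:

```python
# Minimal sketch of the wrap-and-delegate middleware pattern using raw
# WSGI. The X-Debug header is a made-up example, not a Barbican header.
class HeaderInjector:
    """Wraps a WSGI app and adds a header to every response."""

    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        def custom_start_response(status, headers, exc_info=None):
            # Process the response on its way back out, as
            # Middleware.process_response does.
            headers = list(headers) + [("X-Debug", "1")]
            return start_response(status, headers, exc_info)
        return self.application(environ, custom_start_response)


def app(environ, start_response):
    """A trivial downstream WSGI application."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


wrapped = HeaderInjector(app)
```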


@@ -1,151 +0,0 @@
# Copyright 2011-2012 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import webob.exc
from barbican.api import middleware as mw
from barbican.common import config
from barbican.common import utils
import barbican.context
from barbican import i18n as u
LOG = utils.getLogger(__name__)
CONF = config.CONF
class BaseContextMiddleware(mw.Middleware):
def process_request(self, req):
request_id = req.headers.get('x-openstack-request-id')
if not request_id:
request_id = 'req-' + utils.generate_uuid()
setattr(req, 'request_id', request_id)
def process_response(self, resp):
resp.headers['x-openstack-request-id'] = resp.request.request_id
LOG.info('Processed request: %(status)s - %(method)s %(url)s',
{"status": resp.status,
"method": resp.request.method,
"url": resp.request.url})
return resp
class ContextMiddleware(BaseContextMiddleware):
def __init__(self, app):
super(ContextMiddleware, self).__init__(app)
def process_request(self, req):
"""Convert authentication information into a request context
Generate a barbican.context.RequestContext object from the available
authentication headers and store on the 'context' attribute
of the req object.
:param req: wsgi request object that will be given the context object
:raises webob.exc.HTTPUnauthorized: when value of the X-Identity-Status
header is not 'Confirmed' and
anonymous access is disallowed
"""
super(ContextMiddleware, self).process_request(req)
if req.headers.get('X-Identity-Status') == 'Confirmed':
req.context = self._get_authenticated_context(req)
elif CONF.allow_anonymous_access:
req.context = self._get_anonymous_context()
LOG.debug("==== Inserted barbican unauth "
"request context: %s ====", req.context.to_dict())
else:
raise webob.exc.HTTPUnauthorized()
# Ensure that downstream mw.Middleware/app components can see this context.
req.environ['barbican.context'] = req.context
def _get_anonymous_context(self):
kwargs = {
'user': None,
'tenant': None,
'is_admin': False,
'read_only': True,
}
return barbican.context.RequestContext(**kwargs)
def _get_authenticated_context(self, req):
# NOTE(bcwaldon): X-Roles is a csv string, but we need to parse
# it into a list to be useful
roles_header = req.headers.get('X-Roles', '')
roles = [r.strip().lower() for r in roles_header.split(',')]
# NOTE(bcwaldon): This header is deprecated in favor of X-Auth-Token
# NOTE(mkbhanda): keeping this just-in-case for swift
deprecated_token = req.headers.get('X-Storage-Token')
kwargs = {
'auth_token': req.headers.get('X-Auth-Token', deprecated_token),
'user': req.headers.get('X-User-Id'),
'project': req.headers.get('X-Project-Id'),
'roles': roles,
'is_admin': CONF.admin_role.strip().lower() in roles,
'request_id': req.request_id
}
if req.headers.get('X-Domain-Id'):
kwargs['domain'] = req.headers['X-Domain-Id']
if req.headers.get('X-User-Domain-Id'):
kwargs['user_domain'] = req.headers['X-User-Domain-Id']
if req.headers.get('X-Project-Domain-Id'):
kwargs['project_domain'] = req.headers['X-Project-Domain-Id']
return barbican.context.RequestContext(**kwargs)
class UnauthenticatedContextMiddleware(BaseContextMiddleware):
def _get_project_id_from_header(self, req):
project_id = req.headers.get('X-Project-Id')
if not project_id:
accept_header = req.headers.get('Accept')
if not accept_header:
req.headers['Accept'] = 'text/plain'
raise webob.exc.HTTPBadRequest(detail=u._('Missing X-Project-Id'))
return project_id
def process_request(self, req):
"""Create a context without an authorized user."""
super(UnauthenticatedContextMiddleware, self).process_request(req)
project_id = self._get_project_id_from_header(req)
config_admin_role = CONF.admin_role.strip().lower()
roles_header = req.headers.get('X-Roles', '')
roles = [r.strip().lower() for r in roles_header.split(',') if r]
# If a role wasn't specified we default to admin
if not roles:
roles = [config_admin_role]
kwargs = {
'user': req.headers.get('X-User-Id'),
'domain': req.headers.get('X-Domain-Id'),
'user_domain': req.headers.get('X-User-Domain-Id'),
'project_domain': req.headers.get('X-Project-Domain-Id'),
'project': project_id,
'roles': roles,
'is_admin': config_admin_role in roles,
'request_id': req.request_id
}
context = barbican.context.RequestContext(**kwargs)
req.environ['barbican.context'] = context
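The role handling above (CSV header, lowercase normalization, admin default when empty) distills to a few lines; `'admin'` here stands in for `CONF.admin_role`:

```python
def parse_roles(roles_header, admin_role='admin'):
    """Parse an X-Roles CSV header the way the middleware above does:
    strip, lowercase, and fall back to the admin role when empty.
    admin_role stands in for CONF.admin_role."""
    roles = [r.strip().lower() for r in roles_header.split(',') if r]
    if not roles:
        roles = [admin_role]
    return roles, admin_role in roles
```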


@@ -1,35 +0,0 @@
# Copyright 2011 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
A filter middleware that just outputs to logs, for instructive/sample
purposes only.
"""
from barbican.api import middleware
from barbican.common import utils
LOG = utils.getLogger(__name__)
class SimpleFilter(middleware.Middleware):
def __init__(self, app):
super(SimpleFilter, self).__init__(app)
def process_request(self, req):
"""Just announce we have been called."""
LOG.debug("Calling SimpleFilter")
return None


@@ -1,328 +0,0 @@
#!/usr/bin/env python
# Copyright 2010-2015 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
CLI interface for barbican management
"""
from __future__ import print_function
import argparse
import sys
from oslo_config import cfg
from oslo_log import log as logging
from barbican.cmd import pkcs11_kek_rewrap as pkcs11_rewrap
from barbican.common import config
from barbican.model import clean
from barbican.model.migration import commands
from barbican.plugin.crypto import pkcs11
import barbican.version
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
# Decorators for actions
def args(*args, **kwargs):
def _decorator(func):
func.__dict__.setdefault('args', []).insert(0, (args, kwargs))
return func
return _decorator
class DbCommands(object):
"""Class for managing barbican database"""
description = "Subcommands for managing barbican database"
clean_description = "Clean up soft deletions in the database"
@args('--db-url', '-d', metavar='<db-url>', dest='dburl',
help='barbican database URL')
@args('--min-days', '-m', metavar='<min-days>', dest='min_days', type=int,
default=90, help='minimum number of days to keep soft deletions. '
'default is %(default)s days.')
@args('--verbose', '-V', action='store_true', dest='verbose',
default=False, help='Show verbose information about the clean up.')
@args('--log-file', '-L', metavar='<log-file>', type=str, default=None,
dest='log_file', help='Set log file location. '
'Default value for log_file can be found in barbican.conf')
@args('--clean-unassociated-projects', '-p', action='store_true',
dest='do_clean_unassociated_projects', default=False,
help='Remove projects that have no '
'associated resources.')
@args('--soft-delete-expired-secrets', '-e', action='store_true',
dest='do_soft_delete_expired_secrets', default=False,
help='Soft delete secrets that are expired.')
def clean(self, dburl=None, min_days=None, verbose=None, log_file=None,
do_clean_unassociated_projects=None,
do_soft_delete_expired_secrets=None):
"""Clean soft deletions in the database"""
if dburl is None:
dburl = CONF.sql_connection
if log_file is None:
log_file = CONF.log_file
clean.clean_command(
sql_url=dburl,
min_num_days=min_days,
do_clean_unassociated_projects=do_clean_unassociated_projects,
do_soft_delete_expired_secrets=do_soft_delete_expired_secrets,
verbose=verbose,
log_file=log_file)
revision_description = "Create a new database version file"
@args('--db-url', '-d', metavar='<db-url>', dest='dburl',
help='barbican database URL')
@args('--message', '-m', metavar='<message>', default='DB change',
help='the message for the DB change')
@args('--autogenerate', action="store_true", dest='autogen',
default=False, help='autogenerate from models')
def revision(self, dburl=None, message=None, autogen=None):
"""Process the 'revision' Alembic command."""
if dburl is None:
commands.generate(autogenerate=autogen, message=str(message),
sql_url=CONF.sql_connection)
else:
commands.generate(autogenerate=autogen, message=str(message),
sql_url=str(dburl))
upgrade_description = "Upgrade to a later database version"
@args('--db-url', '-d', metavar='<db-url>', dest='dburl',
help='barbican database URL')
@args('--version', '-v', metavar='<version>', default='head',
help='the version to upgrade to, or else '
'the latest/head if not specified.')
def upgrade(self, dburl=None, version=None):
"""Process the 'upgrade' Alembic command."""
if dburl is None:
commands.upgrade(to_version=str(version),
sql_url=CONF.sql_connection)
else:
commands.upgrade(to_version=str(version), sql_url=str(dburl))
history_description = "Show database changeset history"
@args('--db-url', '-d', metavar='<db-url>', dest='dburl',
help='barbican database URL')
@args('--verbose', '-V', action='store_true', dest='verbose',
default=False, help='Show full information about the revisions.')
def history(self, dburl=None, verbose=None):
if dburl is None:
commands.history(verbose, sql_url=CONF.sql_connection)
else:
commands.history(verbose, sql_url=str(dburl))
current_description = "Show current revision of database"
@args('--db-url', '-d', metavar='<db-url>', dest='dburl',
help='barbican database URL')
@args('--verbose', '-V', action='store_true', dest='verbose',
default=False, help='Show full information about the revisions.')
def current(self, dburl=None, verbose=None):
if dburl is None:
commands.current(verbose, sql_url=CONF.sql_connection)
else:
commands.current(verbose, sql_url=str(dburl))
class HSMCommands(object):
"""Class for managing HSM/pkcs11 plugin"""
description = "Subcommands for managing HSM/PKCS11"
gen_mkek_description = "Generates a new MKEK"
@args('--library-path', metavar='<library-path>', dest='libpath',
default='/usr/lib/libCryptoki2_64.so',
help='Path to vendor PKCS11 library')
@args('--slot-id', metavar='<slot-id>', dest='slotid', default=1,
help='HSM Slot id (Should correspond to a configured PKCS11 slot, \
default is 1)')
@args('--passphrase', metavar='<passphrase>', default=None, required=True,
help='Password to login to PKCS11 session')
@args('--label', '-L', metavar='<label>', default='primarymkek',
help='The label of the Master Key Encrypt Key')
@args('--length', '-l', metavar='<length>', default=32,
help='The length of the Master Key Encrypt Key (default is 32)')
def gen_mkek(self, passphrase, libpath=None, slotid=None, label=None,
length=None):
self._create_pkcs11_session(str(passphrase), str(libpath), int(slotid))
self._verify_label_does_not_exist(str(label), self.session)
self.pkcs11.generate_key(int(length), self.session, str(label),
encrypt=True, wrap=True, master_key=True)
self.pkcs11.return_session(self.session)
print("MKEK successfully generated!")
gen_hmac_description = "Generates a new HMAC key"
@args('--library-path', metavar='<library-path>', dest='libpath',
default='/usr/lib/libCryptoki2_64.so',
help='Path to vendor PKCS11 library')
@args('--slot-id', metavar='<slot-id>', dest='slotid', default=1,
help='HSM Slot id (Should correspond to a configured PKCS11 slot, \
default is 1)')
@args('--passphrase', metavar='<passphrase>', default=None, required=True,
help='Password to login to PKCS11 session')
@args('--label', '-L', metavar='<label>', default='primaryhmac',
help='The label of the Master HMAC Key')
@args('--length', '-l', metavar='<length>', default=32,
help='The length of the Master HMAC Key (default is 32)')
def gen_hmac(self, passphrase, libpath=None, slotid=None, label=None,
length=None):
self._create_pkcs11_session(str(passphrase), str(libpath), int(slotid))
self._verify_label_does_not_exist(str(label), self.session)
self.pkcs11.generate_key(int(length), self.session, str(label),
sign=True, master_key=True)
self.pkcs11.return_session(self.session)
print("HMAC successfully generated!")
rewrap_pkek_description = "Re-wrap project MKEKs"
@args('--dry-run', action="store_true", dest='dryrun', default=False,
help='Displays changes that will be made (Non-destructive)')
def rewrap_pkek(self, dryrun=None):
rewrapper = pkcs11_rewrap.KekRewrap(pkcs11_rewrap.CONF)
rewrapper.execute(dryrun)
rewrapper.pkcs11.return_session(rewrapper.hsm_session)
def _create_pkcs11_session(self, passphrase, libpath, slotid):
self.pkcs11 = pkcs11.PKCS11(
library_path=libpath, login_passphrase=passphrase,
rw_session=True, slot_id=slotid
)
self.session = self.pkcs11.get_session()
def _verify_label_does_not_exist(self, label, session):
key_handle = self.pkcs11.get_key_handle(label, session)
if key_handle:
print(
"The label {label} already exists! "
"Please try again.".format(label=label)
)
sys.exit(1)
CATEGORIES = {
'db': DbCommands,
'hsm': HSMCommands,
}
# Modifying similar code from nova/cmd/manage.py
def methods_of(obj):
"""Get all callable methods of an object that don't start with underscore
returns a list of tuples of the form (method_name, method)
"""
result = []
for fn in dir(obj):
if callable(getattr(obj, fn)) and not fn.startswith('_'):
result.append((fn, getattr(obj, fn),
getattr(obj, fn + '_description', None)))
return result
# Shamelessly taking same code from nova/cmd/manage.py
def add_command_parsers(subparsers):
"""Add subcommand parser to oslo_config object"""
for category in CATEGORIES:
command_object = CATEGORIES[category]()
desc = getattr(command_object, 'description', None)
parser = subparsers.add_parser(category, description=desc)
parser.set_defaults(command_object=command_object)
category_subparsers = parser.add_subparsers(dest='action')
for (action, action_fn, action_desc) in methods_of(command_object):
parser = category_subparsers.add_parser(action,
description=action_desc)
action_kwargs = []
for args, kwargs in getattr(action_fn, 'args', []):
# Assuming dest is the arg name without the leading
# hyphens if no dest is supplied
kwargs.setdefault('dest', args[0][2:])
if kwargs['dest'].startswith('action_kwarg_'):
action_kwargs.append(
kwargs['dest'][len('action_kwarg_'):])
else:
action_kwargs.append(kwargs['dest'])
kwargs['dest'] = 'action_kwarg_' + kwargs['dest']
parser.add_argument(*args, **kwargs)
parser.set_defaults(action_fn=action_fn)
parser.set_defaults(action_kwargs=action_kwargs)
parser.add_argument('action_args', nargs='*',
help=argparse.SUPPRESS)
# Define subcommand category
category_opt = cfg.SubCommandOpt('category',
title='Command categories',
help='Available categories',
handler=add_command_parsers)
def main():
"""Parse options and call the appropriate class/method."""
CONF = config.new_config()
CONF.register_cli_opt(category_opt)
try:
logging.register_options(CONF)
logging.setup(CONF, "barbican-manage")
cfg_files = cfg.find_config_files(project='barbican')
CONF(args=sys.argv[1:],
project='barbican',
prog='barbican-manage',
version=barbican.version.__version__,
default_config_files=cfg_files)
except RuntimeError as e:
sys.exit("ERROR: %s" % e)
# find sub-command and its arguments
fn = CONF.category.action_fn
fn_args = [arg.decode('utf-8') if isinstance(arg, bytes) else arg
for arg in CONF.category.action_args]
fn_kwargs = {}
for k in CONF.category.action_kwargs:
v = getattr(CONF.category, 'action_kwarg_' + k)
if v is None:
continue
if isinstance(v, bytes):
v = v.decode('utf-8')
fn_kwargs[k] = v
# call the action with the remaining arguments
try:
return fn(*fn_args, **fn_kwargs)
except Exception as e:
sys.exit("ERROR: %s" % e)
if __name__ == '__main__':
main()
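The `@args` decorator consumed by `add_command_parsers()` above is not defined in this chunk. A minimal sketch of what that code assumes might look like the following: each decorator stacks an `(args, kwargs)` tuple on the function's `args` attribute, with the top-most decorator first, so the parser builder can replay them into `argparse`. This is an illustration, not the actual barbican implementation.

```python
def args(*posargs, **kwargs):
    """Attach an argparse argument spec to the decorated function."""
    def _decorator(func):
        # Insert at the front so decorators listed top-to-bottom are
        # replayed into argparse in the same order.
        func.__dict__.setdefault('args', []).insert(0, (posargs, kwargs))
        return func
    return _decorator


class Demo(object):
    @args('--label', '-L', metavar='<label>', default='primarymkek',
          help='The label of the MKEK')
    @args('--length', '-l', metavar='<length>', default=32,
          help='Key length')
    def gen_mkek(self, label=None, length=None):
        return label, length


# add_command_parsers() reads the stacked specs via getattr(fn, 'args', [])
specs = getattr(Demo.gen_mkek, 'args')
print([spec[0] for spec in specs])
# [('--label', '-L'), ('--length', '-l')]
```

Each spec's positional tuple and keyword dict are then passed straight to `parser.add_argument(*args, **kwargs)` after the `dest` rewriting shown in `add_command_parsers()`.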


@@ -1,184 +0,0 @@
#!/usr/bin/env python
# Copyright 2010-2015 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import sys
sys.path.insert(0, os.getcwd())
from barbican.common import config
from barbican.model import clean
from barbican.model.migration import commands
from oslo_log import log
# Import and configure logging.
CONF = config.CONF
log.setup(CONF, 'barbican')
LOG = log.getLogger(__name__)
class DatabaseManager(object):
"""Database Manager class.
Builds and executes a CLI parser to manage the Barbican database
This extends the Alembic commands.
"""
def __init__(self, conf):
self.conf = conf
self.parser = self.get_main_parser()
self.subparsers = self.parser.add_subparsers(
title='subcommands',
description='Action to perform')
self.add_revision_args()
self.add_upgrade_args()
self.add_history_args()
self.add_current_args()
self.add_clean_args()
def get_main_parser(self):
"""Create top-level parser and arguments."""
parser = argparse.ArgumentParser(description='Barbican DB manager.')
parser.add_argument('--dburl', '-d', default=self.conf.sql_connection,
help='URL to the database.')
return parser
def add_revision_args(self):
"""Create 'revision' command parser and arguments."""
create_parser = self.subparsers.add_parser('revision', help='Create a '
'new DB version file.')
create_parser.add_argument('--message', '-m', default='DB change',
help='the message for the DB change')
create_parser.add_argument('--autogenerate',
help='autogenerate from models',
action='store_true')
create_parser.set_defaults(func=self.revision)
def add_upgrade_args(self):
"""Create 'upgrade' command parser and arguments."""
create_parser = self.subparsers.add_parser('upgrade',
help='Upgrade to a '
'future version DB '
'version file')
create_parser.add_argument('--version', '-v', default='head',
help='the version to upgrade to, or else '
'the latest/head if not specified.')
create_parser.set_defaults(func=self.upgrade)
def add_history_args(self):
"""Create 'history' command parser and arguments."""
create_parser = self.subparsers.add_parser(
'history',
help='List changeset scripts in chronological order.')
create_parser.add_argument('--verbose', '-V', action="store_true",
help='Show full information about the '
'revisions.')
create_parser.set_defaults(func=self.history)
def add_current_args(self):
"""Create 'current' command parser and arguments."""
create_parser = self.subparsers.add_parser(
'current',
help='Display the current revision for a database.')
create_parser.add_argument('--verbose', '-V', action="store_true",
help='Show full information about the '
'revision.')
create_parser.set_defaults(func=self.current)
def add_clean_args(self):
"""Create 'clean' command parser and arguments."""
create_parser = self.subparsers.add_parser(
'clean',
help='Clean up soft deletions in the database')
create_parser.add_argument(
'--min-days', '-m', type=int, default=90,
help='minimum number of days to keep soft deletions. default is'
' %(default)s days.')
create_parser.add_argument('--clean-unassociated-projects', '-p',
action="store_true",
help='Remove projects that have no '
'associated resources.')
create_parser.add_argument('--soft-delete-expired-secrets', '-e',
action="store_true",
help='Soft delete expired secrets.')
create_parser.add_argument('--verbose', '-V', action='store_true',
help='Show full information about the'
' cleanup')
create_parser.add_argument('--log-file', '-L',
default=CONF.log_file,
type=str,
help='Set log file location. '
'Default value for log_file can be '
'found in barbican.conf')
create_parser.set_defaults(func=self.clean)
def revision(self, args):
"""Process the 'revision' Alembic command."""
commands.generate(autogenerate=args.autogenerate,
message=args.message,
sql_url=args.dburl)
def upgrade(self, args):
"""Process the 'upgrade' Alembic command."""
LOG.debug("Performing database schema migration...")
commands.upgrade(to_version=args.version, sql_url=args.dburl)
def history(self, args):
commands.history(args.verbose, sql_url=args.dburl)
def current(self, args):
commands.current(args.verbose, sql_url=args.dburl)
def clean(self, args):
clean.clean_command(
sql_url=args.dburl,
min_num_days=args.min_days,
do_clean_unassociated_projects=args.clean_unassociated_projects,
do_soft_delete_expired_secrets=args.soft_delete_expired_secrets,
verbose=args.verbose,
log_file=args.log_file)
def execute(self):
"""Parse the command line arguments."""
args = self.parser.parse_args()
# Perform other setup here...
args.func(args)
def _exception_is_successful_exit(thrown_exception):
return (isinstance(thrown_exception, SystemExit) and
(thrown_exception.code is None or thrown_exception.code == 0))
def main():
try:
dm = DatabaseManager(CONF)
dm.execute()
except Exception as ex:
if not _exception_is_successful_exit(ex):
LOG.exception('Problem seen trying to run barbican db manage')
sys.stderr.write("ERROR: {0}\n".format(ex))
sys.exit(1)
if __name__ == '__main__':
main()
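The dispatch pattern `DatabaseManager` uses is plain `argparse`: each subparser registers its handler with `set_defaults(func=...)`, and `execute()` simply calls `args.func(args)`. A self-contained sketch of that pattern (with stand-in handlers, not the real Alembic commands):

```python
import argparse


def build_parser():
    """Build a demo parser mirroring DatabaseManager's structure."""
    parser = argparse.ArgumentParser(description='demo DB manager')
    parser.add_argument('--dburl', '-d', default='sqlite://',
                        help='URL to the database.')
    sub = parser.add_subparsers(title='subcommands')

    up = sub.add_parser('upgrade', help='Upgrade the DB schema.')
    up.add_argument('--version', '-v', default='head')
    # The handler is stored on the namespace via set_defaults(func=...)
    up.set_defaults(func=lambda a: ('upgrade', a.version, a.dburl))

    cur = sub.add_parser('current', help='Show the current revision.')
    cur.set_defaults(func=lambda a: ('current', a.dburl))
    return parser


parser = build_parser()
ns = parser.parse_args(['-d', 'sqlite:///bar.db', 'upgrade', '-v', 'abc123'])
print(ns.func(ns))  # ('upgrade', 'abc123', 'sqlite:///bar.db')
```

Note the global `--dburl` option is parsed before the subcommand, exactly as `barbican-db-manage -d <url> upgrade -v <rev>` is invoked above.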


@@ -1,9 +0,0 @@
[DEFAULT]
test_command=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m coverage run -a -m subunit.run discover -s ./cmd -t . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list


@@ -1,318 +0,0 @@
# Copyright (c) 2016 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
from testtools import testcase
from barbican.common import config as barbican_config
from barbican.tests import utils
from functionaltests.api import base
from functionaltests.api.v1.behaviors import container_behaviors
from functionaltests.api.v1.behaviors import secret_behaviors
from functionaltests.api.v1.models import container_models
from functionaltests.api.v1.models import secret_models
from functionaltests.common import config
from oslo_db.sqlalchemy import session
# Import and configure logging.
BCONF = barbican_config.CONF
CONF = config.get_config()
admin_a = CONF.rbac_users.admin_a
admin_b = CONF.rbac_users.admin_b
class DBManageTestCase(base.TestCase):
def setUp(self):
super(DBManageTestCase, self).setUp()
self.sbehaviors = secret_behaviors.SecretBehaviors(self.client)
self.cbehaviors = container_behaviors.ContainerBehaviors(self.client)
db_url = BCONF.sql_connection
time.sleep(5)
# Setup session for tests to query DB
engine = session.create_engine(db_url)
self.conn = engine.connect()
def tearDown(self):
super(DBManageTestCase, self).tearDown()
self.conn.close()
self.sbehaviors.delete_all_created_secrets()
self.cbehaviors.delete_all_created_containers()
def _create_secret_list(self,
user,
delete=False,
expiration="2050-02-28T19:14:44.180394"):
secret_defaults_data = {
"name": "AES key",
"expiration": expiration,
"algorithm": "aes",
"bit_length": 256,
"mode": "cbc",
"payload": "gF6+lLoF3ohA9aPRpt+6bQ==",
"payload_content_type": "application/octet-stream",
"payload_content_encoding": "base64",
}
secret_list = []
for i in range(0, 5):
secret_model = secret_models.SecretModel(**secret_defaults_data)
resp, secret_ref = self.sbehaviors.create_secret(secret_model,
user_name=user)
self.assertEqual(resp.status_code, 201)
self.assertIsNotNone(secret_ref)
secret_list.append(secret_ref)
if delete is True:
self._delete_secret_list(secret_list, user)
return secret_list
def _create_container_uuid_list(
self,
user,
secret_expiration="2050-02-28T19:14:44.180394",
delete_secret=False,
delete_container=False):
secret_list = self._create_secret_list(
user=user,
expiration=secret_expiration
)
container_data = {
"name": "containername",
"type": "generic",
"secret_refs": [
{
"name": "secret",
"secret_ref": secret_list[0]
}
]
}
container_list = []
for i in range(0, 5):
container_model = container_models.ContainerModel(**container_data)
post_container_resp, c_ref = self.cbehaviors.create_container(
container_model,
user_name=user)
self.assertEqual(post_container_resp.status_code, 201)
self.assertIsNotNone(c_ref)
container_list.append(c_ref)
if delete_container is True:
self._delete_container_list(container_list, user)
if delete_secret is True:
self._delete_secret_list(secret_list, user)
return container_list
def _delete_secret_list(self, secret_list, user):
for secret in secret_list:
del_resp = self.sbehaviors.delete_secret(secret, user_name=user)
self.assertEqual(del_resp.status_code, 204)
def _delete_container_list(self, container_list, user):
for container in container_list:
del_resp = self.cbehaviors.delete_container(container,
user_name=user)
self.assertEqual(del_resp.status_code, 204)
def _get_uuid(self, ref):
uuid = ref.split('/')[-1]
return uuid
@testcase.attr('positive')
def test_active_secret_not_deleted(self):
"""Verify that active secrets are not removed"""
project_a_secrets = self._create_secret_list(user=admin_a)
project_b_secrets = self._create_secret_list(user=admin_b)
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from secrets")
secret_list = []
for row in results:
secret_list.append(str(row[0]))
for secret in project_a_secrets:
secret_uuid = self._get_uuid(secret)
self.assertIn(secret_uuid, secret_list)
for secret in project_b_secrets:
secret_uuid = self._get_uuid(secret)
self.assertIn(secret_uuid, secret_list)
@testcase.attr('positive')
def test_soft_deleted_secrets_are_removed(self):
"""Test that soft deleted secrets are removed"""
project_a_secrets = self._create_secret_list(user=admin_a,
delete=True)
project_b_secrets = self._create_secret_list(user=admin_b,
delete=True)
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from secrets")
secret_list = []
for row in results:
secret_list.append(str(row[0]))
for secret in project_a_secrets:
secret_uuid = self._get_uuid(secret)
self.assertNotIn(secret_uuid, secret_list)
for secret in project_b_secrets:
secret_uuid = self._get_uuid(secret)
self.assertNotIn(secret_uuid, secret_list)
@testcase.attr('positive')
def test_expired_secrets_are_not_removed_from_db(self):
"""Test expired secrests are left in soft deleted state.
Currently this clean will set the threshold at the start
of the test. Expired secrets will be deleted and the
deleted at date will now be later then the threshold
date.
"""
current_time = utils.create_timestamp_w_tz_and_offset(seconds=10)
project_a_secrets = self._create_secret_list(user=admin_a,
expiration=current_time)
project_b_secrets = self._create_secret_list(user=admin_b,
expiration=current_time)
time.sleep(10)
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from secrets")
secret_list = []
for row in results:
secret_list.append(str(row[0]))
for secret in project_a_secrets:
secret_uuid = self._get_uuid(secret)
self.assertIn(secret_uuid, secret_list)
for secret in project_b_secrets:
secret_uuid = self._get_uuid(secret)
self.assertIn(secret_uuid, secret_list)
@testcase.attr('positive')
def test_no_soft_deleted_secrets_in_db(self):
"""Test that no soft deleted secrets are in db"""
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from secrets where deleted=1")
secret_list = []
for row in results:
secret_list.append(str(row[0]))
self.assertEqual(len(secret_list), 0)
@testcase.attr('positive')
def test_active_containers_not_deleted(self):
"""Active containers are not deleted"""
project_a_containers = self._create_container_uuid_list(
user=admin_a)
project_b_containers = self._create_container_uuid_list(
user=admin_b)
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from containers")
container_list = []
for row in results:
container_list.append(str(row[0]))
for container in project_a_containers:
container_uuid = self._get_uuid(container)
self.assertIn(container_uuid, container_list)
for container in project_b_containers:
container_uuid = self._get_uuid(container)
self.assertIn(container_uuid, container_list)
@testcase.attr('positive')
def test_cleanup_soft_deleted_containers(self):
"""Soft deleted containers are deleted"""
project_a_delete_containers = self._create_container_uuid_list(
user=admin_a,
delete_container=True)
project_b_delete_containers = self._create_container_uuid_list(
user=admin_b,
delete_container=True)
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from containers")
container_list = []
for row in results:
container_list.append(str(row[0]))
for container in project_a_delete_containers:
container_uuid = self._get_uuid(container)
self.assertNotIn(container_uuid, container_list)
for container in project_b_delete_containers:
container_uuid = self._get_uuid(container)
self.assertNotIn(container_uuid, container_list)
@testcase.attr('positive')
def test_containers_with_expired_secrets_are_deleted(self):
"""Containers with expired secrets are deleted"""
current_time = utils.create_timestamp_w_tz_and_offset(seconds=10)
project_a_delete_containers = self._create_container_uuid_list(
user=admin_a,
delete_container=True,
secret_expiration=current_time)
project_b_delete_containers = self._create_container_uuid_list(
user=admin_b,
delete_container=True,
secret_expiration=current_time)
time.sleep(10)
os.system("python barbican/cmd/db_manage.py clean -m 0 -p -e")
results = self.conn.execute("select * from containers")
container_list = []
for row in results:
container_list.append(str(row[0]))
for container in project_a_delete_containers:
container_uuid = self._get_uuid(container)
self.assertNotIn(container_uuid, container_list)
for container in project_b_delete_containers:
container_uuid = self._get_uuid(container)
self.assertNotIn(container_uuid, container_list)
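Two small helpers the tests above rely on are `utils.create_timestamp_w_tz_and_offset()` and `_get_uuid()`. Neither implementation appears in this chunk; a plausible sketch (assumed, not the actual `barbican.tests.utils` code) is an ISO-8601 timestamp a few seconds in the future, so secrets expire while the test sleeps, and a ref parser that takes the last URL path segment:

```python
import datetime


def create_timestamp_w_tz_and_offset(seconds=0):
    """Hypothetical stand-in: an ISO-8601 expiration offset into the future."""
    when = (datetime.datetime.now(datetime.timezone.utc) +
            datetime.timedelta(seconds=seconds))
    return when.isoformat()


def get_uuid(ref):
    # Secret/container refs are URLs; the UUID is the last path segment.
    return ref.split('/')[-1]


ref = 'http://localhost:9311/v1/secrets/8a3108ec-88fc-4f5c-86eb-f37b8ae8358e'
print(get_uuid(ref))  # 8a3108ec-88fc-4f5c-86eb-f37b8ae8358e
```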


@@ -1,84 +0,0 @@
#!/usr/bin/env python
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Barbican Keystone notification listener server.
"""
import eventlet
import os
import sys
# Oslo messaging notification server uses eventlet.
#
# To have remote debugging, thread module needs to be disabled.
# eventlet.monkey_patch(thread=False)
eventlet.monkey_patch()
# 'Borrowed' from the Glance project:
# If ../barbican/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'barbican', '__init__.py')):
sys.path.insert(0, possible_topdir)
from barbican.common import config
from barbican import queue
from barbican.queue import keystone_listener
from barbican import version
from oslo_log import log
from oslo_service import service
def fail(returncode, e):
sys.stderr.write("ERROR: {0}\n".format(e))
sys.exit(returncode)
def main():
try:
config.setup_remote_pydev_debug()
CONF = config.CONF
CONF(sys.argv[1:], project='barbican',
version=version.version_info.version_string)
# Import and configure logging.
log.setup(CONF, 'barbican')
LOG = log.getLogger(__name__)
LOG.info("Booting up Barbican Keystone listener node...")
# Queuing initialization
queue.init(CONF)
if getattr(getattr(CONF, queue.KS_NOTIFICATIONS_GRP_NAME), 'enable'):
service.launch(
CONF,
keystone_listener.MessageServer(CONF)
).wait()
else:
LOG.info("Exiting as Barbican Keystone listener is not enabled...")
except RuntimeError as e:
fail(1, e)
if __name__ == '__main__':
sys.exit(main())


@@ -1,168 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import base64
import json
import traceback
from oslo_db.sqlalchemy import session
from sqlalchemy import orm
from sqlalchemy.orm import scoping
from barbican.common import utils
from barbican.model import models
from barbican.plugin.crypto import p11_crypto
# Use config values from p11_crypto
CONF = p11_crypto.CONF
class KekRewrap(object):
def __init__(self, conf):
self.dry_run = False
self.db_engine = session.create_engine(conf.sql_connection)
self._session_creator = scoping.scoped_session(
orm.sessionmaker(
bind=self.db_engine,
autocommit=True
)
)
self.crypto_plugin = p11_crypto.P11CryptoPlugin(conf)
self.pkcs11 = self.crypto_plugin.pkcs11
self.plugin_name = utils.generate_fullname_for(self.crypto_plugin)
self.hsm_session = self.pkcs11.get_session()
self.new_mkek_label = self.crypto_plugin.mkek_label
self.new_hmac_label = self.crypto_plugin.hmac_label
self.new_mkek = self.crypto_plugin._get_master_key(self.new_mkek_label)
self.new_mkhk = self.crypto_plugin._get_master_key(self.new_hmac_label)
def rewrap_kek(self, project, kek):
with self.db_session.begin():
meta_dict = json.loads(kek.plugin_meta)
if self.dry_run:
msg = 'Would have unwrapped key with {} and rewrapped with {}'
print(msg.format(meta_dict['mkek_label'], self.new_mkek_label))
print('Would have updated KEKDatum in db {}'.format(kek.id))
print('Rewrapping KEK {}'.format(kek.id))
print('Pre-change IV: {}, Wrapped Key: {}'.format(
meta_dict['iv'], meta_dict['wrapped_key']))
return
session = self.hsm_session
# Get KEK's master keys
kek_mkek = self.pkcs11.get_key_handle(
meta_dict['mkek_label'], session
)
kek_mkhk = self.pkcs11.get_key_handle(
meta_dict['hmac_label'], session
)
# Decode data
iv = base64.b64decode(meta_dict['iv'])
wrapped_key = base64.b64decode(meta_dict['wrapped_key'])
hmac = base64.b64decode(meta_dict['hmac'])
# Verify HMAC
kek_data = iv + wrapped_key
self.pkcs11.verify_hmac(kek_mkhk, hmac, kek_data, session)
# Unwrap KEK
current_kek = self.pkcs11.unwrap_key(kek_mkek, iv, wrapped_key,
session)
# Wrap KEK with new master keys
new_kek = self.pkcs11.wrap_key(self.new_mkek, current_kek,
session)
# Compute HMAC for rewrapped KEK
new_kek_data = new_kek['iv'] + new_kek['wrapped_key']
new_hmac = self.pkcs11.compute_hmac(self.new_mkhk, new_kek_data,
session)
# Destroy unwrapped KEK
self.pkcs11.destroy_object(current_kek, session)
# Build updated meta dict
updated_meta = meta_dict.copy()
updated_meta['mkek_label'] = self.new_mkek_label
updated_meta['hmac_label'] = self.new_hmac_label
updated_meta['iv'] = base64.b64encode(new_kek['iv'])
updated_meta['wrapped_key'] = base64.b64encode(
new_kek['wrapped_key'])
updated_meta['hmac'] = base64.b64encode(new_hmac)
print('Post-change IV: {}, Wrapped Key: {}'.format(
updated_meta['iv'], updated_meta['wrapped_key']))
# Update KEK metadata in DB
kek.plugin_meta = p11_crypto.json_dumps_compact(updated_meta)
def get_keks_for_project(self, project):
keks = []
with self.db_session.begin() as transaction:
print('Retrieving KEKs for Project {}'.format(project.id))
query = transaction.session.query(models.KEKDatum)
query = query.filter_by(project_id=project.id)
query = query.filter_by(plugin_name=self.plugin_name)
keks = query.all()
return keks
def get_projects(self):
print('Retrieving all available projects')
projects = []
with self.db_session.begin() as transaction:
projects = transaction.session.query(models.Project).all()
return projects
@property
def db_session(self):
return self._session_creator()
def execute(self, dry_run=True):
self.dry_run = dry_run
if self.dry_run:
print('-- Running in dry-run mode --')
projects = self.get_projects()
for project in projects:
keks = self.get_keks_for_project(project)
for kek in keks:
try:
self.rewrap_kek(project, kek)
except Exception:
print('Error occurred! SQLAlchemy automatically rolled-'
'back the transaction')
traceback.print_exc()
def main():
script_desc = 'Utility to re-wrap project KEKs after rotating an MKEK.'
parser = argparse.ArgumentParser(description=script_desc)
parser.add_argument(
'--dry-run',
action='store_true',
help='Displays changes that will be made (Non-destructive)'
)
args = parser.parse_args()
rewrapper = KekRewrap(CONF)
rewrapper.execute(args.dry_run)
rewrapper.pkcs11.return_session(rewrapper.hsm_session)
if __name__ == '__main__':
main()


@@ -1,126 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import six
import sys
from barbican.plugin.crypto import pkcs11
class KeyGenerator(object):
def __init__(self, ffi=None):
self.parser = self.get_main_parser()
self.subparsers = self.parser.add_subparsers(
title='subcommands',
description='Action to perform'
)
self.add_mkek_args()
self.add_hmac_args()
self.args = self.parser.parse_args()
if not self.args.passphrase:
password = six.moves.input("Please enter your password: ")
self.pkcs11 = pkcs11.PKCS11(
library_path=self.args.library_path,
login_passphrase=self.args.passphrase or password,
rw_session=True,
slot_id=int(self.args.slot_id),
ffi=ffi
)
self.session = self.pkcs11.get_session()
def get_main_parser(self):
"""Create a top-level parser and arguments."""
parser = argparse.ArgumentParser(
description='Barbican MKEK & HMAC Generator'
)
parser.add_argument(
'--library-path',
default='/usr/lib/libCryptoki2_64.so',
help='Path to vendor PKCS11 library'
)
parser.add_argument(
'--passphrase',
default=None,
help='Password to login to PKCS11 session'
)
parser.add_argument(
'--slot-id',
default=1,
help='HSM Slot id (Should correspond to a configured PKCS11 slot)'
)
return parser
def add_mkek_args(self):
"""Create MKEK generation parser and arguments."""
create_parser = self.subparsers.add_parser('mkek', help='Generates a '
'new MKEK.')
create_parser.add_argument('--length', '-l', default=32,
help='the length of the MKEK')
create_parser.add_argument('--label', '-L', default='primarymkek',
help='the label for the MKEK')
create_parser.set_defaults(func=self.generate_mkek)
def add_hmac_args(self):
"""Create HMAC generation parser and arguments."""
create_parser = self.subparsers.add_parser('hmac', help='Generates a '
'new HMAC.')
create_parser.add_argument('--length', '-l', default=32,
help='the length of the HMACKEY')
create_parser.add_argument('--label', '-L', default='primaryhmac',
help='the label for the HMAC')
create_parser.set_defaults(func=self.generate_hmac)
def verify_label_does_not_exist(self, label, session):
key_handle = self.pkcs11.get_key_handle(label, session)
if key_handle:
print(
"The label {label} already exists! "
"Please try again.".format(label=label)
)
sys.exit(1)
def generate_mkek(self, args):
"""Process the generate MKEK with given arguments"""
self.verify_label_does_not_exist(args.label, self.session)
self.pkcs11.generate_key(int(args.length), self.session, args.label,
encrypt=True, wrap=True, master_key=True)
print("MKEK successfully generated!")
def generate_hmac(self, args):
"""Process the generate HMAC with given arguments"""
self.verify_label_does_not_exist(args.label, self.session)
self.pkcs11.generate_key(int(args.length), self.session,
args.label, sign=True,
master_key=True)
print("HMAC successfully generated!")
def execute(self):
"""Parse the command line arguments."""
try:
self.args.func(self.args)
except Exception as e:
print(e)
finally:
self.pkcs11.return_session(self.session)
def main():
kg = KeyGenerator()
kg.execute()
if __name__ == '__main__':
main()


@@ -1,169 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import base64
import json
import traceback
from oslo_db.sqlalchemy import session
from sqlalchemy import orm
from sqlalchemy.orm import scoping
from barbican.common import utils
from barbican.model import models
from barbican.plugin.crypto import p11_crypto
from barbican.plugin.crypto.pkcs11 import P11CryptoPluginException
# Use config values from p11_crypto
CONF = p11_crypto.CONF
class KekSignatureMigrator(object):
def __init__(self, db_connection, library_path, login, slot_id):
self.dry_run = False
self.db_engine = session.create_engine(db_connection)
self._session_creator = scoping.scoped_session(
orm.sessionmaker(
bind=self.db_engine,
autocommit=True
)
)
self.crypto_plugin = p11_crypto.P11CryptoPlugin(CONF)
self.plugin_name = utils.generate_fullname_for(self.crypto_plugin)
self.pkcs11 = self.crypto_plugin.pkcs11
self.session = self.pkcs11.get_session()
def recalc_kek_hmac(self, project, kek):
with self.db_session.begin():
meta_dict = json.loads(kek.plugin_meta)
iv = base64.b64decode(meta_dict['iv'])
wrapped_key = base64.b64decode(meta_dict['wrapped_key'])
hmac = base64.b64decode(meta_dict['hmac'])
kek_data = iv + wrapped_key
hmac_key = self.pkcs11.get_key_handle(
meta_dict['hmac_label'], self.session)
# Verify if hmac signature validates with new method
try:
self.pkcs11.verify_hmac(hmac_key, hmac, kek_data, self.session)
sig_good = True
except P11CryptoPluginException as e:
if 'CKR_SIGNATURE_INVALID' in str(e):
sig_good = False
else:
raise
if sig_good:
msg = 'Skipping KEK {}, good signature'
print(msg.format(kek.kek_label))
return
# Previous method failed.
# Verify if hmac signature validates with old method
try:
self.pkcs11.verify_hmac(
hmac_key, hmac, wrapped_key, self.session
)
old_sig_good = True
except P11CryptoPluginException as e:
if 'CKR_SIGNATURE_INVALID' in str(e):
old_sig_good = False
else:
raise
if not old_sig_good:
msg = "Skipping KEK {}, cannot validate with either method!"
print(msg.format(kek.kek_label))
return
if self.dry_run:
msg = 'KEK {} needs recalculation'
print(msg.format(kek.kek_label))
return
# Calculate new HMAC
new_hmac = self.pkcs11.compute_hmac(
hmac_key, kek_data, self.session
)
# Update KEK plugin_meta with new hmac signature
meta_dict['hmac'] = base64.b64encode(new_hmac)
kek.plugin_meta = p11_crypto.json_dumps_compact(meta_dict)
def get_keks_for_project(self, project):
keks = []
with self.db_session.begin() as transaction:
print('Retrieving KEKs for Project {}'.format(project.id))
query = transaction.session.query(models.KEKDatum)
query = query.filter_by(project_id=project.id)
query = query.filter_by(plugin_name=self.plugin_name)
keks = query.all()
return keks
def get_projects(self):
print('Retrieving all available projects')
projects = []
with self.db_session.begin() as transaction:
projects = transaction.session.query(models.Project).all()
return projects
@property
def db_session(self):
return self._session_creator()
def execute(self, dry_run=True):
self.dry_run = dry_run
if self.dry_run:
print('-- Running in dry-run mode --')
projects = self.get_projects()
for project in projects:
keks = self.get_keks_for_project(project)
for kek in keks:
try:
self.recalc_kek_hmac(project, kek)
except Exception:
print('Error occurred! SQLAlchemy automatically rolled-'
'back the transaction')
traceback.print_exc()
def main():
script_desc = (
'Utility to migrate existing project KEK signatures to include IV.'
)
parser = argparse.ArgumentParser(description=script_desc)
parser.add_argument(
'--dry-run',
action='store_true',
help='Displays changes that will be made (Non-destructive)'
)
args = parser.parse_args()
migrator = KekSignatureMigrator(
db_connection=CONF.sql_connection,
library_path=CONF.p11_crypto_plugin.library_path,
login=CONF.p11_crypto_plugin.login,
slot_id=CONF.p11_crypto_plugin.slot_id
)
migrator.execute(args.dry_run)
if __name__ == '__main__':
main()
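The migrator above skips KEKs whose HMAC already verifies over `iv + wrapped_key`, migrates those that only verify under the legacy method (HMAC over `wrapped_key` alone), and reports the rest as unverifiable. A minimal, self-contained sketch of that decision table (the function name and return strings here are illustrative, not part of the Barbican API):

```python
# Decision table mirroring recalc_kek_hmac(): the two booleans stand in
# for the new-method and old-method PKCS#11 HMAC verifications.

def migration_action(new_sig_valid, old_sig_valid, dry_run=False):
    """Return what the migrator would do for one KEK record."""
    if new_sig_valid:
        # Signature already covers iv + wrapped_key; nothing to do.
        return 'skip: already migrated'
    if not old_sig_valid:
        # Neither method validates; do not touch the record.
        return 'skip: cannot validate with either method'
    if dry_run:
        return 'needs recalculation'
    # Legacy signature confirmed: recompute over iv + wrapped_key.
    return 'recompute HMAC over iv + wrapped_key'

print(migration_action(True, False))
print(migration_action(False, True))
```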


@@ -1,76 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2015 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Barbican worker server, running a periodic retry/scheduler process.
"""
import eventlet
import os
import sys
# Oslo messaging RPC server uses eventlet.
eventlet.monkey_patch()
# 'Borrowed' from the Glance project:
# If ../barbican/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'barbican', '__init__.py')):
sys.path.insert(0, possible_topdir)
from barbican.common import config
from barbican import queue
from barbican.queue import retry_scheduler
from barbican import version
from oslo_log import log
from oslo_service import service
def fail(returncode, e):
sys.stderr.write("ERROR: {0}\n".format(e))
sys.exit(returncode)
def main():
try:
CONF = config.CONF
CONF(sys.argv[1:], project='barbican',
version=version.version_info.version_string)
# Import and configure logging.
log.setup(CONF, 'barbican-retry-scheduler')
LOG = log.getLogger(__name__)
LOG.debug("Booting up Barbican worker retry/scheduler node...")
# Queuing initialization (as a client only).
queue.init(CONF, is_server_side=False)
service.launch(
CONF,
retry_scheduler.PeriodicServer()
).wait()
except RuntimeError as e:
fail(1, e)
if __name__ == '__main__':
main()


@@ -1,76 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013-2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Barbican worker server.
"""
import eventlet
import os
import sys
# Oslo messaging RPC server uses eventlet.
eventlet.monkey_patch()
# 'Borrowed' from the Glance project:
# If ../barbican/__init__.py exists, add ../ to Python search path, so that
# it will override what happens to be installed in /usr/(local/)lib/python...
possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
os.pardir,
os.pardir))
if os.path.exists(os.path.join(possible_topdir, 'barbican', '__init__.py')):
sys.path.insert(0, possible_topdir)
from barbican.common import config
from barbican import queue
from barbican.queue import server
from barbican import version
from oslo_log import log
from oslo_service import service
def fail(returncode, e):
sys.stderr.write("ERROR: {0}\n".format(e))
sys.exit(returncode)
def main():
try:
CONF = config.CONF
CONF(sys.argv[1:], project='barbican',
version=version.version_info.version_string)
# Import and configure logging.
log.setup(CONF, 'barbican')
LOG = log.getLogger(__name__)
LOG.debug("Booting up Barbican worker node...")
# Queuing initialization
queue.init(CONF)
service.launch(
CONF,
server.TaskServer(),
workers=CONF.queue.asynchronous_workers
).wait()
except RuntimeError as e:
fail(1, e)
if __name__ == '__main__':
main()


@@ -1,359 +0,0 @@
# Copyright (c) 2013-2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Configuration setup for Barbican.
"""
import logging
import os
from oslo_config import cfg
from oslo_log import log
from oslo_middleware import cors
from oslo_service import _options
from barbican import i18n as u
import barbican.version
MAX_BYTES_REQUEST_INPUT_ACCEPTED = 15000
DEFAULT_MAX_SECRET_BYTES = 10000
KS_NOTIFICATIONS_GRP_NAME = 'keystone_notifications'
context_opts = [
cfg.StrOpt('admin_role', default='admin',
help=u._('Role used to identify an authenticated user as '
'administrator.')),
cfg.BoolOpt('allow_anonymous_access', default=False,
help=u._('Allow unauthenticated users to access the API with '
'read-only privileges. This only applies when using '
'ContextMiddleware.')),
]
common_opts = [
cfg.IntOpt('max_allowed_request_size_in_bytes',
default=MAX_BYTES_REQUEST_INPUT_ACCEPTED,
help=u._("Maximum allowed http request size against the "
"barbican-api.")),
cfg.IntOpt('max_allowed_secret_in_bytes',
default=DEFAULT_MAX_SECRET_BYTES,
help=u._("Maximum allowed secret size in bytes.")),
]
host_opts = [
cfg.StrOpt('host_href', default='http://localhost:9311',
help=u._("Host name, for use in HATEOAS-style references Note: "
"Typically this would be the load balanced endpoint "
"that clients would use to communicate back with this "
"service. If a deployment wants to derive host from "
"wsgi request instead then make this blank. Blank is "
"needed to override default config value which is "
"'http://localhost:9311'")),
]
db_opts = [
cfg.StrOpt('sql_connection',
default="sqlite:///barbican.sqlite",
secret=True,
help=u._("SQLAlchemy connection string for the reference "
"implementation registry server. Any valid "
"SQLAlchemy connection string is fine. See: "
"http://www.sqlalchemy.org/docs/05/reference/"
"sqlalchemy/connections.html#sqlalchemy."
"create_engine. Note: For absolute addresses, use "
"'////' slashes after 'sqlite:'.")),
cfg.IntOpt('sql_idle_timeout', default=3600,
help=u._("Period in seconds after which SQLAlchemy should "
"reestablish its connection to the database. MySQL "
"uses a default `wait_timeout` of 8 hours, after "
"which it will drop idle connections. This can result "
"in 'MySQL Gone Away' exceptions. If you notice this, "
"you can lower this value to ensure that SQLAlchemy "
"reconnects before MySQL can drop the connection.")),
cfg.IntOpt('sql_max_retries', default=60,
help=u._("Maximum number of database connection retries "
"during startup. Set to -1 to specify an infinite "
"retry count.")),
cfg.IntOpt('sql_retry_interval', default=1,
help=u._("Interval between retries of opening a SQL "
"connection.")),
cfg.BoolOpt('db_auto_create', default=True,
help=u._("Create the Barbican database on service startup.")),
cfg.IntOpt('max_limit_paging', default=100,
help=u._("Maximum page size for the 'limit' paging URL "
"parameter.")),
cfg.IntOpt('default_limit_paging', default=10,
help=u._("Default page size for the 'limit' paging URL "
"parameter.")),
cfg.StrOpt('sql_pool_class', default="QueuePool",
help=u._("Accepts a class imported from the sqlalchemy.pool "
"module, and handles the details of building the "
"pool for you. If commented out, SQLAlchemy will "
"select based on the database dialect. Other options "
"are QueuePool (for SQLAlchemy-managed connections) "
"and NullPool (to disable SQLAlchemy management of "
"connections). See http://docs.sqlalchemy.org/en/"
"latest/core/pooling.html for more details")),
cfg.BoolOpt('sql_pool_logging', default=False,
help=u._("Show SQLAlchemy pool-related debugging output in "
"logs (sets DEBUG log level output) if specified.")),
cfg.IntOpt('sql_pool_size', default=5,
help=u._("Size of pool used by SQLAlchemy. This is the largest "
"number of connections that will be kept persistently "
"in the pool. Can be set to 0 to indicate no size "
"limit. To disable pooling, use a NullPool with "
"sql_pool_class instead. Comment out to allow "
"SQLAlchemy to select the default.")),
cfg.IntOpt('sql_pool_max_overflow', default=10,
help=u._("The maximum overflow size of the pool used by "
"SQLAlchemy. When the number of checked-out "
"connections reaches the size set in sql_pool_size, "
"additional connections will be returned up to this "
"limit. It follows then that the total number of "
"simultaneous connections the pool will allow is "
"sql_pool_size + sql_pool_max_overflow. Can be set "
"to -1 to indicate no overflow limit, so no limit "
"will be placed on the total number of concurrent "
"connections. Comment out to allow SQLAlchemy to "
"select the default.")),
]
retry_opt_group = cfg.OptGroup(name='retry_scheduler',
title='Retry/Scheduler Options')
retry_opts = [
cfg.FloatOpt(
'initial_delay_seconds', default=10.0,
help=u._('Seconds (float) to wait before starting retry scheduler')),
cfg.FloatOpt(
'periodic_interval_max_seconds', default=10.0,
help=u._('Seconds (float) to wait between periodic schedule events')),
]
queue_opt_group = cfg.OptGroup(name='queue',
title='Queue Application Options')
queue_opts = [
cfg.BoolOpt('enable', default=False,
help=u._('True enables queuing, False invokes '
'workers synchronously')),
cfg.StrOpt('namespace', default='barbican',
help=u._('Queue namespace')),
cfg.StrOpt('topic', default='barbican.workers',
help=u._('Queue topic name')),
cfg.StrOpt('version', default='1.1',
help=u._('Version of tasks invoked via queue')),
cfg.StrOpt('server_name', default='barbican.queue',
help=u._('Server name for RPC task processing server')),
cfg.IntOpt('asynchronous_workers', default=1,
help=u._('Number of asynchronous worker processes')),
]
ks_queue_opt_group = cfg.OptGroup(name=KS_NOTIFICATIONS_GRP_NAME,
title='Keystone Notification Options')
ks_queue_opts = [
cfg.BoolOpt('enable', default=False,
help=u._('True enables keystone notification listener '
'functionality.')),
cfg.StrOpt('control_exchange', default='openstack',
help=u._('The default exchange under which topics are scoped. '
'May be overridden by an exchange name specified in '
'the transport_url option.')),
cfg.StrOpt('topic', default='notifications',
help=u._("Keystone notification queue topic name. This name "
"needs to match one of values mentioned in Keystone "
"deployment's 'notification_topics' configuration "
"e.g. notification_topics=notifications, "
"barbican_notifications. "
"Multiple servers may listen on a topic and messages "
"will be dispatched to one of the servers in a "
"round-robin fashion. That's why the Barbican service "
"should have its own dedicated notification queue so "
"that it receives all of Keystone notifications.")),
cfg.BoolOpt('allow_requeue', default=False,
help=u._('True enables requeue feature in case of notification'
' processing error. Enable this only when underlying '
'transport supports this feature.')),
cfg.StrOpt('version', default='1.0',
help=u._('Version of tasks invoked via notifications')),
cfg.IntOpt('thread_pool_size', default=10,
help=u._('Define the number of max threads to be used for '
'notification server processing functionality.')),
]
quota_opt_group = cfg.OptGroup(name='quotas',
title='Quota Options')
quota_opts = [
cfg.IntOpt('quota_secrets',
default=-1,
help=u._('Number of secrets allowed per project')),
cfg.IntOpt('quota_orders',
default=-1,
help=u._('Number of orders allowed per project')),
cfg.IntOpt('quota_containers',
default=-1,
help=u._('Number of containers allowed per project')),
cfg.IntOpt('quota_consumers',
default=-1,
help=u._('Number of consumers allowed per project')),
cfg.IntOpt('quota_cas',
default=-1,
help=u._('Number of CAs allowed per project'))
]
def list_opts():
yield None, context_opts
yield None, common_opts
yield None, host_opts
yield None, db_opts
yield None, _options.eventlet_backdoor_opts
yield retry_opt_group, retry_opts
yield queue_opt_group, queue_opts
yield ks_queue_opt_group, ks_queue_opts
yield quota_opt_group, quota_opts
# Flag to indicate barbican configuration is already parsed once or not
_CONFIG_PARSED_ONCE = False
def parse_args(conf, args=None, usage=None, default_config_files=None):
global _CONFIG_PARSED_ONCE
conf(args=args if args else [],
project='barbican',
prog='barbican',
version=barbican.version.__version__,
usage=usage,
default_config_files=default_config_files)
conf.pydev_debug_host = os.environ.get('PYDEV_DEBUG_HOST')
conf.pydev_debug_port = os.environ.get('PYDEV_DEBUG_PORT')
# Assign cfg.CONF handle to parsed barbican configuration once at startup
# only. No need to keep re-assigning it with separate plugin conf usage
if not _CONFIG_PARSED_ONCE:
cfg.CONF = conf
_CONFIG_PARSED_ONCE = True
def new_config():
conf = cfg.ConfigOpts()
log.register_options(conf)
conf.register_opts(context_opts)
conf.register_opts(common_opts)
conf.register_opts(host_opts)
conf.register_opts(db_opts)
conf.register_opts(_options.eventlet_backdoor_opts)
conf.register_opts(_options.periodic_opts)
conf.register_opts(_options.ssl_opts, "ssl")
conf.register_group(retry_opt_group)
conf.register_opts(retry_opts, group=retry_opt_group)
conf.register_group(queue_opt_group)
conf.register_opts(queue_opts, group=queue_opt_group)
conf.register_group(ks_queue_opt_group)
conf.register_opts(ks_queue_opts, group=ks_queue_opt_group)
conf.register_group(quota_opt_group)
conf.register_opts(quota_opts, group=quota_opt_group)
# Update default values from libraries that carry their own oslo.config
# initialization and configuration.
set_middleware_defaults()
return conf
def setup_remote_pydev_debug():
"""Required setup for remote debugging."""
if CONF.pydev_debug_host and CONF.pydev_debug_port:
try:
try:
from pydev import pydevd
except ImportError:
import pydevd
pydevd.settrace(CONF.pydev_debug_host,
port=int(CONF.pydev_debug_port),
stdoutToServer=True,
stderrToServer=True)
except Exception:
LOG.exception('Unable to join debugger, please '
'make sure that the debugger process is '
'listening on debug-host \'%(debug-host)s\' '
'debug-port \'%(debug-port)s\'.',
{'debug-host': CONF.pydev_debug_host,
'debug-port': CONF.pydev_debug_port})
raise
def set_middleware_defaults():
"""Update default configuration options for oslo.middleware."""
cors.set_defaults(
allow_headers=['X-Auth-Token',
'X-Openstack-Request-Id',
'X-Project-Id',
'X-Identity-Status',
'X-User-Id',
'X-Storage-Token',
'X-Domain-Id',
'X-User-Domain-Id',
'X-Project-Domain-Id',
'X-Roles'],
expose_headers=['X-Auth-Token',
'X-Openstack-Request-Id',
'X-Project-Id',
'X-Identity-Status',
'X-User-Id',
'X-Storage-Token',
'X-Domain-Id',
'X-User-Domain-Id',
'X-Project-Domain-Id',
'X-Roles'],
allow_methods=['GET',
'PUT',
'POST',
'DELETE',
'PATCH']
)
CONF = new_config()
LOG = logging.getLogger(__name__)
parse_args(CONF)
# Adding global scope dict for all different configs created in various
# modules. In barbican, each plugin module creates its own *new* config
# instance so its error prone to share/access config values across modules
# as these module imports introduce a cyclic dependency. To avoid this, each
# plugin can set this dict after its own config instance is created and parsed.
_CONFIGS = {}
def set_module_config(name, module_conf):
"""Each plugin can set its own conf instance with its group name."""
_CONFIGS[name] = module_conf
def get_module_config(name):
"""Get handle to plugin specific config instance by its group name."""
return _CONFIGS[name]


@@ -1,396 +0,0 @@
# Copyright (c) 2013-2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Barbican exception subclasses
"""
from barbican import i18n as u
_FATAL_EXCEPTION_FORMAT_ERRORS = False
class BarbicanException(Exception):
"""Base Barbican Exception
To correctly use this class, inherit from it and define
a 'message' property. That message will get printf'd
with the keyword arguments provided to the constructor.
"""
message = u._("An unknown exception occurred")
def __init__(self, message_arg=None, *args, **kwargs):
if not message_arg:
message_arg = self.message
try:
self.message = message_arg % kwargs
except Exception as e:
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise e
else:
# at least get the core message out if something happened
pass
super(BarbicanException, self).__init__(self.message)
class BarbicanHTTPException(BarbicanException):
"""Base Barbican Exception to handle HTTP responses
To correctly use this class, inherit from it and define the following
properties:
- message: The message that will be displayed in the server log.
- client_message: The message that will actually be outputted to the
client.
- status_code: The HTTP status code that should be returned.
The default status code is 500.
"""
client_message = u._("failure seen - please contact site administrator.")
status_code = 500
def __init__(self, message_arg=None, client_message=None, *args, **kwargs):
if not client_message:
client_message = self.client_message
try:
self.client_message = client_message % kwargs
except Exception as e:
if _FATAL_EXCEPTION_FORMAT_ERRORS:
raise e
else:
# at least get the core message out if something happened
pass
super(BarbicanHTTPException, self).__init__(
message_arg, self.client_message, *args, **kwargs)
class MissingArgumentError(BarbicanException):
message = u._("Missing required argument.")
class MissingMetadataField(BarbicanHTTPException):
message = u._("Missing required metadata field for %(required)s")
client_message = message
status_code = 400
class InvalidMetadataRequest(BarbicanHTTPException):
message = u._("Invalid Metadata. Keys and Values must be Strings.")
client_message = message
status_code = 400
class InvalidMetadataKey(BarbicanHTTPException):
message = u._("Invalid Key. Key must be URL safe.")
client_message = message
status_code = 400
class InvalidSubjectDN(BarbicanHTTPException):
message = u._("Invalid subject DN: %(subject_dn)s")
client_message = message
status_code = 400
class InvalidContainer(BarbicanHTTPException):
message = u._("Invalid container: %(reason)s")
client_message = message
status_code = 400
class InvalidExtensionsData(BarbicanHTTPException):
message = u._("Invalid extensions data.")
client_message = message
status_code = 400
class InvalidCMCData(BarbicanHTTPException):
message = u._("Invalid CMC Data")
client_message = message
status_code = 400
class InvalidPKCS10Data(BarbicanHTTPException):
message = u._("Invalid PKCS10 Data: %(reason)s")
client_message = message
status_code = 400
class InvalidCertificateRequestType(BarbicanHTTPException):
message = u._("Invalid Certificate Request Type")
client_message = message
status_code = 400
class CertificateExtensionsNotSupported(BarbicanHTTPException):
message = u._("Extensions are not yet supported. "
"Specify a valid profile instead.")
client_message = message
status_code = 400
class FullCMCNotSupported(BarbicanHTTPException):
message = u._("Full CMC Requests are not yet supported.")
client_message = message
status_code = 400
class NotFound(BarbicanException):
message = u._("An object with the specified identifier was not found.")
class ConstraintCheck(BarbicanException):
message = u._("A defined SQL constraint check failed: %(error)s")
class NotSupported(BarbicanException):
message = u._("Operation is not supported.")
class Invalid(BarbicanException):
message = u._("Data supplied was not valid.")
class NoDataToProcess(BarbicanHTTPException):
message = u._("No data supplied to process.")
client_message = message
status_code = 400
class LimitExceeded(BarbicanHTTPException):
message = u._("The request returned a 413 Request Entity Too Large. This "
"generally means that rate limiting or a quota threshold "
"was breached.")
client_message = u._("Provided information too large to process")
status_code = 413
def __init__(self, *args, **kwargs):
super(LimitExceeded, self).__init__(*args, **kwargs)
self.retry_after = (int(kwargs['retry']) if kwargs.get('retry')
else None)
class InvalidObject(BarbicanHTTPException):
status_code = 400
def __init__(self, *args, **kwargs):
self.invalid_property = kwargs.get('property')
self.message = u._("Failed to validate JSON information: ")
self.client_message = u._("Provided object does not match "
"schema '{schema}': "
"{reason}. Invalid property: "
"'{property}'").format(*args, **kwargs)
self.message = self.message + self.client_message
super(InvalidObject, self).__init__(*args, **kwargs)
class PayloadDecodingError(BarbicanHTTPException):
status_code = 400
message = u._("Error while attempting to decode payload.")
client_message = u._("Unable to decode request data.")
class UnsupportedField(BarbicanHTTPException):
message = u._("No support for value set on field '%(field)s' on "
"schema '%(schema)s': %(reason)s")
client_message = u._("Provided field value is not supported")
status_code = 400
def __init__(self, *args, **kwargs):
super(UnsupportedField, self).__init__(*args, **kwargs)
self.invalid_field = kwargs.get('field')
class FeatureNotImplemented(BarbicanException):
message = u._("Feature not implemented for value set on field "
"'%(field)s' on schema '%(schema)s': %(reason)s")
def __init__(self, *args, **kwargs):
super(FeatureNotImplemented, self).__init__(*args, **kwargs)
self.invalid_field = kwargs.get('field')
class StoredKeyContainerNotFound(BarbicanException):
message = u._("Container %(container_id)s does not exist for stored "
"key certificate generation.")
class StoredKeyPrivateKeyNotFound(BarbicanException):
message = u._("Container %(container_id)s does not reference a private "
"key needed for stored key certificate generation.")
class ProvidedTransportKeyNotFound(BarbicanHTTPException):
message = u._("Provided Transport key %(transport_key_id)s "
"could not be found")
client_message = u._("Provided transport key was not found.")
status_code = 400
class InvalidCAID(BarbicanHTTPException):
message = u._("Invalid CA_ID: %(ca_id)s")
client_message = u._("The ca_id provided in the request is invalid")
status_code = 400
class CANotDefinedForProject(BarbicanHTTPException):
message = u._("CA specified by ca_id %(ca_id)s not defined for project: "
"%(project_id)s")
client_message = u._("The ca_id provided in the request is not defined "
"for this project")
status_code = 403
class QuotaReached(BarbicanHTTPException):
message = u._("Quota reached for project %(external_project_id)s. Only "
"%(quota)s %(resource_type)s are allowed.")
client_message = u._("Creation not allowed because a quota has "
"been reached")
status_code = 403
def __init__(self, *args, **kwargs):
super(QuotaReached, self).__init__(*args, **kwargs)
self.external_project_id = kwargs.get('external_project_id')
self.quota = kwargs.get('quota')
self.resource_type = kwargs.get('resource_type')
class InvalidParentCA(BarbicanHTTPException):
message = u._("Invalid Parent CA: %(parent_ca_ref)s")
client_message = message
status_code = 400
class SubCAsNotSupported(BarbicanHTTPException):
message = u._("Plugin does not support generation of subordinate CAs")
client_message = message
status_code = 400
class SubCANotCreated(BarbicanHTTPException):
message = u._("Errors in creating subordinate CA: %(name)s")
client_message = message
class CannotDeleteBaseCA(BarbicanHTTPException):
message = u._("Only subordinate CAs can be deleted.")
status_code = 403
class UnauthorizedSubCA(BarbicanHTTPException):
message = u._("Subordinate CA is not owned by this project")
client_message = message
status_code = 403
class CannotDeletePreferredCA(BarbicanHTTPException):
message = u._("A new project preferred CA must be set "
"before this one can be deleted.")
status_code = 409
class BadSubCACreationRequest(BarbicanHTTPException):
message = u._("Errors returned by CA when attempting to "
"create subordinate CA: %(reason)s")
client_message = message
status_code = 400
class SubCACreationErrors(BarbicanHTTPException):
message = u._("Errors returned by CA when attempting to create "
"subordinate CA: %(reason)s")
client_message = message
class SubCADeletionErrors(BarbicanHTTPException):
message = u._("Errors returned by CA when attempting to delete "
"subordinate CA: %(reason)s")
client_message = message
class PKCS11Exception(BarbicanException):
message = u._("There was an error with the PKCS#11 library.")
class P11CryptoPluginKeyException(PKCS11Exception):
message = u._("More than one key found for label")
class P11CryptoPluginException(PKCS11Exception):
message = u._("General exception")
class P11CryptoKeyHandleException(PKCS11Exception):
message = u._("No key handle was found")
class P11CryptoTokenException(PKCS11Exception):
message = u._("No token was found in slot %(slot_id)s")
class MultipleStorePreferredPluginMissing(BarbicanException):
"""Raised when a preferred plugin is missing in service configuration."""
def __init__(self, store_name):
super(MultipleStorePreferredPluginMissing, self).__init__(
u._("Preferred Secret Store plugin '{store_name}' is not "
"currently set in service configuration. This is probably a "
"server misconfiguration.").format(
store_name=store_name)
)
self.store_name = store_name
class MultipleStorePluginStillInUse(BarbicanException):
"""Raised when a used plugin is missing in service configuration."""
def __init__(self, store_name):
super(MultipleStorePluginStillInUse, self).__init__(
u._("Secret Store plugin '{store_name}' is still in use and can "
"not be removed. It is missing in service configuration. This is"
" probably a server misconfiguration.").format(
store_name=store_name)
)
self.store_name = store_name
class MultipleSecretStoreLookupFailed(BarbicanException):
"""Raised when a plugin lookup suffix is missing during config read."""
def __init__(self):
msg = u._("Plugin lookup property 'stores_lookup_suffix' is not "
"defined in service configuration")
super(MultipleSecretStoreLookupFailed, self).__init__(msg)
class MultipleStoreIncorrectGlobalDefault(BarbicanException):
"""Raised when a global default for only one plugin is not set to True."""
def __init__(self, occurrence):
msg = None
if occurrence > 1:
msg = u._("There are {count} plugins with global default as "
"True in service configuration. Only one plugin can have"
" this as True").format(count=occurrence)
else:
msg = u._("There is no plugin defined with global default as True."
" One of the plugins must be identified as the global default")
super(MultipleStoreIncorrectGlobalDefault, self).__init__(msg)
class MultipleStorePluginValueMissing(BarbicanException):
"""Raised when a store plugin value is missing in service configuration."""
def __init__(self, section_name):
super(MultipleStorePluginValueMissing, self).__init__(
u._("In section '{0}', secret_store_plugin value is missing"
).format(section_name)
)
self.section_name = section_name
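The `BarbicanException` docstring above says subclasses define a `message` template that gets printf'd with the constructor's keyword arguments. A standalone sketch of that pattern, with `u._` dropped so it runs outside Barbican (the `NoTokenFound` subclass name is illustrative):

```python
# Sketch of the Barbican exception message-interpolation pattern.
class BarbicanException(Exception):
    message = "An unknown exception occurred"

    def __init__(self, message_arg=None, **kwargs):
        if not message_arg:
            message_arg = self.message
        try:
            # printf-style interpolation with the provided kwargs
            self.message = message_arg % kwargs
        except Exception:
            pass  # fall back to the unformatted class-level message
        super().__init__(self.message)

class NoTokenFound(BarbicanException):
    message = "No token was found in slot %(slot_id)s"

e = NoTokenFound(slot_id=2)
print(e.message)  # "No token was found in slot 2"
```

Note the swallowed formatting error: when `_FATAL_EXCEPTION_FORMAT_ERRORS` is off (as in the module above), a bad template still surfaces the unformatted core message rather than raising.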


@@ -1,172 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from barbican.common import utils
def convert_resource_id_to_href(resource_slug, resource_id):
"""Convert the resource ID to a HATEOAS-style href with resource slug."""
if resource_id:
resource = '{slug}/{id}'.format(slug=resource_slug, id=resource_id)
else:
resource = '{slug}/????'.format(slug=resource_slug)
return utils.hostname_for_refs(resource=resource)
def convert_secret_to_href(secret_id):
"""Convert the secret IDs to a HATEOAS-style href."""
return convert_resource_id_to_href('secrets', secret_id)
def convert_order_to_href(order_id):
"""Convert the order IDs to a HATEOAS-style href."""
return convert_resource_id_to_href('orders', order_id)
def convert_container_to_href(container_id):
"""Convert the container IDs to a HATEOAS-style href."""
return convert_resource_id_to_href('containers', container_id)
def convert_transport_key_to_href(transport_key_id):
"""Convert the transport key IDs to a HATEOAS-style href."""
return convert_resource_id_to_href('transport_keys', transport_key_id)
def convert_consumer_to_href(consumer_id):
"""Convert the consumer ID to a HATEOAS-style href."""
return convert_resource_id_to_href('containers', consumer_id) + '/consumers'
def convert_user_meta_to_href(secret_id):
"""Convert the consumer ID to a HATEOAS-style href."""
return convert_resource_id_to_href('secrets', secret_id) + '/metadata'
def convert_certificate_authority_to_href(ca_id):
"""Convert the ca ID to a HATEOAS-style href."""
return convert_resource_id_to_href('cas', ca_id)
def convert_secret_stores_to_href(secret_store_id):
"""Convert the secret-store ID to a HATEOAS-style href."""
return convert_resource_id_to_href('secret-stores', secret_store_id)
# TODO(hgedikli) handle list of fields in here
def convert_to_hrefs(fields):
"""Convert id's within a fields dict to HATEOAS-style hrefs."""
if 'secret_id' in fields:
fields['secret_ref'] = convert_secret_to_href(fields['secret_id'])
del fields['secret_id']
if 'order_id' in fields:
fields['order_ref'] = convert_order_to_href(fields['order_id'])
del fields['order_id']
if 'container_id' in fields:
fields['container_ref'] = convert_container_to_href(
fields['container_id'])
del fields['container_id']
if 'transport_key_id' in fields:
fields['transport_key_ref'] = convert_transport_key_to_href(
fields['transport_key_id'])
del fields['transport_key_id']
return fields
def convert_list_to_href(resources_name, offset, limit):
"""Supports pretty output of paged-list hrefs.
Convert the offset/limit info to a HATEOAS-style href
suitable for use in a list navigation paging interface.
"""
resource = '{0}?limit={1}&offset={2}'.format(resources_name, limit,
offset)
return utils.hostname_for_refs(resource=resource)
def previous_href(resources_name, offset, limit):
"""Supports pretty output of previous-page hrefs.
Create a HATEOAS-style 'previous' href suitable for use in a list
navigation paging interface, assuming the provided values are the
currently viewed page.
"""
offset = max(0, offset - limit)
return convert_list_to_href(resources_name, offset, limit)
def next_href(resources_name, offset, limit):
"""Supports pretty output of next-page hrefs.
Create a HATEOAS-style 'next' href suitable for use in a list
navigation paging interface, assuming the provided values are the
currently viewed page.
"""
offset = offset + limit
return convert_list_to_href(resources_name, offset, limit)
def add_nav_hrefs(resources_name, offset, limit,
total_elements, data):
"""Adds next and/or previous hrefs to paged list responses.
:param resources_name: Name of api resource
    :param offset: Element number (i.e. index) where current page starts
:param limit: Max amount of elements listed on current page
:param total_elements: Total number of elements
:returns: augmented dictionary with next and/or previous hrefs
"""
if offset > 0:
data.update({'previous': previous_href(resources_name,
offset,
limit)})
if total_elements > (offset + limit):
data.update({'next': next_href(resources_name,
offset,
limit)})
return data
def get_container_id_from_ref(container_ref):
"""Parse a container reference and return the container ID
TODO(Dave) Add some extra checking for valid prefix
The container ID is the right-most element of the URL
:param container_ref: HTTP reference of container
:return: a string containing the ID of the container
"""
container_id = container_ref.rsplit('/', 1)[1]
return container_id
def get_secret_id_from_ref(secret_ref):
"""Parse a secret reference and return the secret ID
:param secret_ref: HTTP reference of secret
:return: a string containing the ID of the secret
"""
secret_id = secret_ref.rsplit('/', 1)[1]
return secret_id
def get_ca_id_from_ref(ca_ref):
"""Parse a ca_ref and return the CA ID
    :param ca_ref: HTTP reference of the CA
:return: a string containing the ID of the CA
"""
ca_id = ca_ref.rsplit('/', 1)[1]
return ca_id
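The paging helpers above reduce to simple offset arithmetic plus string formatting. A minimal standalone sketch (the `BASE` constant is a stand-in for `utils.hostname_for_refs`, which in the real service derives the host from configuration or the WSGI request):

```python
# Stand-in for the service base URL normally produced by
# utils.hostname_for_refs(); the port here is illustrative.
BASE = 'http://localhost:9311/v1'

def convert_list_to_href(resources_name, offset, limit):
    return '{0}/{1}?limit={2}&offset={3}'.format(
        BASE, resources_name, limit, offset)

def previous_href(resources_name, offset, limit):
    # Clamp at zero so the first page never gets a negative offset.
    return convert_list_to_href(resources_name, max(0, offset - limit), limit)

def next_href(resources_name, offset, limit):
    return convert_list_to_href(resources_name, offset + limit, limit)

def add_nav_hrefs(resources_name, offset, limit, total_elements, data):
    # 'previous' only exists past the first page; 'next' only when
    # elements remain beyond the current page.
    if offset > 0:
        data['previous'] = previous_href(resources_name, offset, limit)
    if total_elements > (offset + limit):
        data['next'] = next_href(resources_name, offset, limit)
    return data

# Page 2 of 25 secrets, 10 per page: both links are generated.
page = add_nav_hrefs('secrets', 10, 10, 25, {})
```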


@@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import itertools
from barbican.common.policies import acls
from barbican.common.policies import base
from barbican.common.policies import cas
from barbican.common.policies import consumers
from barbican.common.policies import containers
from barbican.common.policies import orders
from barbican.common.policies import quotas
from barbican.common.policies import secretmeta
from barbican.common.policies import secrets
from barbican.common.policies import secretstores
from barbican.common.policies import transportkeys
from barbican.common.policies import versions
def list_rules():
return itertools.chain(
acls.list_rules(),
base.list_rules(),
cas.list_rules(),
consumers.list_rules(),
containers.list_rules(),
orders.list_rules(),
quotas.list_rules(),
secretmeta.list_rules(),
secrets.list_rules(),
secretstores.list_rules(),
transportkeys.list_rules(),
versions.list_rules(),
)


@@ -1,38 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('secret_acls:put_patch',
'rule:secret_project_admin or '
'rule:secret_project_creator'),
policy.RuleDefault('secret_acls:delete',
'rule:secret_project_admin or '
'rule:secret_project_creator'),
policy.RuleDefault('secret_acls:get',
'rule:all_but_audit and '
'rule:secret_project_match'),
policy.RuleDefault('container_acls:put_patch',
'rule:container_project_admin or '
'rule:container_project_creator'),
policy.RuleDefault('container_acls:delete',
'rule:container_project_admin or '
'rule:container_project_creator'),
policy.RuleDefault('container_acls:get',
'rule:all_but_audit and rule:container_project_match'),
]
def list_rules():
return rules


@@ -1,77 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('admin',
'role:admin'),
policy.RuleDefault('observer',
'role:observer'),
policy.RuleDefault('creator',
'role:creator'),
policy.RuleDefault('audit',
'role:audit'),
policy.RuleDefault('service_admin',
'role:key-manager:service-admin'),
policy.RuleDefault('admin_or_user_does_not_work',
'project_id:%(project_id)s'),
policy.RuleDefault('admin_or_user',
'rule:admin or project_id:%(project_id)s'),
policy.RuleDefault('admin_or_creator',
'rule:admin or rule:creator'),
policy.RuleDefault('all_but_audit',
'rule:admin or rule:observer or rule:creator'),
policy.RuleDefault('all_users',
'rule:admin or rule:observer or rule:creator or '
'rule:audit or rule:service_admin'),
policy.RuleDefault('secret_project_match',
'project:%(target.secret.project_id)s'),
policy.RuleDefault('secret_acl_read',
"'read':%(target.secret.read)s"),
policy.RuleDefault('secret_private_read',
"'False':%(target.secret.read_project_access)s"),
policy.RuleDefault('secret_creator_user',
"user:%(target.secret.creator_id)s"),
policy.RuleDefault('container_project_match',
"project:%(target.container.project_id)s"),
policy.RuleDefault('container_acl_read',
"'read':%(target.container.read)s"),
policy.RuleDefault('container_private_read',
"'False':%(target.container.read_project_access)s"),
policy.RuleDefault('container_creator_user',
"user:%(target.container.creator_id)s"),
policy.RuleDefault('secret_non_private_read',
"rule:all_users and rule:secret_project_match and "
"not rule:secret_private_read"),
policy.RuleDefault('secret_decrypt_non_private_read',
"rule:all_but_audit and rule:secret_project_match and "
"not rule:secret_private_read"),
policy.RuleDefault('container_non_private_read',
"rule:all_users and rule:container_project_match and "
"not rule:container_private_read"),
policy.RuleDefault('secret_project_admin',
"rule:admin and rule:secret_project_match"),
policy.RuleDefault('secret_project_creator',
"rule:creator and rule:secret_project_match and "
"rule:secret_creator_user"),
policy.RuleDefault('container_project_admin',
"rule:admin and rule:container_project_match"),
policy.RuleDefault('container_project_creator',
"rule:creator and rule:container_project_match and "
"rule:container_creator_user"),
]
def list_rules():
return rules


@@ -1,51 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('certificate_authorities:get_limited',
'rule:all_users'),
policy.RuleDefault('certificate_authorities:get_all',
'rule:admin'),
policy.RuleDefault('certificate_authorities:post',
'rule:admin'),
policy.RuleDefault('certificate_authorities:get_preferred_ca',
'rule:all_users'),
policy.RuleDefault('certificate_authorities:get_global_preferred_ca',
'rule:service_admin'),
policy.RuleDefault('certificate_authorities:unset_global_preferred',
'rule:service_admin'),
policy.RuleDefault('certificate_authority:delete',
'rule:admin'),
policy.RuleDefault('certificate_authority:get',
'rule:all_users'),
policy.RuleDefault('certificate_authority:get_cacert',
'rule:all_users'),
policy.RuleDefault('certificate_authority:get_ca_cert_chain',
'rule:all_users'),
policy.RuleDefault('certificate_authority:get_projects',
'rule:service_admin'),
policy.RuleDefault('certificate_authority:add_to_project',
'rule:admin'),
policy.RuleDefault('certificate_authority:remove_from_project',
'rule:admin'),
policy.RuleDefault('certificate_authority:set_preferred',
'rule:admin'),
policy.RuleDefault('certificate_authority:set_global_preferred',
'rule:service_admin'),
]
def list_rules():
return rules


@@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('consumer:get',
'rule:admin or rule:observer or rule:creator or '
'rule:audit or rule:container_non_private_read or '
'rule:container_project_creator or '
'rule:container_project_admin or '
'rule:container_acl_read'),
policy.RuleDefault('consumers:get',
'rule:admin or rule:observer or rule:creator or '
'rule:audit or rule:container_non_private_read or '
'rule:container_project_creator or '
'rule:container_project_admin or '
'rule:container_acl_read'),
policy.RuleDefault('consumers:post',
'rule:admin or rule:container_non_private_read or '
'rule:container_project_creator or '
'rule:container_project_admin or '
'rule:container_acl_read'),
policy.RuleDefault('consumers:delete',
'rule:admin or rule:container_non_private_read or '
'rule:container_project_creator or '
'rule:container_project_admin or '
'rule:container_acl_read'),
]
def list_rules():
return rules


@@ -1,37 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('containers:post',
'rule:admin_or_creator'),
policy.RuleDefault('containers:get',
'rule:all_but_audit'),
policy.RuleDefault('container:get',
'rule:container_non_private_read or '
'rule:container_project_creator or '
'rule:container_project_admin or '
'rule:container_acl_read'),
policy.RuleDefault('container:delete',
'rule:container_project_admin or '
'rule:container_project_creator'),
policy.RuleDefault('container_secret:post',
'rule:admin'),
policy.RuleDefault('container_secret:delete',
'rule:admin'),
]
def list_rules():
return rules


@@ -1,31 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('orders:post',
'rule:admin_or_creator'),
policy.RuleDefault('orders:get',
'rule:all_but_audit'),
policy.RuleDefault('order:get',
'rule:all_users'),
policy.RuleDefault('order:put',
'rule:admin_or_creator'),
policy.RuleDefault('order:delete',
'rule:admin'),
]
def list_rules():
return rules


@@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('quotas:get',
'rule:all_users'),
policy.RuleDefault('project_quotas:get',
'rule:service_admin'),
policy.RuleDefault('project_quotas:put',
'rule:service_admin'),
policy.RuleDefault('project_quotas:delete',
'rule:service_admin'),
]
def list_rules():
return rules


@@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('secret_meta:get',
'rule:all_but_audit'),
policy.RuleDefault('secret_meta:post',
'rule:admin_or_creator'),
policy.RuleDefault('secret_meta:put',
'rule:admin_or_creator'),
policy.RuleDefault('secret_meta:delete',
'rule:admin_or_creator'),
]
def list_rules():
return rules


@@ -1,41 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('secret:decrypt',
'rule:secret_decrypt_non_private_read or '
'rule:secret_project_creator or '
'rule:secret_project_admin or '
'rule:secret_acl_read'),
policy.RuleDefault('secret:get',
'rule:secret_non_private_read or '
'rule:secret_project_creator or '
'rule:secret_project_admin or '
'rule:secret_acl_read'),
policy.RuleDefault('secret:put',
'rule:admin_or_creator and '
'rule:secret_project_match'),
policy.RuleDefault('secret:delete',
'rule:secret_project_admin or '
'rule:secret_project_creator'),
policy.RuleDefault('secrets:post',
'rule:admin_or_creator'),
policy.RuleDefault('secrets:get',
'rule:all_but_audit'),
]
def list_rules():
return rules


@@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('secretstores:get',
'rule:admin'),
policy.RuleDefault('secretstores:get_global_default',
'rule:admin'),
policy.RuleDefault('secretstores:get_preferred',
'rule:admin'),
policy.RuleDefault('secretstore_preferred:post',
'rule:admin'),
policy.RuleDefault('secretstore_preferred:delete',
'rule:admin'),
policy.RuleDefault('secretstore:get',
'rule:admin'),
]
def list_rules():
return rules


@@ -1,29 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('transport_key:get',
'rule:all_users'),
policy.RuleDefault('transport_key:delete',
'rule:admin'),
policy.RuleDefault('transport_keys:get',
'rule:all_users'),
policy.RuleDefault('transport_keys:post',
'rule:admin'),
]
def list_rules():
return rules


@@ -1,23 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_policy import policy
rules = [
policy.RuleDefault('version:get',
'@')
]
def list_rules():
return rules


@@ -1,203 +0,0 @@
# Copyright (c) 2015 Cisco Systems
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from barbican.common import config
from barbican.common import exception
from barbican.common import hrefs
from barbican.common import resources as res
from barbican.model import repositories as repo
# All negative values will be treated as unlimited
UNLIMITED_VALUE = -1
DISABLED_VALUE = 0
CONF = config.CONF
class QuotaDriver(object):
"""Driver to enforce quotas and obtain quota information."""
def __init__(self):
self.repo = repo.get_project_quotas_repository()
def _get_resources(self):
"""List of resources that can be constrained by a quota"""
return ['secrets', 'orders', 'containers', 'consumers', 'cas']
def _get_defaults(self):
"""Return list of default quotas"""
quotas = {
'secrets': CONF.quotas.quota_secrets,
'orders': CONF.quotas.quota_orders,
'containers': CONF.quotas.quota_containers,
'consumers': CONF.quotas.quota_consumers,
'cas': CONF.quotas.quota_cas
}
return quotas
def _extract_project_quotas(self, project_quotas_model):
"""Convert project quotas model to Python dict
:param project_quotas_model: Model containing quota information
:return: Python dict containing quota information
"""
resp_quotas = {}
for resource in self._get_resources():
resp_quotas[resource] = getattr(project_quotas_model, resource)
return resp_quotas
def _compute_effective_quotas(self, configured_quotas):
"""Merge configured and default quota information
When a quota value is not set, use the default value
:param configured_quotas: configured quota values
:return: effective quotas
"""
default_quotas = self._get_defaults()
resp_quotas = dict(configured_quotas)
for resource, quota in resp_quotas.items():
if quota is None:
resp_quotas[resource] = default_quotas[resource]
return resp_quotas
def get_effective_quotas(self, external_project_id):
"""Collect and return the effective quotas for a project
:param external_project_id: external ID of current project
:return: dict with effective quotas
"""
try:
retrieved_project_quotas = self.repo.get_by_external_project_id(
external_project_id)
except exception.NotFound:
resp_quotas = self._get_defaults()
else:
resp_quotas = self._compute_effective_quotas(
self._extract_project_quotas(retrieved_project_quotas))
return resp_quotas
def is_unlimited_value(self, v):
"""A helper method to check for unlimited value."""
return v <= UNLIMITED_VALUE
def is_disabled_value(self, v):
"""A helper method to check for disabled value."""
return v == DISABLED_VALUE
def set_project_quotas(self, external_project_id, parsed_project_quotas):
"""Create a new database entry, or update existing one
:param external_project_id: ID of project whose quotas are to be set
:param parsed_project_quotas: quota values to save in database
:return: None
"""
project = res.get_or_create_project(external_project_id)
self.repo.create_or_update_by_project_id(project.id,
parsed_project_quotas)
# commit to DB to avoid async issues if the enforcer is called from
# another thread
repo.commit()
def get_project_quotas(self, external_project_id):
"""Retrieve configured quota information from database
    :param external_project_id: ID of the project whose values are wanted
:return: the values
"""
try:
retrieved_project_quotas = self.repo.get_by_external_project_id(
external_project_id)
except exception.NotFound:
return None
resp_quotas = self._extract_project_quotas(retrieved_project_quotas)
resp = {'project_quotas': resp_quotas}
return resp
def get_project_quotas_list(self, offset_arg=None, limit_arg=None):
"""Return a dict and list of all configured quota information
:return: a dict and list of a page of quota config info
"""
retrieved_project_quotas, offset, limit, total =\
self.repo.get_by_create_date(offset_arg=offset_arg,
limit_arg=limit_arg,
suppress_exception=True)
resp_quotas = []
for quotas in retrieved_project_quotas:
list_item = {'project_id': quotas.project.external_id,
'project_quotas':
self._extract_project_quotas(quotas)}
resp_quotas.append(list_item)
resp = {'project_quotas': resp_quotas}
resp_overall = hrefs.add_nav_hrefs(
'project_quotas', offset, limit, total, resp)
resp_overall.update({'total': total})
return resp_overall
def delete_project_quotas(self, external_project_id):
"""Remove configured quota information from database
:param external_project_id: ID of project whose quotas will be deleted
:raises NotFound: if project has no configured values
:return: None
"""
self.repo.delete_by_external_project_id(external_project_id)
def get_quotas(self, external_project_id):
"""Get the effective quotas for a project
Effective quotas are based on both configured and default values
:param external_project_id: ID of project for which to get quotas
:return: dict of effective quota values
"""
resp_quotas = self.get_effective_quotas(external_project_id)
resp = {'quotas': resp_quotas}
return resp
class QuotaEnforcer(object):
"""Checks quotas limits and current resource usage levels"""
def __init__(self, resource_type, resource_repo):
self.quota_driver = QuotaDriver()
self.resource_type = resource_type
self.resource_repo = resource_repo
def enforce(self, project):
"""Enforce the quota limit for the resource
:param project: the project object corresponding to the sender
:raises QuotaReached: exception raised if quota forbids request
:return: None
"""
quotas = self.quota_driver.get_effective_quotas(project.external_id)
quota = quotas[self.resource_type]
reached = False
count = 0
if self.quota_driver.is_unlimited_value(quota):
pass
elif self.quota_driver.is_disabled_value(quota):
reached = True
else:
count = self.resource_repo.get_count(project.id)
if count >= quota:
reached = True
if reached:
raise exception.QuotaReached(
external_project_id=project.external_id,
resource_type=self.resource_type,
quota=quota)
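The enforcement decision in `QuotaEnforcer.enforce()` can be isolated from the driver and repository plumbing. A hypothetical standalone sketch of the same unlimited/disabled/count logic (matching `UNLIMITED_VALUE = -1` and `DISABLED_VALUE = 0` above):

```python
def quota_reached(quota, current_count):
    """Mirror of the decision in QuotaEnforcer.enforce()."""
    if quota < 0:
        # Any negative value is treated as unlimited: never reached.
        return False
    if quota == 0:
        # A zero quota disables the resource: always reached.
        return True
    # Otherwise compare current usage against the limit.
    return current_count >= quota
```

Note the asymmetry: an unlimited quota skips the repository count entirely, while a disabled quota rejects the request without counting either; only a positive quota costs a database query in the real enforcer.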


@@ -1,50 +0,0 @@
# Copyright (c) 2013-2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Shared business logic.
"""
from barbican.common import utils
from barbican.model import models
from barbican.model import repositories
LOG = utils.getLogger(__name__)
GLOBAL_PREFERRED_PROJECT_ID = "GLOBAL_PREFERRED"
def get_or_create_global_preferred_project():
return get_or_create_project(GLOBAL_PREFERRED_PROJECT_ID)
def get_or_create_project(project_id):
"""Returns project with matching project_id.
Creates it if it does not exist.
:param project_id: The external-to-Barbican ID for this project.
:return: Project model instance
"""
project_repo = repositories.get_project_repository()
project = project_repo.find_by_external_project_id(project_id,
suppress_exception=True)
if not project:
LOG.debug('Creating project for %s', project_id)
project = models.Project()
project.external_id = project_id
project.status = models.States.ACTIVE
project_repo.create_from(project)
return project


@@ -1,207 +0,0 @@
# Copyright (c) 2013-2014 Rackspace, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Common utilities for Barbican.
"""
import collections
import importlib
import mimetypes
import uuid
from oslo_log import log
from oslo_utils import uuidutils
import pecan
import six
from six.moves.urllib import parse
from barbican.common import config
from barbican import i18n as u
CONF = config.CONF
# Current API version
API_VERSION = 'v1'
# Added here to remove cyclic dependency.
# In barbican.model.models module SecretType.OPAQUE was imported from
# barbican.plugin.interface.secret_store which introduces a cyclic dependency
# if `secret_store` plugin needs to use db model classes. So moving shared
# value to another common python module which is already imported in both.
SECRET_TYPE_OPAQUE = "opaque"
def _do_allow_certain_content_types(func, content_types_list=[]):
# Allows you to bypass pecan's content-type restrictions
cfg = pecan.util._cfg(func)
cfg.setdefault('content_types', {})
cfg['content_types'].update((value, '')
for value in content_types_list)
return func
def allow_certain_content_types(*content_types_list):
def _wrapper(func):
return _do_allow_certain_content_types(func, content_types_list)
return _wrapper
def allow_all_content_types(f):
return _do_allow_certain_content_types(f, mimetypes.types_map.values())
def get_base_url_from_request():
"""Derive base url from wsgi request if CONF.host_href is not set
    Use host_href as the base URL if it is set in barbican.conf.
    If it is not set, derive the value from the WSGI request. The WSGI
    request uses the HOST header, or the HTTP_X_FORWARDED_FOR header when
    behind a proxy, for the host + port part of its URL. Proxies can also
    set the HTTP_X_FORWARDED_PROTO header to indicate http vs. https.
    Some unit tests do not have a pecan context, which is why the request
    attribute check is done on the pecan instance.
"""
if not CONF.host_href and hasattr(pecan.request, 'url'):
p_url = parse.urlsplit(pecan.request.url)
base_url = '%s://%s' % (p_url.scheme, p_url.netloc)
return base_url
else: # when host_href is set or flow is not within wsgi request context
return CONF.host_href
def hostname_for_refs(resource=None):
"""Return the HATEOAS-style return URI reference for this service."""
base_url = get_base_url_from_request()
ref = ['{base}/{version}'.format(base=base_url, version=API_VERSION)]
if resource:
ref.append('/' + resource)
return ''.join(ref)
# Return a logger instance.
# Note: Centralize access to the logger to avoid the dreaded
# 'ArgsAlreadyParsedError: arguments already parsed: cannot
# register CLI option'
# error.
def getLogger(name):
return log.getLogger(name)
def get_accepted_encodings(req):
"""Returns a list of client acceptable encodings sorted by q value.
For details see: http://tools.ietf.org/html/rfc2616#section-14.3
:param req: request object
:returns: list of client acceptable encodings sorted by q value.
"""
header = req.get_header('Accept-Encoding')
return get_accepted_encodings_direct(header)
def get_accepted_encodings_direct(content_encoding_header):
"""Returns a list of client acceptable encodings sorted by q value.
For details see: http://tools.ietf.org/html/rfc2616#section-14.3
    :param content_encoding_header: the Accept-Encoding header value
:returns: list of client acceptable encodings sorted by q value.
"""
if content_encoding_header is None:
return None
Encoding = collections.namedtuple('Encoding', ['coding', 'quality'])
encodings = list()
for enc in content_encoding_header.split(','):
if ';' in enc:
coding, qvalue = enc.split(';')
try:
qvalue = qvalue.split('=')[1]
quality = float(qvalue.strip())
except ValueError:
# can't convert quality to float
return None
if quality > 1.0 or quality < 0.0:
# quality is outside valid range
return None
if quality > 0.0:
encodings.append(Encoding(coding.strip(), quality))
else:
encodings.append(Encoding(enc.strip(), 1))
# Sort the encodings by quality
encodings = sorted(encodings, key=lambda e: e.quality, reverse=True)
return [encoding.coding for encoding in encodings]
def generate_fullname_for(instance):
"""Produce a fully qualified class name for the specified instance.
:param instance: The instance to generate information from.
:return: A string providing the package.module information for the
instance.
:raises: ValueError if the given instance is null
"""
if not instance:
raise ValueError(u._("Cannot generate a fullname for a null instance"))
module = type(instance).__module__
class_name = type(instance).__name__
if module is None or module == six.moves.builtins.__name__:
return class_name
return "{module}.{class_name}".format(module=module, class_name=class_name)
def get_class_for(module_name, class_name):
"""Create a Python class from its text-specified components."""
# Load the module via name, raising ImportError if module cannot be
# loaded.
python_module = importlib.import_module(module_name)
# Load and return the resolved Python class, raising AttributeError if
# class cannot be found.
return getattr(python_module, class_name)
def generate_uuid():
return uuidutils.generate_uuid()
def is_multiple_backends_enabled():
secretstore_conf = config.get_module_config('secretstore')
return secretstore_conf.secretstore.enable_multiple_secret_stores
def validate_id_is_uuid(input_id, version=4):
"""Validates provided id is uuid4 format value.
Returns true when provided id is a valid version 4 uuid otherwise
returns False.
This validation is to be used only for ids which are generated by barbican
(e.g. not for keystone project_id)
"""
try:
value = uuid.UUID(input_id, version=version)
except Exception:
return False
return str(value) == input_id
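The round-trip comparison is the interesting part: `uuid.UUID(input_id, version=4)` silently rewrites the version and variant bits, so an id can *parse* successfully yet not be a canonical v4 string. Restated here with only the stdlib so the pass/fail cases can be exercised:

```python
import uuid


def validate_id_is_uuid(input_id, version=4):
    # uuid.UUID(..., version=4) forces the version and variant bits, so
    # comparing str(value) back against the input rejects ids that parse
    # but are not canonical lowercase version-4 UUIDs.
    try:
        value = uuid.UUID(input_id, version=version)
    except Exception:
        return False
    return str(value) == input_id
```

A freshly generated `str(uuid.uuid4())` passes; `'12345678-1234-5678-1234-567812345678'` fails because its version nibble is 5 and gets rewritten to 4, and an uppercase v4 string fails because `str()` normalizes to lowercase.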

File diff suppressed because it is too large

View File

@@ -1,53 +0,0 @@
# Copyright 2011-2012 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_context
from oslo_policy import policy
from barbican.common import config
CONF = config.CONF
class RequestContext(oslo_context.context.RequestContext):
"""User security context object
Stores information about the security context under which the user
accesses the system, as well as additional request information.
"""
def __init__(self, policy_enforcer=None, project=None, **kwargs):
# prefer usage of 'project' instead of 'tenant'
if project:
kwargs['tenant'] = project
self.project = project
self.policy_enforcer = policy_enforcer or policy.Enforcer(CONF)
super(RequestContext, self).__init__(**kwargs)
def to_dict(self):
out_dict = super(RequestContext, self).to_dict()
out_dict['roles'] = self.roles
# NOTE(jaosorior): For now, the oslo_context library uses 'tenant'
# instead of project. But in case this changes, this will still issue
# the dict we expect, which would contain 'project'.
if out_dict.get('tenant'):
out_dict['project'] = out_dict['tenant']
out_dict.pop('tenant')
return out_dict
@classmethod
def from_dict(cls, values):
return cls(**values)
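The tenant-to-project aliasing in `to_dict` can be shown without `oslo_context`. A minimal sketch (the helper name is hypothetical, not part of Barbican):

```python
def alias_tenant_as_project(out_dict):
    # Mirrors RequestContext.to_dict: oslo_context serializes the project
    # under the legacy 'tenant' key, so rename it before handing the
    # dict on to callers that expect 'project'.
    if out_dict.get('tenant'):
        out_dict['project'] = out_dict.pop('tenant')
    return out_dict
```

Dicts without a `tenant` key pass through unchanged, which matches the guarded `if out_dict.get('tenant')` in the method above.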

View File

@@ -1,293 +0,0 @@
# Copyright (c) 2016, GohighSec
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import ast
import re
import six
import pep8
"""
Guidelines for writing new hacking checks
- Use only for Barbican-specific tests. General OpenStack tests
should be submitted to the common 'hacking' module.
- Pick numbers in the range B3xx. Find the current test with
the highest allocated number and then pick the next value.
- Keep the test method code in the source file ordered based
on the B3xx value.
- List the new rule in the top level HACKING.rst file
- Add test cases for each new rule to barbican/tests/test_hacking.py
"""
oslo_namespace_imports = re.compile(r"from[\s]*oslo[.](.*)")
dict_constructor_with_list_copy_re = re.compile(r".*\bdict\((\[)?(\(|\[)")
assert_no_xrange_re = re.compile(r"\s*xrange\s*\(")
assert_True = re.compile(r".*assertEqual\(True, .*\)")
assert_None = re.compile(r".*assertEqual\(None, .*\)")
assert_Not_Equal = re.compile(r".*assertNotEqual\(None, .*\)")
assert_Is_Not = re.compile(r".*assertIsNot\(None, .*\)")
no_log_warn = re.compile(r".*LOG.warn\(.*\)")
class BaseASTChecker(ast.NodeVisitor):
"""Provides a simple framework for writing AST-based checks.
Subclasses should implement visit_* methods like any other AST visitor
implementation. When they detect an error for a particular node the
method should call ``self.add_error(offending_node)``. Details about
where in the code the error occurred will be pulled from the node
object.
Subclasses should also provide a class variable named CHECK_DESC to
be used for the human readable error message.
"""
CHECK_DESC = 'No check message specified'
def __init__(self, tree, filename):
"""This object is created automatically by pep8.
:param tree: an AST tree
:param filename: name of the file being analyzed
(ignored by our checks)
"""
self._tree = tree
self._errors = []
def run(self):
"""Called automatically by pep8."""
self.visit(self._tree)
return self._errors
def add_error(self, node, message=None):
"""Add an error caused by a node to the list of errors for pep8."""
message = message or self.CHECK_DESC
error = (node.lineno, node.col_offset, message, self.__class__)
self._errors.append(error)
def _check_call_names(self, call_node, names):
if isinstance(call_node, ast.Call):
if isinstance(call_node.func, ast.Name):
if call_node.func.id in names:
return True
return False
class CheckLoggingFormatArgs(BaseASTChecker):
"""Check for improper use of logging format arguments.
LOG.debug("Volume %s caught fire and is at %d degrees C and climbing.",
('volume1', 500))
The format arguments should not be a tuple as it is easy to miss.
"""
CHECK_DESC = 'B310 Log method arguments should not be a tuple.'
LOG_METHODS = [
'debug', 'info',
'warn', 'warning',
'error', 'exception',
'critical', 'fatal',
'trace', 'log'
]
def _find_name(self, node):
"""Return the fully qualified name or a Name or Attribute."""
if isinstance(node, ast.Name):
return node.id
elif (isinstance(node, ast.Attribute)
and isinstance(node.value, (ast.Name, ast.Attribute))):
method_name = node.attr
obj_name = self._find_name(node.value)
if obj_name is None:
return None
return obj_name + '.' + method_name
elif isinstance(node, six.string_types):
return node
else: # could be Subscript, Call or many more
return None
def visit_Call(self, node):
"""Look for the 'LOG.*' calls."""
# extract the obj_name and method_name
if isinstance(node.func, ast.Attribute):
obj_name = self._find_name(node.func.value)
if isinstance(node.func.value, ast.Name):
method_name = node.func.attr
elif isinstance(node.func.value, ast.Attribute):
obj_name = self._find_name(node.func.value)
method_name = node.func.attr
else: # could be Subscript, Call or many more
return super(CheckLoggingFormatArgs, self).generic_visit(node)
# obj must be a logger instance and method must be a log helper
if (obj_name != 'LOG'
or method_name not in self.LOG_METHODS):
return super(CheckLoggingFormatArgs, self).generic_visit(node)
# the call must have arguments
if not len(node.args):
return super(CheckLoggingFormatArgs, self).generic_visit(node)
# any argument should not be a tuple
for arg in node.args:
if isinstance(arg, ast.Tuple):
self.add_error(arg)
return super(CheckLoggingFormatArgs, self).generic_visit(node)
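The tuple mistake B310 targets is visible directly in the AST; a minimal sketch using only the stdlib `ast` module (the helper name is illustrative, not part of the checker):

```python
import ast


def log_args_that_are_tuples(source):
    # Parse a single LOG.<method>(...) call expression and return the
    # positional arguments that are literal tuples -- the pattern that
    # CheckLoggingFormatArgs flags with add_error().
    call = ast.parse(source).body[0].value
    return [arg for arg in call.args if isinstance(arg, ast.Tuple)]


# The format args are wrapped in a tuple -- this is the bug B310 catches.
bad = log_args_that_are_tuples(
    "LOG.debug('Volume %s is at %d degrees C.', ('volume1', 500))")

# The same call with the arguments passed positionally is clean.
ok = log_args_that_are_tuples(
    "LOG.debug('Volume %s is at %d degrees C.', 'volume1', 500)")
```

The buggy form logs the whole tuple into the first `%s` slot and then fails on the unmatched `%d`, which is why the checker rejects any `ast.Tuple` among the call's arguments.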
class CheckForStrUnicodeExc(BaseASTChecker):
"""Checks for the use of str() or unicode() on an exception.
This currently only handles the case where str() or unicode()
is used in the scope of an exception handler. If the exception
is passed into a function, returned from an assertRaises, or
used on an exception created in the same scope, this does not
catch it.
"""
CHECK_DESC = ('B314 str() and unicode() cannot be used on an '
'exception. Remove or use six.text_type()')
def __init__(self, tree, filename):
super(CheckForStrUnicodeExc, self).__init__(tree, filename)
self.name = []
self.already_checked = []
# Python 2
def visit_TryExcept(self, node):
for handler in node.handlers:
if handler.name:
self.name.append(handler.name.id)
super(CheckForStrUnicodeExc, self).generic_visit(node)
self.name = self.name[:-1]
else:
super(CheckForStrUnicodeExc, self).generic_visit(node)
# Python 3
def visit_ExceptHandler(self, node):
if node.name:
self.name.append(node.name)
super(CheckForStrUnicodeExc, self).generic_visit(node)
self.name = self.name[:-1]
else:
super(CheckForStrUnicodeExc, self).generic_visit(node)
def visit_Call(self, node):
if self._check_call_names(node, ['str', 'unicode']):
if node not in self.already_checked:
self.already_checked.append(node)
if isinstance(node.args[0], ast.Name):
if node.args[0].id in self.name:
self.add_error(node.args[0])
super(CheckForStrUnicodeExc, self).generic_visit(node)
def check_oslo_namespace_imports(logical_line, physical_line, filename):
"""'oslo_' should be used instead of 'oslo.'
B317
"""
if pep8.noqa(physical_line):
return
if re.match(oslo_namespace_imports, logical_line):
msg = ("B317: '%s' must be used instead of '%s'.") % (
logical_line.replace('oslo.', 'oslo_'),
logical_line)
yield(0, msg)
def dict_constructor_with_list_copy(logical_line):
"""Use a dict comprehension instead of a dict constructor
B318
"""
msg = ("B318: Must use a dict comprehension instead of a dict constructor"
" with a sequence of key-value pairs."
)
if dict_constructor_with_list_copy_re.match(logical_line):
yield (0, msg)
def no_xrange(logical_line):
"""Do not use 'xrange'
B319
"""
if assert_no_xrange_re.match(logical_line):
yield(0, "B319: Do not use xrange().")
def validate_assertTrue(logical_line):
"""Use 'assertTrue' instead of 'assertEqual'
B312
"""
if re.match(assert_True, logical_line):
msg = ("B312: Unit tests should use assertTrue(value) instead"
" of using assertEqual(True, value).")
yield(0, msg)
def validate_assertIsNone(logical_line):
"""Use 'assertIsNone' instead of 'assertEqual'
B311
"""
if re.match(assert_None, logical_line):
msg = ("B311: Unit tests should use assertIsNone(value) instead"
" of using assertEqual(None, value).")
yield(0, msg)
def no_log_warn_check(logical_line):
"""Disallow 'LOG.warn'
B320
"""
msg = ("B320: LOG.warn is deprecated, please use LOG.warning!")
if re.match(no_log_warn, logical_line):
yield(0, msg)
def validate_assertIsNotNone(logical_line):
"""Use 'assertIsNotNone'
B321
"""
if re.match(assert_Not_Equal, logical_line) or \
re.match(assert_Is_Not, logical_line):
msg = ("B321: Unit tests should use assertIsNotNone(value) instead"
" of using assertNotEqual(None, value) or"
" assertIsNot(None, value).")
yield(0, msg)
def factory(register):
register(CheckForStrUnicodeExc)
register(CheckLoggingFormatArgs)
register(check_oslo_namespace_imports)
register(dict_constructor_with_list_copy)
register(no_xrange)
register(validate_assertTrue)
register(validate_assertIsNone)
register(no_log_warn_check)
register(validate_assertIsNotNone)
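The regex-based checks registered above are plain generator functions over logical lines, so they can be exercised directly. A sketch restating one of the patterns (B312) outside the flake8 plugin machinery:

```python
import re

# Same pattern as the module-level assert_True regex above.
assert_True = re.compile(r".*assertEqual\(True, .*\)")


def validate_assertTrue(logical_line):
    # B312: prefer assertTrue(value) over assertEqual(True, value).
    if re.match(assert_True, logical_line):
        msg = ("B312: Unit tests should use assertTrue(value) instead"
               " of using assertEqual(True, value).")
        yield (0, msg)


hits = list(validate_assertTrue("self.assertEqual(True, result)"))
misses = list(validate_assertTrue("self.assertTrue(result)"))
```

Each hit is an `(offset, message)` pair, which is the shape flake8/pep8 expects from a registered logical-line check.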

View File

@@ -1,20 +0,0 @@
# Copyright 2010-2011 OpenStack LLC.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import oslo_i18n as i18n
_translators = i18n.TranslatorFactory(domain='barbican')
# The translation function using the well-known name "_"
_ = _translators.primary

View File

@@ -1,37 +0,0 @@
# Robert Simai <robert.simai@suse.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: barbican 3.0.1.dev34\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-10-19 06:57+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-10-14 01:57+0000\n"
"Last-Translator: Robert Simai <robert.simai@suse.com>\n"
"Language-Team: German\n"
"Language: de\n"
"X-Generator: Zanata 3.7.3\n"
"Plural-Forms: nplurals=2; plural=(n != 1)\n"
#, python-format
msgid "SQL connection failed. %d attempts left."
msgstr "SQL-Verbindung fehlgeschlagen. %d Versuche übrig."
msgid "TLSv1_2 is not present on the System"
msgstr "TLSv1_2 ist auf dem System nicht vorhanden"
msgid ""
"This plugin is NOT meant for a production environment. This is meant just "
"for development and testing purposes. Please use another plugin for "
"production."
msgstr ""
"Dieses Plugin ist nicht für Produktionsumgebungen geeignet. Es ist gedacht "
"für Entwicklung und Testzwecke. Bitte verwenden Sie ein anderes Plugin."
msgid ""
"nss_db_path was not provided so the crypto provider functions were not "
"initialized."
msgstr ""
"nss_db_path wurde nicht angegeben. Somit wurden die Crypto-"
"Anbieterfunktionen nicht initialisiert."

View File

@@ -1,125 +0,0 @@
# Translations template for barbican.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the barbican project.
#
# Translators:
# DuanXin <1145833162@qq.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# Jiong Liu <liujiong@gohighsec.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: barbican 3.0.0.0b3.dev1\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-17 22:54+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-10 12:05+0000\n"
"Last-Translator: Jiong Liu <liujiong@gohighsec.com>\n"
"Language: zh-CN\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Chinese (China)\n"
#, python-format
msgid "Could not perform processing for task '%s'."
msgstr "无法为任务 '%s' 执行处理。"
#, python-format
msgid "Could not process after successfully executing task '%s'."
msgstr "在成功执行任务 '%s' 后无法处理。"
#, python-format
msgid "Could not retrieve information needed to process task '%s'."
msgstr "无法检索到所需的信息来处理任务 '%s'。"
#, python-format
msgid "ERROR adding CA from plugin: %s"
msgstr "向插件:%s添加CA出错"
#, python-format
msgid "ERROR from KMIP server: %s"
msgstr "来自KMIP服务器%s的错误"
#, python-format
msgid "ERROR getting CA from plugin: %s"
msgstr "从插件:%s获取CA出错"
msgid "Error opening or writing to client"
msgstr "打开或者写入客户端错误"
#, python-format
msgid ""
"Error processing Keystone event, project_id=%(project_id)s, event resource="
"%(resource)s, event operation=%(operation)s, status=%(status)s, error "
"message=%(message)s"
msgstr ""
"处理 Keystone 事件错误项目id=%(project_id)s事件资源=%(resource)s事件操"
"作=%(operation)s状态=%(status)s错误信息=%(message)s"
#, python-format
msgid "Not found for %s"
msgstr "没有找到%s"
msgid "Problem deleting consumer"
msgstr "删除用户时出错"
msgid "Problem deleting container"
msgstr "删除容器时出错"
msgid "Problem deleting transport_key"
msgstr "删除传输密匙时出错"
#, python-format
msgid ""
"Problem enqueuing method '%(name)s' with args '%(args)s' and kwargs "
"'%(kwargs)s'."
msgstr ""
"队列问题的方法 '%(name)s' 和 args '%(args)s' 和 kwargs '%(kwargs)s'。"
msgid "Problem finding project related entity to delete"
msgstr "寻找项目相关的实体来删除时出错"
#, python-format
msgid "Problem getting Project %s"
msgstr "获取项目%s时出错"
#, python-format
msgid "Problem getting container %s"
msgstr "问题获得容器 %s"
#, python-format
msgid "Problem getting secret %s"
msgstr "获取秘密%s出现问题"
#, python-format
msgid "Problem handling an error for task '%s', raising original exception."
msgstr "为任务 '%s' 处理错误时出错,原始异常增加。"
msgid "Problem loading request JSON."
msgstr "JSON请求加载时出错"
msgid "Problem reading request JSON stream."
msgstr "JSON请求流中读取数据时出错。"
msgid "Problem saving entity for create"
msgstr "为创建保存实体时出错"
#, python-format
msgid "Problem seen creating plugin: '%s'"
msgstr "创建插件:'%s'出错"
msgid "Problem seen processing scheduled retry tasks"
msgstr "处理调度的重试任务有问题"
#, python-format
msgid "Problem seen processing worker task: '%s'"
msgstr "处理工作任务:'%s' 时出现问题。"
#, python-format
msgid "Suppressing exception while trying to process task '%s'."
msgstr "当尝试去处理任务 '%s' 时清除异常。"
msgid "Webob error seen"
msgstr "Webob出错"

View File

@@ -1,231 +0,0 @@
# Translations template for barbican.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the barbican project.
#
# Translators:
# DuanXin <1145833162@qq.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# Jiong Liu <liujiong@gohighsec.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: barbican 3.0.0.0b3.dev1\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-17 22:54+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-11 10:21+0000\n"
"Last-Translator: Jiong Liu <liujiong@gohighsec.com>\n"
"Language: zh-CN\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Chinese (China)\n"
msgid "Auto-creating barbican registry DB"
msgstr "自动创建 barbican 注册表数据库"
msgid "Barbican app created and initialized"
msgstr "barbican应用已创建和初始化"
#, python-format
msgid "Completed worker task (post-commit): '%s'"
msgstr "完成worker任务post-commit:'%s'"
#, python-format
msgid "Completed worker task: '%s'"
msgstr "完成worker任务'%s'"
#, python-format
msgid "Created a consumer for project: %s"
msgstr "为项目:%s创建消费者"
#, python-format
msgid "Created a container for project: %s"
msgstr "为项目:%s创建容器"
#, python-format
msgid "Created a container secret for project: %s"
msgstr "为项目:%s创建一个容器秘密"
#, python-format
msgid "Created a secret for project: %s"
msgstr "为项目:%s创建一个秘密"
#, python-format
msgid "Created a sub CA for project: %s"
msgstr "为项目:%s创建子CA"
msgid "Delete Project Quotas"
msgstr "删除项目配额"
msgid "Delete Project Quotas - Project not found"
msgstr "删除项目配额-项目未找到"
#, python-format
msgid "Deleted CA for project: %s"
msgstr "从项目:%s中删除CA"
#, python-format
msgid "Deleted a consumer for project: %s"
msgstr "从项目:%s中删除消费者"
#, python-format
msgid "Deleted container for project: %s"
msgstr "从项目:%s中删除容器"
#, python-format
msgid "Deleted container secret for project: %s"
msgstr "从项目:%s中删除容器秘密"
#, python-format
msgid "Deleted secret for project: %s"
msgstr "从项目:%s中删除秘密"
#, python-format
msgid ""
"Done processing '%(total)s' tasks, will check again in '%(next)s' seconds."
msgstr "'%(total)s'个任务处理完成,在'%(next)s'秒之后再次检查。"
msgid "Going to use TLS1.2..."
msgstr "即将使用TLS1.2..."
msgid "Halting the TaskServer"
msgstr "停止任务服务器"
msgid "Invalid data read from expiration file"
msgstr "预留文件中读取到无效的数据"
msgid "Invoking cancel_certificate_request()"
msgstr "调用 cancel_certificate_request()"
msgid "Invoking check_certificate_status()"
msgstr "调用 check_certificate_status()"
msgid "Invoking issue_certificate_request()"
msgstr "调用 issue_certificate_request()"
msgid "Invoking modify_certificate_request()"
msgstr "调用 modify_certificate_request()"
msgid "Invoking notify_ca_is_unavailable()"
msgstr "调用 notify_ca_is_unavailable()"
msgid "Invoking notify_certificate_is_ready()"
msgstr "调用 notify_certificate_is_ready()"
#, python-format
msgid ""
"No action is needed as there are no Barbican resources present for Keystone "
"project_id=%s"
msgstr "没有需要的行动因为没有 Barbican 资源来提供给id=%s的Keystone项目"
msgid "Not auto-creating barbican registry DB"
msgstr "没有自动创建 barbican 注册表数据库"
msgid "Processed request"
msgstr "处理请求"
#, python-format
msgid ""
"Processing check certificate status on order: order ID is '%(order)s' and "
"request ID is '%(request)s'"
msgstr ""
"处理order的证书状态检查任务order id是'%(order)s'请求id是'%(request)s'"
msgid "Processing scheduled retry tasks:"
msgstr "处理调度重试任务:"
#, python-format
msgid ""
"Processing type order: order ID is '%(order)s' and request ID is "
"'%(request)s'"
msgstr "处理类型orderorder id是'%(order)s'请求id是'%(request)s'"
#, python-format
msgid ""
"Processing update order: order ID is '%(order)s' and request ID is "
"'%(request)s'"
msgstr "处理更新orderorder id是'%(order)s'请求id是'%(request)s'"
msgid "Put Project Quotas"
msgstr "更新项目配额"
#, python-format
msgid "Retrieved a consumer for project: %s"
msgstr "从项目:%s中检索消费者"
#, python-format
msgid "Retrieved a consumer list for project: %s"
msgstr "从项目:%s中检索消费者列表"
#, python-format
msgid "Retrieved container for project: %s"
msgstr "从项目:%s中检索容器"
#, python-format
msgid "Retrieved container list for project: %s"
msgstr "从项目:%s中检索容器列表"
#, python-format
msgid "Retrieved secret list for project: %s"
msgstr "检索项目:%s的秘密列表"
#, python-format
msgid "Retrieved secret metadata for project: %s"
msgstr "检索项目:%s的秘密元数据"
#, python-format
msgid "Retrieved secret payload for project: %s"
msgstr "检索项目:%s的秘密的内容"
#, python-format
msgid "Scheduled RPC method for retry: '%s'"
msgstr "重试:'%s'的调度RPC方法"
msgid "Starting the TaskServer"
msgstr "启动任务服务器"
msgid "Sub-CA already deleted"
msgstr "子CA已经删除"
msgid "Sub-CAs are enabled by Dogtag server"
msgstr "子CA已被Dogtag服务激活"
msgid "Sub-CAs are not enabled by Dogtag server"
msgstr "子CA未被Dogtag服务激活"
msgid "Sub-CAs are not supported by Dogtag client"
msgstr "Dogtag客户端不支持子CA"
msgid "Sub-CAs are not supported by Dogtag server"
msgstr "Dogtag服务不支持子CA"
#, python-format
msgid ""
"Successfully completed Barbican resources cleanup for Keystone project_id=%s"
msgstr "成功完成Barbican 资源为id=%s的Keystone项目清理"
#, python-format
msgid ""
"Successfully handled Keystone event, project_id=%(project_id)s, event "
"resource=%(resource)s, event operation=%(operation)s"
msgstr ""
"成功处理Keystone事件项目id=%(project_id)s事件资源=%(resource)s事件操作="
"%(operation)s"
#, python-format
msgid "Task '%s' did not have to be retried"
msgstr "任务'%s'不必重试"
msgid ""
"The nss_db_path provided already exists, so the database is assumed to be "
"already set up."
msgstr "nss_db_path参数已经存在因此数据库应该已经设置完毕。"
#, python-format
msgid "Updated secret for project: %s"
msgstr "更新项目:%s的秘密"
msgid "Updating schema to latest version"
msgstr "最新版本更新模式"

View File

@@ -1,46 +0,0 @@
# Translations template for barbican.
# Copyright (C) 2015 ORGANIZATION
# This file is distributed under the same license as the barbican project.
#
# Translators:
# DuanXin <1145833162@qq.com>, 2015
# Andreas Jaeger <jaegerandi@gmail.com>, 2016. #zanata
# Jiong Liu <liujiong@gohighsec.com>, 2016. #zanata
msgid ""
msgstr ""
"Project-Id-Version: barbican 3.0.0.0b3.dev1\n"
"Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
"POT-Creation-Date: 2016-07-17 22:54+0000\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"PO-Revision-Date: 2016-07-11 10:03+0000\n"
"Last-Translator: Jiong Liu <liujiong@gohighsec.com>\n"
"Language: zh-CN\n"
"Plural-Forms: nplurals=1; plural=0;\n"
"Generated-By: Babel 2.0\n"
"X-Generator: Zanata 3.7.3\n"
"Language-Team: Chinese (China)\n"
#, python-format
msgid "Decrypted secret %s requested using deprecated API call."
msgstr "使用弃用的API调用解密秘密%s"
#, python-format
msgid "SQL connection failed. %d attempts left."
msgstr "SQL连接失败。剩余 %d 次尝试。"
msgid "TLSv1_2 is not present on the System"
msgstr "TLSv1_2未在系统中呈现"
msgid ""
"This plugin is NOT meant for a production environment. This is meant just "
"for development and testing purposes. Please use another plugin for "
"production."
msgstr ""
"这个插件不能用于生产环境。这只是用来开发和测试用的。请使用其他插件用于生产。"
msgid ""
"nss_db_path was not provided so the crypto provider functions were not "
"initialized."
msgstr "由于没有提供nss_db_path参数秘密创建功能无法初始化。"

File diff suppressed because it is too large

View File

@@ -1,375 +0,0 @@
# Copyright (c) 2016 IBM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from barbican.common import config
from barbican.model import models
from barbican.model import repositories as repo
from oslo_log import log
from oslo_utils import timeutils
from sqlalchemy import sql as sa_sql
import datetime
# Import and configure logging.
CONF = config.CONF
log.setup(CONF, 'barbican')
LOG = log.getLogger(__name__)
def cleanup_unassociated_projects():
"""Clean up unassociated projects.
This looks for projects that have no child entries in the dependent
tables and removes them.
"""
LOG.debug("Cleaning up unassociated projects")
session = repo.get_session()
project_children_tables = [models.Order,
models.KEKDatum,
models.Secret,
models.ContainerConsumerMetadatum,
models.Container,
models.PreferredCertificateAuthority,
models.CertificateAuthority,
models.ProjectCertificateAuthority,
models.ProjectQuotas]
children_names = map(lambda child: child.__name__, project_children_tables)
LOG.debug("Children tables for Project table being checked: %s",
str(children_names))
sub_query = session.query(models.Project.id)
for model in project_children_tables:
sub_query = sub_query.outerjoin(model,
models.Project.id == model.project_id)
sub_query = sub_query.filter(model.id == None) # nopep8
sub_query = sub_query.subquery()
sub_query = sa_sql.select([sub_query])
query = session.query(models.Project)
query = query.filter(models.Project.id.in_(sub_query))
delete_count = query.delete(synchronize_session='fetch')
LOG.info("Cleaned up %(delete_count)s entries for "
"%(project_name)s",
{'delete_count': str(delete_count),
'project_name': models.Project.__name__})
return delete_count
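The per-child `outerjoin` plus `filter(model.id == None)` above is an anti-join: keep only parent rows with no match in the child table. A minimal sketch of the same shape in plain SQL via the stdlib `sqlite3` module (the two-table schema here is illustrative, not Barbican's):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE projects (id INTEGER PRIMARY KEY);
    CREATE TABLE secrets  (id INTEGER PRIMARY KEY, project_id INTEGER);
    INSERT INTO projects VALUES (1), (2), (3);
    INSERT INTO secrets  VALUES (10, 1), (11, 3);
""")

# Anti-join: projects with no row in the child table. The cleanup code
# chains one such outerjoin/filter pair per child table, then deletes
# the surviving (fully orphaned) projects.
orphans = con.execute("""
    SELECT p.id
    FROM projects p
    LEFT OUTER JOIN secrets s ON s.project_id = p.id
    WHERE s.id IS NULL
    ORDER BY p.id
""").fetchall()
```

Here only project 2 has no secret, so `orphans` is `[(2,)]`; the SQLAlchemy version applies the same `IS NULL` filter across every child table before issuing the `DELETE`.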
def cleanup_parent_with_no_child(parent_model, child_model,
threshold_date=None):
"""Clean up soft deletions in parent that do not have references in child.
Before running this function, the child table should be cleaned of
soft deletions. This function left outer joins the parent and child
tables and finds the parent entries that do not have a foreign key
reference in the child table. Then the results are filtered by soft
deletions and are cleaned up.
:param parent_model: table class for parent
:param child_model: table class for child which restricts parent deletion
:param threshold_date: soft deletions older than this date will be removed
:returns: total number of entries removed from database
"""
LOG.debug("Cleaning soft deletes for %(parent_name)s without "
"a child in %(child_name)s" %
{'parent_name': parent_model.__name__,
'child_name': child_model.__name__})
session = repo.get_session()
sub_query = session.query(parent_model.id)
sub_query = sub_query.outerjoin(child_model)
sub_query = sub_query.filter(child_model.id == None) # nopep8
sub_query = sub_query.subquery()
sub_query = sa_sql.select([sub_query])
query = session.query(parent_model)
query = query.filter(parent_model.id.in_(sub_query))
query = query.filter(parent_model.deleted)
if threshold_date:
query = query.filter(parent_model.deleted_at <= threshold_date)
delete_count = query.delete(synchronize_session='fetch')
LOG.info("Cleaned up %(delete_count)s entries for %(parent_name)s "
"with no children in %(child_name)s",
{'delete_count': delete_count,
'parent_name': parent_model.__name__,
'child_name': child_model.__name__})
return delete_count
def cleanup_softdeletes(model, threshold_date=None):
"""Remove soft deletions from a table.
:param model: table class to remove soft deletions
:param threshold_date: soft deletions older than this date will be removed
:returns: total number of entries removed from the database
"""
LOG.debug("Cleaning soft deletes: %s", model.__name__)
session = repo.get_session()
query = session.query(model)
query = query.filter_by(deleted=True)
if threshold_date:
query = query.filter(model.deleted_at <= threshold_date)
delete_count = query.delete()
LOG.info("Cleaned up %(delete_count)s entries for %(model_name)s",
{'delete_count': delete_count,
'model_name': model.__name__})
return delete_count
def cleanup_all(threshold_date=None):
"""Clean up the main soft deletable resources.
This function contains an order of calls to
clean up the soft-deletable resources.
:param threshold_date: soft deletions older than this date will be removed
:returns: total number of entries removed from the database
"""
LOG.debug("Cleaning up soft deletions where deletion date"
" is older than %s", str(threshold_date))
total = 0
total += cleanup_softdeletes(models.TransportKey,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.OrderBarbicanMetadatum,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.OrderRetryTask,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.OrderPluginMetadatum,
threshold_date=threshold_date)
total += cleanup_parent_with_no_child(models.Order, models.OrderRetryTask,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.EncryptedDatum,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.SecretUserMetadatum,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.SecretStoreMetadatum,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.ContainerSecret,
threshold_date=threshold_date)
total += cleanup_parent_with_no_child(models.Secret, models.Order,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.ContainerConsumerMetadatum,
threshold_date=threshold_date)
total += cleanup_parent_with_no_child(models.Container, models.Order,
threshold_date=threshold_date)
total += cleanup_softdeletes(models.KEKDatum,
threshold_date=threshold_date)
# TODO(edtubill) Clean up projects that were soft deleted by
# the keystone listener
LOG.info("Cleaned up %s soft deleted entries", total)
return total
def _soft_delete_expired_secrets(threshold_date):
"""Soft delete expired secrets.
:param threshold_date: secrets that have expired past this date
will be soft deleted
:returns: total number of secrets that were soft deleted
"""
current_time = timeutils.utcnow()
session = repo.get_session()
query = session.query(models.Secret.id)
query = query.filter(~models.Secret.deleted)
query = query.filter(
models.Secret.expiration <= threshold_date
)
update_count = query.update(
{
models.Secret.deleted: True,
models.Secret.deleted_at: current_time
},
synchronize_session='fetch')
return update_count


def _hard_delete_acls_for_soft_deleted_secrets():
    """Remove acl entries for secrets that have been soft deleted.

    Removes entries in SecretACL and SecretACLUser which are for secrets
    that have been soft deleted.
    """
    session = repo.get_session()
    acl_user_sub_query = session.query(models.SecretACLUser.id)
    acl_user_sub_query = acl_user_sub_query.join(models.SecretACL)
    acl_user_sub_query = acl_user_sub_query.join(models.Secret)
    acl_user_sub_query = acl_user_sub_query.filter(models.Secret.deleted)
    acl_user_sub_query = acl_user_sub_query.subquery()
    acl_user_sub_query = sa_sql.select([acl_user_sub_query])

    acl_user_query = session.query(models.SecretACLUser)
    acl_user_query = acl_user_query.filter(
        models.SecretACLUser.id.in_(acl_user_sub_query))
    acl_total = acl_user_query.delete(synchronize_session='fetch')

    acl_sub_query = session.query(models.SecretACL.id)
    acl_sub_query = acl_sub_query.join(models.Secret)
    acl_sub_query = acl_sub_query.filter(models.Secret.deleted)
    acl_sub_query = acl_sub_query.subquery()
    acl_sub_query = sa_sql.select([acl_sub_query])

    acl_query = session.query(models.SecretACL)
    acl_query = acl_query.filter(
        models.SecretACL.id.in_(acl_sub_query))
    acl_total += acl_query.delete(synchronize_session='fetch')

    return acl_total
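The function above uses a select-the-ids-then-`DELETE ... WHERE id IN (subquery)` shape, since a bulk delete cannot join other tables directly. The same pattern in plain SQL over `sqlite3`, with an illustrative two-table schema standing in for the Secret/ACL models:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE secrets (id INTEGER PRIMARY KEY, deleted INTEGER DEFAULT 0);
CREATE TABLE secret_acls (id INTEGER PRIMARY KEY,
                          secret_id INTEGER REFERENCES secrets(id));
INSERT INTO secrets (id, deleted) VALUES (1, 1), (2, 0);
INSERT INTO secret_acls (id, secret_id) VALUES (10, 1), (11, 1), (12, 2);
""")

# Hard delete ACL rows whose parent secret is soft deleted: the join lives
# in a subquery because DELETE itself cannot reference a second table.
cur = conn.execute(
    "DELETE FROM secret_acls WHERE id IN ("
    "  SELECT a.id FROM secret_acls a"
    "  JOIN secrets s ON s.id = a.secret_id"
    "  WHERE s.deleted = 1)")
print(cur.rowcount)  # ACL rows removed
```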


def _soft_delete_expired_secret_children(threshold_date):
    """Soft delete the children tables of expired secrets.

    Soft deletes the children tables and hard deletes the ACL children
    tables of the expired secrets.

    :param threshold_date: threshold date for secret expiration
    :returns: returns a pair for number of soft delete children and deleted
              ACLs
    """
    current_time = timeutils.utcnow()

    secret_children = [models.SecretStoreMetadatum,
                       models.SecretUserMetadatum,
                       models.EncryptedDatum,
                       models.ContainerSecret]
    children_names = map(lambda child: child.__name__, secret_children)
    LOG.debug("Children tables for Secret table being checked: %s",
              str(children_names))
    session = repo.get_session()
    update_count = 0

    for table in secret_children:
        # Go through children and soft delete them
        sub_query = session.query(table.id)
        sub_query = sub_query.join(models.Secret)
        sub_query = sub_query.filter(
            models.Secret.expiration <= threshold_date
        )
        sub_query = sub_query.subquery()
        sub_query = sa_sql.select([sub_query])
        query = session.query(table)
        query = query.filter(table.id.in_(sub_query))
        current_update_count = query.update(
            {
                table.deleted: True,
                table.deleted_at: current_time
            },
            synchronize_session='fetch')
        update_count += current_update_count

    session.flush()
    acl_total = _hard_delete_acls_for_soft_deleted_secrets()
    return update_count, acl_total


def soft_delete_expired_secrets(threshold_date):
    """Soft deletes secrets that are past expiration date.

    The expired secrets and their children are marked for deletion.
    ACLs are soft deleted and then purged from the database.

    :param threshold_date: secrets that have expired past this date
                           will be soft deleted
    :returns: the sum of soft deleted entries and hard deleted acl entries
    """
    # Note: sqlite does not support multiple table updates so
    # several db updates are used instead
    LOG.debug('Soft deleting expired secrets older than: %s',
              str(threshold_date))
    update_count = _soft_delete_expired_secrets(threshold_date)

    children_count, acl_total = _soft_delete_expired_secret_children(
        threshold_date)
    update_count += children_count

    LOG.info("Soft deleted %(update_count)s entries due to secret "
             "expiration and %(acl_total)s secret acl entries "
             "were removed from the database",
             {'update_count': update_count,
              'acl_total': acl_total})
    return update_count + acl_total
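The comment about SQLite motivates the code's structure: since one `UPDATE` statement cannot touch several tables, the cleanup issues one statement per child table and sums the row counts. A stripped-down sketch of that per-table loop, again over `sqlite3` with illustrative table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE secret_store_metadata (id INTEGER PRIMARY KEY,
                                    deleted INTEGER DEFAULT 0);
CREATE TABLE encrypted_data (id INTEGER PRIMARY KEY,
                             deleted INTEGER DEFAULT 0);
INSERT INTO secret_store_metadata (id) VALUES (1);
INSERT INTO encrypted_data (id) VALUES (1), (2);
""")

total = 0
# SQLite has no multi-table UPDATE, so issue one UPDATE per child table,
# mirroring the loop in _soft_delete_expired_secret_children(). Table
# names cannot be bound as SQL parameters, hence the string formatting
# (safe here because the names come from a fixed list, not user input).
for table in ("secret_store_metadata", "encrypted_data"):
    cur = conn.execute(
        "UPDATE {} SET deleted = 1 WHERE deleted = 0".format(table))
    total += cur.rowcount
print(total)  # rows soft deleted across all child tables
```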


def clean_command(sql_url, min_num_days, do_clean_unassociated_projects,
                  do_soft_delete_expired_secrets, verbose, log_file):
    """Clean command to clean up the database.

    :param sql_url: sql connection string to connect to a database
    :param min_num_days: clean up soft deletions older than this date
    :param do_clean_unassociated_projects: If True, clean up
                                           unassociated projects
    :param do_soft_delete_expired_secrets: If True, soft delete secrets
                                           that have expired
    :param verbose: If True, log and print more information
    :param log_file: If set, override the log_file configured
    """
    if verbose:
        # The verbose flag prints out log events to the screen, otherwise
        # the log events will only go to the log file
        CONF.set_override('debug', True)

    if log_file:
        CONF.set_override('log_file', log_file)

    LOG.info("Cleaning up soft deletions in the barbican database")
    log.setup(CONF, 'barbican')

    cleanup_total = 0
    current_time = timeutils.utcnow()
    stop_watch = timeutils.StopWatch()
    stop_watch.start()
    try:
        if sql_url:
            CONF.set_override('sql_connection', sql_url)
        repo.setup_database_engine_and_factory()

        if do_clean_unassociated_projects:
            cleanup_total += cleanup_unassociated_projects()

        if do_soft_delete_expired_secrets:
            cleanup_total += soft_delete_expired_secrets(
                threshold_date=current_time)

        threshold_date = None
        if min_num_days >= 0:
            threshold_date = current_time - datetime.timedelta(
                days=min_num_days)
        else:
            threshold_date = current_time
        cleanup_total += cleanup_all(threshold_date=threshold_date)

        repo.commit()
    except Exception as ex:
        LOG.exception('Failed to clean up soft deletions in database.')
        repo.rollback()
        cleanup_total = 0  # rollback happened, no entries affected
        raise ex
    finally:
        stop_watch.stop()
        elapsed_time = stop_watch.elapsed()
        if verbose:
            CONF.clear_override('debug')

        if log_file:
            CONF.clear_override('log_file')

        repo.clear()

        if sql_url:
            CONF.clear_override('sql_connection')

        log.setup(CONF, 'barbican')  # reset the overrides

        LOG.info("Cleaning of database affected %s entries", cleanup_total)
        LOG.info('DB clean up finished in %s seconds', elapsed_time)
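`clean_command` relies on a `try/except/finally` shape so that config overrides are always cleared and elapsed time is always reported, whether the cleanup commits or rolls back. A stripped-down sketch of that control flow, with `time.monotonic()` standing in for oslo's `StopWatch` and a hypothetical `clean()` in place of the real cleanup calls:

```python
import time

def clean(fail=False):
    cleanup_total = 0
    start = time.monotonic()          # StopWatch.start() equivalent
    try:
        if fail:
            raise RuntimeError("db error")   # stand-in for a failed commit
        cleanup_total += 5                   # stand-in for cleanup_all() etc.
    except Exception:
        cleanup_total = 0             # rollback happened, no entries affected
        raise
    finally:
        # Runs on both the success and failure paths, like the override
        # clearing and timing report in clean_command().
        elapsed = time.monotonic() - start   # StopWatch.elapsed() equivalent
        print("affected %s entries in %.3fs" % (cleanup_total, elapsed))
    return cleanup_total
```

On success this returns the entry count; on failure it re-raises after the `finally` block has still logged a (zeroed) summary.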


@@ -1,57 +0,0 @@
# A generic, single database configuration

[alembic]
# path to migration scripts
script_location = %(here)s/alembic_migrations

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# default to an empty string because the Barbican migration process will
# extract the correct value and set it programmatically before alembic is fully
# invoked.
sqlalchemy.url =
#sqlalchemy.url = driver://user:pass@localhost/dbname
#sqlalchemy.url = sqlite:///barbican.sqlite
#sqlalchemy.url = sqlite:////var/lib/barbican/barbican.sqlite
#sqlalchemy.url = postgresql+psycopg2://postgres:postgres@localhost:5432/barbican_api

# Logging configuration
[loggers]
keys = alembic
#keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = DEBUG
handlers = console
qualname =

[logger_sqlalchemy]
level = DEBUG
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
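The comment above explains why `sqlalchemy.url` is left blank: Barbican injects the real connection string programmatically before Alembic runs. A stdlib sketch of that inject-at-runtime pattern using `configparser`, with an inline ini fragment standing in for this file (the real code goes through Alembic's own config object rather than `configparser`):

```python
import configparser

INI = """
[alembic]
script_location = %(here)s/alembic_migrations
sqlalchemy.url =
"""

# interpolation=None so the %(here)s placeholder is treated as literal text
# instead of a configparser interpolation reference.
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(INI)
assert cfg["alembic"]["sqlalchemy.url"] == ""   # empty until injected

# Inject the real connection string, as the migration tooling would do
# just before handing control to Alembic.
cfg["alembic"]["sqlalchemy.url"] = "sqlite:///barbican.sqlite"
print(cfg["alembic"]["sqlalchemy.url"])
```

Keeping the URL out of the committed ini file also avoids hard-coding credentials, which is why only commented-out sample URLs appear above.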
