Format based on README template

This is a move towards a more standardised README.

The Series Upgrade section should probably eventually
be moved to the series-upgrade appendix in the CDG.

Change-Id: I9ca724dc9b3db02dc32f6462f5e24cc453d0dd3e
Peter Matulis
2020-05-12 22:20:19 -04:00
parent b475200016
commit 2f69eeb003

README.md

@@ -5,34 +5,86 @@ MySQL clustering. Percona XtraDB Cluster integrates Percona Server with the
Galera library of MySQL high availability solutions in a single product package
which enables you to create a cost-effective MySQL cluster.
The percona-cluster charm deploys Percona XtraDB Cluster and provides DB
services to those charms that support the 'mysql-shared' interface. The current
list of such charms can be obtained from the [Charm
Store][charms-requires-mysql-shared] (the charms officially supported by the
OpenStack Charms project are published by 'openstack-charmers').
# Usage
## Configuration
This section covers common configuration options. See file `config.yaml` for
the full list of options, along with their descriptions and default values.
#### `max-connections`
The `max-connections` option sets the maximum number of allowed connections.
The default is 600. This is an important option and is discussed in the Memory
section below.
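For example, the limit can be adjusted on a running application (the value
shown is illustrative only):

    juju config percona-cluster max-connections=2000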
#### `min-cluster-size`
The `min-cluster-size` option sets the number of percona-cluster units required
to form its cluster. It is best practice to use this option as doing so ensures
that the charm will wait until the cluster is up before accepting relations
from other client applications.
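For example, to set this option on an existing application (it can also be
given at deploy time, as shown in the High availability section below):

    juju config percona-cluster min-cluster-size=3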
## Deployment
To deploy a single percona-cluster unit:

    juju deploy percona-cluster
To make use of DB services, simply add a relation between percona-cluster and
an application that supports the 'mysql-shared' interface. For instance:

    juju add-relation percona-cluster:shared-db keystone:shared-db
Passwords required for the correct operation of the deployment are
automatically generated and stored by the application leader. The root password
for mysql can be retrieved using the following command:

    juju run --unit percona-cluster/0 leader-get root-password
Root user DB access is only usable from within one of the deployed units
(access to root is restricted to localhost only).
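For example, a root MySQL shell could then be opened from within a unit (a
sketch; the client will prompt for the password retrieved above):

    juju ssh percona-cluster/0
    mysql -u root -p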
## High availability
When more than one unit is deployed with the hacluster application, the charm
will bring up an HA active/active cluster. The `min-cluster-size` option
should be used (see description above).
To deploy a three-node cluster:

    juju deploy -n 3 --config min-cluster-size=3 percona-cluster
There are two mutually exclusive high availability options: using virtual IP(s)
or DNS. In both cases the hacluster subordinate charm is used to provide the
Corosync and Pacemaker backend HA functionality.
See the [OpenStack high availability][cdg-app-ha-apps] appendix in the
[OpenStack Charms Deployment Guide][cdg] for details.
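As a minimal sketch of the virtual IP approach (it assumes that 10.0.0.100 is
a valid address on the units' shared subnet; see the appendix above for the
full procedure):

    juju config percona-cluster vip=10.0.0.100
    juju deploy hacluster
    juju add-relation percona-cluster:ha hacluster:ha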
## Actions
This section lists Juju [actions][juju-docs-actions] supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To
display action descriptions run `juju actions percona-cluster`. If the charm is
not deployed then see file `actions.yaml`.
* `backup`
* `bootstrap-pxc`
* `complete-cluster-series-upgrade`
* `mysqldump`
* `notify-bootstrapped`
* `pause`
* `resume`
* `set-pxc-strict-mode`
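For example, to run the `pause` action on the first unit (Juju 2.x syntax):

    juju run-action --wait percona-cluster/0 pause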
## Memory
Percona Cluster is extremely memory sensitive. Setting memory values too low
will give poor performance. Setting them too high will create problems that are
@@ -42,87 +94,34 @@ configurations.
The Percona Cluster charm needs to be deployable in small, low-memory
development environments as well as high-performance production environments.
The charm's opinionated configuration defaults favour the developer environment
in order to ease initial testing. Production environments need to consider
carefully the memory requirements for the hardware or cloud in use. Consult a
[MySQL memory calculator][mysql-memory-calculator] to understand the
implications of the values.
Between the 5.5 and 5.6 releases a significant default was changed. The
[performance schema][upstream-performance-schema] defaulted to on for 5.6 and
later. This allocates all the memory that would be required to handle
`max-connections` plus several other memory settings. With 5.5 memory was
allocated during run-time as needed.
The charm now makes performance schema configurable and defaults to off
(False). With the performance schema turned off memory is allocated when needed
during run-time. It is important to understand this can lead to run-time memory
exhaustion if the configuration values are set too high. Consult a [MySQL
memory calculator][mysql-memory-calculator] to understand the implications of
the values.
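The setting can be toggled at run-time (a sketch; it assumes the charm's
`performance-schema` boolean option):

    juju config percona-cluster performance-schema=true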
The value of `max-connections` should strike a balance between connection
exhaustion and memory exhaustion. Occasionally connection exhaustion occurs in
large production HA clouds with a value of less than 2000. The common practice
became to set it unrealistically high (near 10k or 20k). In the move to 5.6 on
Xenial this became a problem as Percona would fail to start up or behave
erratically as memory exhaustion occurred on the host due to performance schema
being turned on. Even with the default now turned off this value should be
carefully considered against the production requirements and resources
available.
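To gauge whether the limit is sized appropriately, connection counters can be
inspected from within a unit (a sketch; the client will prompt for the root
password):

    juju ssh percona-cluster/0
    mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Max_used_connections';"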
## MySQL asynchronous replication
@@ -138,26 +137,26 @@ setup master-slave replication of "example1" and "example2" databases between
and then relate them:

    juju add-relation pxc1:master pxc2:slave
In order to set up master-master replication, add another relation:

    juju add-relation pxc2:master pxc1:slave
In the same way, circular replication can be set up between multiple clusters.
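For reference, the two applications used above could have been deployed along
these lines (a sketch; the names `pxc1` and `pxc2` are taken from the example):

    juju deploy -n 3 percona-cluster pxc1
    juju deploy -n 3 percona-cluster pxc2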
## Network Space support
This charm supports the use of Juju Network Spaces, allowing the charm to be
bound to network space configurations managed directly by Juju. This is only
supported with Juju 2.0 and above.
You can ensure that database connections and cluster peer communication are
bound to specific network spaces by binding the appropriate interfaces:

    juju deploy percona-cluster --bind "shared-db=internal-space cluster=internal-space"
Alternatively, configuration can be provided as part of a bundle:

    percona-cluster:
      charm: cs:xenial/percona-cluster
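      # A sketch of how the rest of this (truncated) bundle entry would look;
      # the space name 'internal-space' is illustrative:
      num_units: 3
      bindings:
        shared-db: internal-space
        cluster: internal-space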
@@ -171,12 +170,12 @@ within the percona-cluster deployment should use for communication with each
other; the 'shared-db' endpoint binding is used to determine which network
space should be used for access to MySQL database services from other charms.
> **Note**: Spaces must be configured in the underlying provider prior to
attempting to use them.
> **Note**: Existing deployments using the access-network configuration option
will continue to function; this option is preferred over any network space
binding provided for the 'shared-db' relation if set.
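For reference, the legacy option is set as follows (a sketch; the CIDR is
illustrative):

    juju config percona-cluster access-network=10.5.0.0/16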
# Limitations
@@ -242,12 +241,11 @@ This also updates mysql configuration with all peers in the cluster.

    juju set-series percona-cluster xenial
    juju config mysql source=distro
## Upstream documentation
* [Upgrading Percona XtraDB Cluster][upstream-upgrading-percona]
* [Percona XtraDB Cluster In-Place Upgrading Guide: From 5.5 to 5.6][upstream-upgrading-55-to-56]
* [Galera replication - how to recover a PXC cluster][upstream-recovering]
# Cold Boot
@@ -342,4 +340,24 @@ state:

    percona-cluster/2  active  idle  3  10.5.0.27  3306/tcp  Unit is ready
      hacluster/2      active  idle     10.5.0.27            Unit is ready and clustered
The percona-cluster application is now back to a clustered and healthy state.
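The recovery shown above is typically driven by the charm's actions (a sketch;
which unit to bootstrap depends on which one holds the most recent data):

    juju run-action --wait percona-cluster/1 bootstrap-pxc
    juju run-action --wait percona-cluster/0 notify-bootstrapped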
# Bugs
Please report bugs on [Launchpad][lp-bugs-charm-percona-cluster].
For general charm questions refer to the [OpenStack Charm Guide][cg].
<!-- LINKS -->
[cg]: https://docs.openstack.org/charm-guide
[cdg]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide
[cdg-app-ha-apps]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ha.html#ha-applications
[charms-requires-mysql-shared]: https://jaas.ai/search?requires=mysql-shared
[mysql-memory-calculator]: http://www.mysqlcalculator.com/
[lp-bugs-charm-percona-cluster]: https://bugs.launchpad.net/charm-percona-cluster/+filebug
[upstream-performance-schema]: http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-6.html#mysqld-5-6-6-performance-schema
[upstream-upgrading-percona]: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/howtos/upgrade_guide.html
[upstream-upgrading-55-to-56]: https://www.percona.com/doc/percona-xtradb-cluster/5.6/upgrading_guide_55_56.html
[upstream-recovering]: https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
[juju-docs-actions]: https://jaas.ai/docs/actions