Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.
APACHE-2.0 License
Published by jkatz almost 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.5.1 on November 12, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
PostgreSQL Operator 4.5.1 release includes the following software versions upgrades:
Changes and fixes in this release include:

- … `pgcluster` resource. A user no longer has to provide a pgBackRest repository Secret: the Postgres Operator will now automatically generate this.
- … the `pgo show cluster` command.
- If a `pgo-config` ConfigMap is not created during the installation of the Postgres Operator, the Postgres Operator will generate one when it initializes.
- `pgo_admin_password` in the installer is now optional. If no value is provided, the password for the initial administrative user is randomly generated.
- … a `pgcluster` custom resource creation has been processed by its corresponding Postgres Operator controller. This prevents the custom resource from being run by the creation logic multiple times.
- Prevent an `initdb` (cluster reinitialization) from occurring if the PostgreSQL container cannot initialize while bootstrapping from an existing PGDATA directory.
- … the `PGBACKREST_REPO1_TYPE` environmental variable. Reported by Alec Rooney (@alrooney).
- `pgo show backup` will work regardless of the state of any of the PostgreSQL clusters. This pulls the information directly from the pgBackRest Pod itself. Reported by (@saltenhub).
- Ensure the `pgbouncer` administrative user stays synchronized between an existing Kubernetes Secret and PostgreSQL should pgBouncer be recreated.
- … `DEBUG` …

Published by jkatz almost 4 years ago
Published by jkatz about 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.5.0 on October 2, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.5.0 release includes the following software versions upgrades:
Additionally, PostgreSQL Operator 4.5.0 introduces support for the CentOS 8 and UBI 8 base container images. In addition to using the newer operating systems, this enables support for TLS 1.3 when connecting to PostgreSQL.
This release also moves to building the containers using Buildah 1.14.9 for the UBI 8 containers, and 1.11.6 for the CentOS 7, CentOS 8, and UBI 7 containers.
The monitoring stack for the PostgreSQL Operator has shifted to use upstream components as opposed to repackaging them. These are specified as part of the PostgreSQL Operator Installer. We have tested this release with the following versions of each component:
PostgreSQL Operator is tested with Kubernetes 1.15 - 1.19, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), Amazon EKS, and VMware Enterprise PKS 1.3+.
This release makes several changes to the PostgreSQL Operator Monitoring solution, notably making it much easier to set up a turnkey PostgreSQL monitoring solution with the PostgreSQL Operator using the open source pgMonitor project.
pgMonitor combines insightful queries for PostgreSQL with several proven tools for statistics collection, data visualization, and alerting to allow one to deploy a turnkey monitoring solution for PostgreSQL. The pgMonitor 4.4 release added support for Kubernetes environments, particularly with the pgnodemx extension, which allows one to get host-like information from the Kubernetes Pod a PostgreSQL instance is deployed within.
PostgreSQL Operator 4.5 integrates with pgMonitor to take advantage of its Kubernetes support, and provides the following visualized metrics out-of-the-box:
More metrics and visualizations will be added in future releases. You can further customize these to meet the needs of your environment.
PostgreSQL Operator 4.5 uses the upstream packages for Prometheus, Grafana, and Alertmanager. Those using earlier versions of monitoring provided with the PostgreSQL Operator will need to switch to those packages. The tested versions of these packages for PostgreSQL Operator 4.5 include:
You can find out how to install PostgreSQL Operator Monitoring in the installation section:
https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/
pgBackRest powers the disaster recovery capabilities of PostgreSQL clusters deployed by the PostgreSQL Operator. While the PostgreSQL Operator provides many toggles to customize a pgBackRest configuration, it can be easier to do so directly using the pgBackRest configuration file format.
This release adds the ability to specify the pgBackRest configuration from either a ConfigMap or Secret by using the `pgo create cluster --pgbackrest-custom-config` flag, or by setting the `BackrestConfig` attributes in the `pgcluster` CRD. Setting this allows any pgBackRest resource (Pod, Job, etc.) to leverage this custom configuration.
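As a minimal sketch of what such a custom configuration could look like (the ConfigMap name, namespace, and the pgBackRest settings shown here are illustrative assumptions, not values prescribed by the Operator):

```yaml
# Hypothetical ConfigMap carrying a custom pgBackRest configuration.
# The name "hippo-pgbackrest-config" and the settings are examples only;
# consult the pgBackRest documentation before tuning these values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: hippo-pgbackrest-config
  namespace: pgo
data:
  pgbackrest.conf: |
    [global]
    compress-level=6
    process-max=2
```

It could then be referenced at cluster creation with something like `pgo create cluster hippo --pgbackrest-custom-config=hippo-pgbackrest-config` (the flag value shown is an assumption based on the flag described above).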
Note that some settings will be overridden by the PostgreSQL Operator regardless of the settings in a customized pgBackRest configuration file, due to the nature of how the PostgreSQL instances managed by the Operator access pgBackRest. However, these are typically not the settings that one wants to customize.
It is now possible to add custom annotations to the Deployments that the PostgreSQL Operator manages. These include:
Annotations are applied on a per-cluster basis, and can be set either for all the managed Deployments within a cluster or individual Deployment groups. The annotations can be set as part of the Annotations
section of the pgcluster specification.
This also introduces several flags to the pgo
client that help with the management of the annotations. These flags are available on pgo create cluster
and pgo update cluster
commands and include:
- `--annotation`: apply annotations on all managed Deployments
- `--annotation-postgres`: apply annotations on all managed PostgreSQL Deployments
- `--annotation-pgbackrest`: apply annotations on all managed pgBackRest Deployments
- `--annotation-pgbouncer`: apply annotations on all managed pgBouncer Deployments

These flags work similarly to how one manages annotations and labels from `kubectl`. To add an annotation, one follows the format:

`--annotation=key=value`

To remove an annotation, one follows the format:

`--annotation=key-`
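Annotations can equivalently be set declaratively. The fragment below is a sketch of what the Annotations section of a pgcluster specification might look like; the field casing and group names here are assumptions inferred from the flags above, so verify them against the pgcluster CRD reference before use:

```yaml
# Hypothetical pgcluster excerpt; field names are illustrative.
spec:
  annotations:
    global:              # applied to all managed Deployments (cf. --annotation)
      owner: dba-team
    postgres:            # PostgreSQL Deployments only (cf. --annotation-postgres)
      backup-tier: gold
```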
Changes and fixes include:

- The `crunchy-collect` container, used for metrics collection, is renamed to `crunchy-postgres-exporter`.
- The `backrest-restore-<fromClusterName>-to-<toPVC>` pgtask has been renamed to `backrest-restore-<clusterName>`. Additionally, the following parameters no longer need to be specified for the pgtask:
- … `restored Primary created` instead of `restored PVC created`.
- The `toPVC` parameter has been removed from the restore request endpoint.
- … `pg_restore` no longer have `from-<pvcName>` in their names.
- The `pgo-backrest-restore` container has been retired.
- The `pgo load` command has been removed. This also retires the `pgo-load` container.
- The `crunchy-prometheus` and `crunchy-grafana` containers are now removed. Please use the corresponding upstream containers.
- … the `pgo` client when using the following command-line arguments:
  - `pgo create cluster --exporter-cpu`
  - `pgo update cluster --exporter-cpu`
  - `pgo create cluster --exporter-cpu-limit`
  - `pgo update cluster --exporter-cpu-limit`
  - `pgo create cluster --exporter-memory`
  - `pgo update cluster --exporter-memory`
  - `pgo create cluster --exporter-memory-limit`
  - `pgo update cluster --exporter-memory-limit`
- … the `pgnodemx` extension, which makes container-level metrics (CPU, memory, storage utilization) available via a PostgreSQL-based interface.
- The `pgo restore` methodology is changed to mirror the approach taken by `pgo create cluster --restore-from` that was introduced in the previous release. While `pgo restore` will still perform a "restore in-place", it will now take the following actions:
- … `pgo restore`.
- … `pgo restore` functionality, some of which are captured further down in these release notes.
- … the `postgres` database. If you have a pre-existing pgBouncer Deployment, the most convenient way to access this functionality is to redeploy pgBouncer for that PostgreSQL cluster (`pgo delete pgbouncer` + `pgo create pgbouncer`). Suggested by (@lgarcia11).
- The `pgbouncer.ini` and `pg_hba.conf` have been moved from the pgBouncer Secret to a ConfigMap whose name follows the pattern `<clusterName>-pgbouncer-cm`. These are mounted as part of a projected volume in conjunction with the current pgBouncer Secret.
- The `pgo df` command will round values over 1000 up to the next unit type, e.g. `1GiB` instead of `1024MiB`.
- … `pgo restore`. This fix is a result of the change in methodology for how a restore occurs.
- `pgo scaledown` now allows for the removal of replicas that are not actively running.
- The `pgo scaledown --query` command now shows replicas that may not be in an active state.
- … `host` if it is not set.
- … the `--keep-backups` flag, ensure that backups that were created via `--backup-type=pgdump` are retained.
- … `pgo df` instead of timing out.
- … the `--tls-only` flag is set. Reported by (@shuhanfan).
- Fix `pgo label` when applying multiple labels at once.
- Fix `pgo create pgorole` so that the expression `--permissions=*` works.
- The `operator` container will no longer panic if all Deployments are scaled to `0` without using the `pgo update cluster <mycluster> --shutdown` command.

Published by jkatz about 4 years ago
Published by jkatz about 4 years ago
Added the `--client` flag to `pgo version` to output the client version of `pgo`.
Published by jkatz about 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.4.1 on August 18, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.4.1 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
Changes and fixes include:

- … `host` if it is not set.
- Fix `pgo label` when applying multiple labels at once.
- Fix `pgo create pgorole` so that the expression `--permissions=*` works.

Published by jkatz about 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.3.3 on August 18, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.3.3 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
Changes and fixes include:

- Perform a `pg_dump` from a specific database using the `--database` flag when using `pgo backup` with `--backup-type=pgdump`.
- Restore a `pg_dump` to a specific database using the `--pgdump-database` flag using `pgo restore` when `--backup-type=pgdump` is specified.
- Added the `--client` flag to `pgo version` to output the client version of `pgo`.
- The `PGMONITOR_PASSWORD` is now populated by an environmental variable secret. This environmental variable is only set on a primary instance, as it is only needed at the time a PostgreSQL cluster is initialized.
- … `pgo status`, as it is more convenient and accurate to get this information from `kubectl` and the like, and it was not working due to RBAC privileges. (Reported by @mw-0)
- The `pgo-rmdata` container no longer runs as the `root` user, but as `daemon` (UID 2).
- Removed the `expenv` binary that was included in the PostgreSQL Operator release. All `expenv` calls were either replaced with the native `envsubst` program or removed.
- Added `watch` permissions to the `pgo-deployer` ServiceAccount.
- Ensure `client-setup.sh` works when there is an existing `pgo` client in the install path.
- Added `list` verb ClusterRole privileges to several Kubernetes objects.
- … when `pgo update cluster --startup` is issued.
- Fixed an issue where `pgo scale` would not work after `pgo update cluster --shutdown` and `pgo update cluster --startup` were run.
- Ensure `pgo scaledown` deletes external WAL volumes from the replica that is removed.
- … while `pgo upgrade` was running while a HA configuration map attempted to sync. (Reported by Paul Heinen @v3nturetheworld)
- Removed `gcc` from the `postgres-ha` and `pgadmin4` containers.
- Fix `pgo label` when applying multiple labels at once.

Published by jkatz about 4 years ago
Published by jkatz about 4 years ago
Published by jkatz over 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.4.0 on July 17, 2020. Instructions for installing the Postgres Operator are here:
https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.4.0 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.15 - 1.18, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
A technique frequently used in PostgreSQL data management is to have a pgBackRest repository that can be used to create new PostgreSQL clusters. This can be helpful for a variety of purposes:
and more.
This can be accomplished with the following new flags on `pgo create cluster`:

- `--restore-from`: used to specify the name of the pgBackRest repository to restore from via the name of the PostgreSQL cluster (whether the PostgreSQL cluster is active or not).
- `--restore-opts`: used to specify additional options like the ones specified to `pgbackrest restore` (e.g. `--type` and `--target` if performing a point-in-time-recovery).

Only one restore can be performed against a pgBackRest repository at a given time.
PostgreSQL Operator 4.3 introduced a change that allows for the Operator to manage the role-based access controls (RBAC) based upon the Namespace Operating mode that is selected. This ensures that the PostgreSQL Operator is able to function correctly within the Namespace or Namespaces that it is permitted to access. This includes Service Accounts, Roles, and Role Bindings within a Namespace.
PostgreSQL Operator 4.4 removes the requirement of granting the PostgreSQL Operator `bind` and `escalate` privileges for being able to reconcile its own RBAC, and further defines which RBAC is specifically required to use the PostgreSQL Operator (i.e. the removal of wildcard `*` privileges). The permissions that the PostgreSQL Operator requires to perform the reconciliation are assigned when it is deployed and are a function of which `NAMESPACE_MODE` is selected (`dynamic`, `readonly`, or `disabled`).
This change renames the `DYNAMIC_RBAC` parameter in the installer to `RECONCILE_RBAC`, which is set to `true` by default.
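For illustration, the relevant installer settings might look like the following; this is only a sketch of the two parameters described above, and the surrounding file layout depends on which installation method you use:

```yaml
# Hypothetical excerpt of installer configuration values.
RECONCILE_RBAC: "true"     # replaces the former DYNAMIC_RBAC parameter
NAMESPACE_MODE: "dynamic"  # one of: dynamic, readonly, disabled
```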
For more information on how RBAC reconciliation works, please visit the RBAC reconciliation documentation.
Certificate-based authentication is a powerful PostgreSQL feature that allows a PostgreSQL client to authenticate using a TLS certificate. While there are a variety of permutations for how this can be set up, we can at least create a standardized way of enabling the replication connection to authenticate with a certificate, as we do have a known certificate authority.
PostgreSQL Operator 4.4 introduces the `--replication-tls-secret` flag on the `pgo create cluster` command. If it is specified, and if the prerequisites are specified (`--server-tls-secret` and `--server-ca-secret`), then the replication account ("primaryuser") is configured to use certificate-based authentication. Combine with `--tls-only` for powerful results.

Note that the common name (CN) on the certificate MUST be "primaryuser", otherwise one must specify a mapping in a `pg_ident` configuration block to map to the "primaryuser" account.
When mounted to the container, the connection `sslmode` that the replication user uses is set to `verify-ca` by default. We can make that guarantee based on the certificate authority that is being mounted. Using `verify-full` would cause the Operator to make assumptions about the cluster that we cannot make, and as such a custom `pg_ident` configuration block is needed for that. However, using `verify-full` allows for mutual authentication between primary and replica.
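To illustrate the underlying PostgreSQL mechanism referenced above, the mapping could be sketched as follows. This is purely illustrative: the map name and certificate CN are hypothetical, and the Operator normally manages these files itself.

```
# pg_hba.conf (sketch): certificate authentication for replication
# connections, using an identity map named "replmap" (name is an example)
hostssl  replication  primaryuser  0.0.0.0/0  cert  map=replmap

# pg_ident.conf (sketch): map the certificate CN "replica.example.com"
# to the "primaryuser" replication account
# MAPNAME  SYSTEM-USERNAME      PG-USERNAME
replmap    replica.example.com  primaryuser
```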
Changes and fixes include:

- … `RECONCILE_RBAC` (from `DYNAMIC_RBAC`).
- Added the `BackrestS3URIStyle` configuration parameter to the PostgreSQL Operator ConfigMap (`pgo.yaml`), which accepts the values of `host` or `path`.
- Added the `--pgbackrest-s3-uri-style` flag to `pgo create cluster`, which accepts values of `host` or `path`.
- Added the `BackrestS3VerifyTLS` configuration parameter to the PostgreSQL Operator ConfigMap (`pgo.yaml`). Defaults to `true`.
- Added the `--pgbackrest-s3-verify-tls` flag to `pgo create cluster`, which accepts values of `true` or `false`.
- Perform a `pg_dump` from a specific database using the `--database` flag when using `pgo backup` with `--backup-type=pgdump`.
- Restore a `pg_dump` to a specific database using the `--pgdump-database` flag using `pgo restore` when `--backup-type=pgdump` is specified.
- … `pgha-config` (e.g. `sslmode`). See the documentation for words of caution on using these.
- Added the `--client` flag to `pgo version` to output the client version of `pgo`.
- `pgo clone` is now deprecated. For a better cloning experience, please use `pgo create cluster --restore-from`.
- The `PGMONITOR_PASSWORD` is now populated by an environmental variable secret. This environmental variable is only set on a primary instance, as it is only needed at the time a PostgreSQL cluster is initialized.
- … `pgo status`, as it is more convenient and accurate to get this information from `kubectl` and the like, and it was not working due to RBAC privileges. (Reported by @mw-0)
- … `PrimaryHost` and `SecretFrom`.
- The `pgo-rmdata` container no longer runs as the `root` user, but as `daemon` (UID 2).
- Removed the `expenv` binary that was included in the PostgreSQL Operator release. All `expenv` calls were either replaced with the native `envsubst` program or removed.
- … `pgo show cluster`.
- … `pgo-deployer` using the default file with `hostpathstorage` will now successfully deploy PostgreSQL clusters without any adjustments.
- Added `watch` permissions to the `pgo-deployer` ServiceAccount.
- Added `list` verb ClusterRole privileges to several Kubernetes objects.
- Ensure `client-setup.sh` executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
- Ensure `client-setup.sh` works when there is an existing `pgo` client in the install path.
- … `CCP_IMAGE_PULL_SECRET_MANIFEST` and `PGO_IMAGE_PULL_SECRET_MANIFEST` in the `pgo-deployer` configuration.
- … when `pgo update cluster --startup` is issued.
- Fixed an issue where `pgo scale` would not work after `pgo update cluster --shutdown` and `pgo update cluster --startup` were run.
- Ensure `pgo scaledown` deletes external WAL volumes from the replica that is removed.
- … the `pgo-deployer` container. These include #1, #4, and #8.
- … while `pgo upgrade` was running while a HA configuration map attempted to sync. (Reported by Paul Heinen @v3nturetheworld)
- Removed `gcc` from the `postgres-ha` and `pgadmin4` containers.

Published by jkatz over 4 years ago
Published by jkatz over 4 years ago
Crunchy Data is pleased to announce the release of PostgreSQL Operator 4.4.0 Beta 2. We encourage you to download it and try it out.
- Temporary Pods (e.g. a repo sync pod from `pgo clone` or one used when creating a cluster from a pgBackRest repository) no longer show up in `pgo show cluster`.
- Removed the unused `PrimaryHost` and `SecretFrom` attributes.
- Improvements around restarting PostgreSQL clusters (`pgo restart`).
- Fixes to the `pgo` client when using the Ansible installer to ensure that it can actually download the installer.
Crunchy Data announces the release of the PostgreSQL Operator 4.4.0-beta.1 on July 2, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.4.0 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.15 - 1.18, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
A technique frequently used in PostgreSQL data management is to have a pgBackRest repository that can be used to create new PostgreSQL clusters. This can be helpful for a variety of purposes, such as cloning an existing cluster or performing a point-in-time recovery into a new cluster, and more.
This can be accomplished with the following new flags on `pgo create cluster`:

- `--restore-from`: used to specify the pgBackRest repository to restore from, via the name of the PostgreSQL cluster (whether the PostgreSQL cluster is active or not).
- `--restore-opts`: used to specify additional options, like the ones specified to `pgbackrest restore` (e.g. `--type` and `--target` if performing a point-in-time recovery).

Only one restore can be performed against a pgBackRest repository at a given time.
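As a sketch of how these flags fit together (the cluster names and the recovery target timestamp here are illustrative, not from the release notes):

```shell
# Create a new cluster from the pgBackRest repository of a cluster named "hippo"
pgo create cluster hippo2 --restore-from=hippo

# The same, but as a point-in-time recovery, passing pgbackrest restore
# options through via --restore-opts
pgo create cluster hippo2 --restore-from=hippo \
  --restore-opts="--type=time --target='2020-07-02 10:00:00'"
```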
PostgreSQL Operator 4.3 introduced a change that allows for the Operator to manage the role-based access controls (RBAC) based upon the Namespace Operating mode that is selected. This ensures that the PostgreSQL Operator is able to function correctly within the Namespace or Namespaces that it is permitted to access. This includes Service Accounts, Roles, and Role Bindings within a Namespace.
PostgreSQL Operator 4.4 removes the requirement of granting the PostgreSQL Operator `bind` and `escalate` privileges to be able to reconcile its own RBAC, and further defines which RBAC is specifically required to use the PostgreSQL Operator (i.e. the removal of wildcard `*` privileges). The permissions that the PostgreSQL Operator requires to perform the reconciliation are assigned when it is deployed and are a function of which `NAMESPACE_MODE` is selected (`dynamic`, `readonly`, or `disabled`).

This change renames the `DYNAMIC_RBAC` parameter in the installer to `RECONCILE_RBAC`, which is set to `true` by default.
For more information on how RBAC reconciliation works, please visit the RBAC reconciliation documentation.
Certificate-based authentication is a powerful PostgreSQL feature that allows a PostgreSQL client to authenticate using a TLS certificate. While there are a variety of permutations for how this can be set up, we can at least create a standardized way for enabling the replication connection to authenticate with a certificate, as we do have a known certificate authority.

PostgreSQL Operator 4.4 introduces the `--replication-tls-secret` flag on the `pgo create cluster` command. If it is specified along with its prerequisites (`--server-tls-secret` and `--server-ca-secret`), then the replication account ("primaryuser") is configured to use certificate-based authentication. Combine with `--tls-only` for powerful results.

Note that the common name (CN) on the certificate MUST be "primaryuser", otherwise one must specify a mapping in a `pg_ident` configuration block to map to the "primaryuser" account.

When mounted to the container, the connection `sslmode` that the replication user uses is set to `verify-ca` by default. We can make that guarantee based on the certificate authority that is being mounted. Using `verify-full` would cause the Operator to make assumptions about the cluster that we cannot make, and as such a custom `pg_ident` configuration block is needed for that. However, using `verify-full` allows for mutual authentication between primary and replica.
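Putting the flags together, a hypothetical invocation might look like the following; the Secret names are assumptions, and the replication Secret must hold a keypair whose CN is "primaryuser":

```shell
pgo create cluster hippo-tls \
  --tls-only \
  --server-ca-secret=postgresql-ca \
  --server-tls-secret=hippo-tls-keypair \
  --replication-tls-secret=hippo-repl-keypair
```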
- The installer parameter is renamed to `RECONCILE_RBAC` (from `DYNAMIC_RBAC`).
- Added the `BackrestS3URIStyle` configuration parameter to the PostgreSQL Operator ConfigMap (`pgo.yaml`), which accepts the values of `host` or `path`.
- Added the `--pgbackrest-s3-uri-style` flag to `pgo create cluster`, which accepts values of `host` or `path`.
- Added the `BackrestS3VerifyTLS` configuration parameter to the PostgreSQL Operator ConfigMap (`pgo.yaml`). Defaults to `true`.
- Added the `--pgbackrest-s3-verify-tls` flag to `pgo create cluster`, which accepts values of `true` or `false`.
- Perform a `pg_dump` from a specific database using the `--database` flag when using `pgo backup` with `--backup-type=pgdump`.
- Restore a `pg_dump` to a specific database using the `--pgdump-database` flag with `pgo restore` when `--backup-type=pgdump` is specified.
- Additional settings can now be customized via `pgha-config` (e.g. `sslmode`). See the documentation for words of caution on using these.
- Added the `--client` flag to `pgo version` to output the client version of `pgo`.
- `PGMONITOR_PASSWORD` is now populated from an environmental variable secret. This environmental variable is only set on a primary instance, as it is only needed at the time a PostgreSQL cluster is initialized.
- Removed `pgo status`, as it is more convenient and accurate to get this information from `kubectl` and the like, and it was not working due to RBAC privileges. (Reported by @mw-0)
- Removed the `expenv` binary that was included in the PostgreSQL Operator release. All `expenv` calls were either replaced with the native `envsubst` program or removed.
- Temporary Pods no longer show up in `pgo show cluster`.
- Installing `pgo-deployer` using the default file with `hostpathstorage` will now successfully deploy PostgreSQL clusters without any adjustments.
- Added `list` verb ClusterRole privileges to several Kubernetes objects.
- Ensure `client-setup.sh` executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
- Fixed `CCP_IMAGE_PULL_SECRET_MANIFEST` and `PGO_IMAGE_PULL_SECRET_MANIFEST` in the `pgo-deployer` configuration.
- Fixes to the `pgo-deployer` container. These include #1, #4, and #8.
- Removed `gcc` from the `postgres-ha` and `pgadmin4` containers.

Published by jkatz over 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.3.2 on June 3, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
Version 4.3.2 of the PostgreSQL Operator contains bug fixes to the installer container and changes to how CPU/memory requests and limits can be specified.
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
PostgreSQL Operator 4.3.0 introduced some new options to tune the resource requests for PostgreSQL instances under management and other associated deployments, including pgBackRest and pgBouncer. From some of our learnings of running PostgreSQL in Kubernetes, we heavily restricted how the limits on the Pods could be set, and tied them to be the same as the requests.
Due to feedback from a variety of sources, this caused more issues than it helped. As such, we decided to introduce a breaking change into a patch release: the `--enable-*-limit` and `--disable-*-limit` series of flags are removed and replaced with flags that allow you to explicitly set CPU and memory limits.
This release introduces several new flags to various commands, including:

- `pgo create cluster --cpu-limit`
- `pgo create cluster --memory-limit`
- `pgo create cluster --pgbackrest-cpu-limit`
- `pgo create cluster --pgbackrest-memory-limit`
- `pgo create cluster --pgbouncer-cpu-limit`
- `pgo create cluster --pgbouncer-memory-limit`
- `pgo update cluster --cpu-limit`
- `pgo update cluster --memory-limit`
- `pgo update cluster --pgbackrest-cpu-limit`
- `pgo update cluster --pgbackrest-memory-limit`
- `pgo create pgbouncer --cpu-limit`
- `pgo create pgbouncer --memory-limit`
- `pgo update pgbouncer --cpu-limit`
- `pgo update pgbouncer --memory-limit`
Additionally, these values can be modified directly in a pgcluster Custom Resource and the PostgreSQL Operator will react and make the modifications.
- The `pgo-deployer` container can now run using an arbitrary UID.
- For deployments of the `pgo-deployer` container to OpenShift 3.11 environments, a new template YAML file, `postgresql-operator-ocp311.yml`, is provided. This YAML file requires that the `pgo-deployer` is run with the `cluster-admin` role for OpenShift 3.11 environments due to the lack of support for the `escalate` RBAC verb. Other environments (e.g. Kubernetes, OpenShift 4+) still do not require `cluster-admin`.
- Fixed installation of the `pgo` client if the `client-setup.sh` script gets interrupted. Contributed by Itay Grudev (@itay-grudev).
- The `pgo-deployer` container now assigns the required Service Account all the appropriate `get` RBAC privileges, via the `postgres-operator.yml` file, that it needs to properly install. This allows the `install` functionality to properly work across multiple runs.
- The `pgo-deployer` leverages version 4.4 of the `oc` client.
- Support environments that use `MustRunAsNonRoot` Pod Security Policies and the like. Reported by Olivier Beyler (@obeyler).
Published by jkatz over 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.3.1 on May 21, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.3.1 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
SCRAM is a password authentication method in PostgreSQL that has been available since PostgreSQL 10 and is considered to be superior to the `md5` authentication method. The PostgreSQL Operator now introduces support for SCRAM on the `pgo create user` and `pgo update user` commands by means of the `--password-type` flag. The following values for `--password-type` will select the following authentication methods:

- `--password-type=""`, `--password-type="md5"` => md5
- `--password-type="scram"`, `--password-type="scram-sha-256"` => SCRAM-SHA-256

In turn, the PostgreSQL Operator will hash the passwords based on the chosen method and store the computed hash in PostgreSQL.
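For example, to create a user whose password is persisted as a SCRAM-SHA-256 verifier (the user name and password here are placeholders):

```shell
pgo create user hippo --username=alice --password=datalake \
  --password-type=scram-sha-256
```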
When using SCRAM support, it is important to note the following observations and limitations:

- When modifying a user with `pgo update user` (e.g. `--password`, `--rotate-password`, `--expires`) with the desire to keep the persisted password using SCRAM, it is necessary to specify the `--password-type=scram-sha-256` directive.

`pgo restart` and `pgo reload`
This release introduces the `pgo restart` command, which allows you to perform a PostgreSQL restart on one or more instances within a PostgreSQL cluster.
You can restart all instances at the same time using the following command:
pgo restart hippo
or specify a specific instance to restart using the `--target` flag (which follows a similar behavior to the `--target` flag on `pgo scaledown` and `pgo failover`):
pgo restart hippo --target=hippo-abcd
The restart itself is performed by calling the Patroni `restart` REST endpoint on the specific instance (primary or replica) being restarted.
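For the curious, this is roughly equivalent to invoking that endpoint by hand from within an instance's Pod (the Pod name below is a placeholder; Patroni listens on port 8008 by default):

```shell
# POST to the Patroni /restart endpoint of a specific instance
kubectl exec hippo-abcd-5f7d9d8c6-xxxxx -- \
  curl -X POST http://localhost:8008/restart
```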
As with the `pgo failover` and `pgo scaledown` commands, it is also possible to specify the `--query` flag to query instances available for restart:
pgo restart mycluster --query
With the new `pgo restart` command, using the `--query` flag with the `pgo failover` and `pgo scaledown` commands also includes the `PENDING RESTART` information, which is now returned with any replication information.
This release also allows the `pgo reload` command to properly reload all instances (i.e. the primary and all replicas) within the cluster.
The dynamic namespace mode (e.g. `pgo create namespace` + `pgo delete namespace`) provides the ability to create and remove Kubernetes namespaces and automatically add them to the purview of the PostgreSQL Operator. Through the course of fixing usability issues with the other namespace modes (`readonly`, `disabled`), a change needed to be introduced that broke compatibility with Kubernetes 1.12 and earlier.

The PostgreSQL Operator still supports managing PostgreSQL Deployments across multiple namespaces in Kubernetes 1.12 and earlier, but only with `readonly` mode. In `readonly` mode, a cluster administrator needs to create the namespace and the RBAC needed to run the PostgreSQL Operator in that namespace. However, it is now possible to define the RBAC required for the PostgreSQL Operator to manage clusters in a namespace via a ServiceAccount, as described in the Namespace section of the documentation.

The usability change allows one to add a namespace to the PostgreSQL Operator's purview (or deploy the PostgreSQL Operator within a namespace) and automatically set up the appropriate RBAC for the PostgreSQL Operator to correctly operate.
- Removed the requirement of the `cluster-admin` privilege for deploying the PostgreSQL Operator. Reported by (@obeyler).
- When using the `disabled` and `readonly` namespace modes, the PostgreSQL Operator will now dynamically create the required RBAC when a new namespace is added if that namespace has the RBAC defined in `local-namespace-rbac.yaml`. This occurs when `PGO_DYNAMIC_NAMESPACE` is set to `true`.
- The metrics and pgBadger sidecar containers now have default resource values so that their Quality of Service class is `Guaranteed`. The metrics defaults are 100m/24Mi and the pgBadger defaults are 500m/24Mi. Reported by (@jose-joye).
- Introduced the `DISABLE_FSGROUP` option as part of the installation. When set to `true`, this does not add a FSGroup to the Pod Security Context when deploying PostgreSQL-related containers or pgAdmin 4. This is helpful when deploying the PostgreSQL Operator in certain environments, such as OpenShift with a `restricted` Security Context Constraint. Defaults to `false`.
- A custom Security Context Constraint is no longer required (`DISABLE_FSGROUP` will need to be set to `true` for that). The example PostgreSQL Operator SCC is left in the `examples` directory for reference.
- If `PGO_DISABLE_TLS` is set to `true`, then `PGO_TLS_NO_VERIFY` is set to `true`.
- `pgo-deployer` environmental variables that do not need to be set by a user were internalized. These include `ANSIBLE_CONFIG` and `HOME`.
- When using the `pgo-deployer` container to install the PostgreSQL Operator, the default watched namespace is updated to `pgo`, as the example only uses this namespace.
- The `pgo show namespace` command now properly indicates which namespaces a user is able to access.
- `pgo-apiserver` will successfully run if `PGO_DISABLE_TLS` is set to `true`. Reported by (@zhubx007).
- Prevent `pgo-deployer` from failing if it detects the existence of dependent cluster-wide objects already present.
- Installing `pgo-deployer` using the default file with `hostpathstorage` will now successfully deploy PostgreSQL clusters without any adjustments.
- Fixes to the `pgo-client` container.
- Ensure `client-setup.sh` executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
- Fixed `CCP_IMAGE_PULL_SECRET_MANIFEST` and `PGO_IMAGE_PULL_SECRET_MANIFEST` in the `pgo-deployer` configuration.
- Fixes to the `pgo-deployer` container. These include #1, #4, and #8 in the `STORAGE` family of variables.

Published by jkatz over 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.2.3 on May 21, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.2.3 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
- The `pgo-rmdata` Job no longer calls the `rm` command on any data within the PVC, but rather leaves this task to the storage provisioner.
- The `pgo show namespace` command now properly indicates which namespaces a user is able to access.
- Ensure `rsync` is installed on the `pgo-backrest-repo-sync` UBI7 image.
- Changes are now only applied when `pgo apply` is executed, which was the previous behavior. Reported by José Joye (@jose-joye).
- Ensure `archive_timeout` is set when new PostgreSQL clusters are initialized. Reported by Adrian (@adifri).
- Fixed an issue where an instance could not be removed with `pgo scaledown` after it is failed over.
- The `pgo-rmdata` Job will not fail if a PostgreSQL cluster has not been properly initialized.
- The `failover` ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted.

Published by jkatz over 4 years ago
Crunchy Data announces the release of the PostgreSQL Operator 4.3.0 on May 1, 2020.
The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.
The PostgreSQL Operator 4.3.0 release includes the following software versions upgrades:
PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.
- The `pgo-deployer` container
- `pgo upgrade`
- Removal of the `ClusterRole` requirement for using the PostgreSQL Operator

A key component of building database architectures that can ensure continuity of operations is the ability to have the database available across multiple data centers. In Kubernetes, this means being able to run the PostgreSQL Operator in multiple Kubernetes clusters, have PostgreSQL clusters exist in these Kubernetes clusters, and ensure that the "standby" deployment is promoted in the event of an outage or planned switchover.

As of this release, the PostgreSQL Operator now supports standby PostgreSQL clusters that can be deployed across namespaces or other Kubernetes or Kubernetes-enabled clusters (e.g. OpenShift). This is accomplished by leveraging the PostgreSQL Operator's support for pgBackRest and leveraging an intermediary, i.e. S3, to provide the ability for the standby cluster to read in the PostgreSQL archives and replicate the data. This allows a user to quickly promote a standby PostgreSQL cluster in the event that the primary cluster suffers downtime (e.g. a data center outage), or for planned switchovers such as Kubernetes cluster maintenance or moving a PostgreSQL workload from one data center to another.
To support standby clusters, there are several new flags available on `pgo create cluster` that are required to set up a new standby cluster. These include:

- `--standby`: if set, creates the PostgreSQL cluster as a standby cluster.
- `--pgbackrest-repo-path`: allows the user to override the pgBackRest repository path for a cluster. While this setting can now be utilized when creating any cluster, it is typically required for the creation of standby clusters, as the repository path will need to match that of the primary cluster.
- `--password-superuser`: when creating a standby cluster, allows the user to specify a password for the superuser that matches the superuser account in the cluster the standby is replicating from.
- `--password-replication`: when creating a standby cluster, allows the user to specify a password for the replication user that matches the replication account in the cluster the standby is replicating from.

Note that the `--password` flag must be used to ensure the password of the main PostgreSQL user account matches that of the primary PostgreSQL cluster, if you are using Kubernetes to manage the user's password.
For example, if you have a cluster named `hippo` and wanted to create a standby cluster called `hippo-standby`, and assuming the S3 credentials are using the defaults provided to the PostgreSQL Operator, you could execute a command similar to:

pgo create cluster hippo-standby --standby \
  --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \
  --password-superuser=superhippo \
  --password-replication=replicahippo
To shut down the primary cluster (if you can), you can execute a command similar to:
pgo update cluster hippo --shutdown
To promote the standby cluster to be able to accept write traffic, you can execute the following command:
pgo update cluster hippo-standby --promote-standby
To convert the old primary cluster into a standby cluster, you can execute the following command:
pgo update cluster hippo --enable-standby
Once the old primary is converted to a standby cluster, you can bring it online with the following command:
pgo update cluster hippo --startup
For information on the architecture and how to set up a standby PostgreSQL cluster, please refer to the documentation.

At present, streaming replication between the primary and standby clusters is not supported, but the PostgreSQL instances within each cluster do support streaming replication.
The `pgo-deployer` container

Installation and upgrading have long been two of the biggest challenges of using the PostgreSQL Operator. This release makes improvements on both (with upgrading being described in the next section).

For installation, we have introduced a new container called `pgo-deployer`. For environments that use hostpath storage (e.g. minikube), installing the PostgreSQL Operator can be as simple as:
kubectl create namespace pgo
kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml
The `pgo-deployer` container can be configured by a manifest called `postgres-operator.yml` and provides a set of environmental variables that should be familiar from using the other installers.

The `pgo-deployer` launches a Job in the namespace that the PostgreSQL Operator will be installed into and sets up the requisite Kubernetes objects: CRDs, Secrets, ConfigMaps, etc.

The `pgo-deployer` container can also be used to uninstall the PostgreSQL Operator. For more information, please see the installation documentation.
One of the biggest challenges to using a newer version of the PostgreSQL Operator was upgrading from an older version.
This release introduces the ability to automatically upgrade from an older version of the Operator (as early as 4.1.0) to the newest version (4.3.0) using the `pgo upgrade` command.

The `pgo upgrade` command follows a process similar to the manual PostgreSQL Operator upgrade process, but instead automates it.

To find out more about how to upgrade the PostgreSQL Operator, please review the upgrade documentation.
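For instance, once the Operator itself has been updated, an individual cluster (assumed here to be named `hippo`) can be brought forward with:

```shell
pgo upgrade hippo
```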
The configuration for a PostgreSQL cluster managed by the PostgreSQL Operator can now be easily customized by making changes directly to the ConfigMap that is created with each PostgreSQL cluster. The ConfigMap, which follows the pattern `<clusterName>-pgha-config` (e.g. `hippo-pgha-config` for `pgo create cluster hippo`), manages the user-facing configuration settings available for a PostgreSQL cluster, and when modified, it will automatically synchronize the settings across all primaries and replicas in a PostgreSQL cluster.

Presently, the ConfigMap can be edited using the `kubectl edit cm` command, and future iterations will add functionality to the PostgreSQL Operator to make this process easier.
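For example, assuming the `hippo` cluster lives in the `pgo` namespace (the namespace is an assumption):

```shell
kubectl -n pgo edit cm hippo-pgha-config
```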
The PostgreSQL Operator provides the ability to customize how large a PVC can be via the "storage config" options available in the PostgreSQL Operator configuration file (aka `pgo.yaml`). While these provide a baseline level of customizability, it is often important to be able to set the size of the PVC that a PostgreSQL cluster should use at cluster creation time. In other words, users should be able to choose exactly how large they want their PostgreSQL PVCs to be.
PostgreSQL Operator 4.3 introduces the ability to set the PVC sizes for the PostgreSQL cluster, the pgBackRest repository for the PostgreSQL cluster, and the PVC size for each tablespace at cluster creation time. Additionally, this behavior has been extended to the clone functionality as well, which is helpful when trying to resize a PostgreSQL cluster. Here is some information on the flags that have been added:
For `pgo create cluster`:

- `--pvc-size`: sets the PVC size for the PostgreSQL data directory.
- `--pgbackrest-pvc-size`: sets the PVC size for the PostgreSQL pgBackRest repository.

For tablespaces, one can use the `pvcsize` option to set the PVC size for that tablespace.

For clones:

- `--pvc-size`: sets the PVC size for the PostgreSQL data directory for the newly created cluster.
- `--pgbackrest-pvc-size`: sets the PVC size for the PostgreSQL pgBackRest repository for the newly created cluster.
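Combining these flags, a sketch (the cluster name and sizes are illustrative):

```shell
pgo create cluster hippo \
  --pvc-size=20Gi \
  --pgbackrest-pvc-size=40Gi
```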
Tablespaces can be used to spread out PostgreSQL workloads across multiple volumes, which can be helpful for a variety of use cases and more.
Tablespaces can be created via the `pgo create cluster` command using the `--tablespace` flag. The arguments to `--tablespace` can be passed in using one of several key/value pairs, including:

- `name` (required): the name of the tablespace
- `storageconfig` (required): the storage configuration to use for the tablespace
- `pvcsize`: if specified, the size of the PVC. Defaults to the PVC size in the storage configuration.

Each value is separated by a `:`, for example:
pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage
All tablespaces are mounted in the `/tablespaces` directory. The PostgreSQL Operator manages the mount points and persistent volume claims (PVCs) for the tablespaces, and ensures they are available throughout all of the PostgreSQL lifecycle operations.

One additional value is added to the pgcluster CRD as well.

Tablespaces are automatically created in the PostgreSQL cluster. You can access them as soon as the cluster is initialized. For example, using the tablespace created above, you could create a table on the tablespace `ts` with the following SQL:
CREATE TABLE mytable (id int) TABLESPACE ts;
Tablespaces can also be added to existing PostgreSQL clusters by using the `pgo update cluster` command. The syntax is similar to that of creating a PostgreSQL cluster with a tablespace, i.e.:

pgo update cluster hacluster --tablespace=name=ts2:storageconfig=nfsstorage

As additional volumes need to be mounted to the Deployments, this action can cause downtime, though the expectation is that the downtime is brief.

Based on usage, future work will look at making this more flexible. Dropping tablespaces can be tricky, as a tablespace must contain no objects before PostgreSQL can drop it (i.e. there is no DROP TABLESPACE .. CASCADE command).
Securing connections to PostgreSQL clusters is a typical requirement when deploying to an untrusted network, such as a public cloud. The PostgreSQL Operator makes it easy to enable TLS for PostgreSQL. To do this, one must first create two Secrets: one containing the trusted certificate authority (CA) and one containing the PostgreSQL server's TLS keypair, e.g.:
kubectl create secret generic postgresql-ca --from-file=ca.crt=/path/to/ca.crt
kubectl create secret tls hippo-tls-keypair \
--cert=/path/to/server.crt \
--key=/path/to/server.key
From there, one can create a PostgreSQL cluster that supports TLS with the following command:

pgo create cluster hippo-tls \
  --server-ca-secret=postgresql-ca \
  --server-tls-secret=hippo-tls-keypair
To create a PostgreSQL cluster that only accepts TLS connections and rejects any connection attempts made over an insecure channel, you can use the `--tls-only` flag on cluster creation, e.g.:

pgo create cluster hippo-tls \
  --tls-only \
  --server-ca-secret=postgresql-ca \
  --server-tls-secret=hippo-tls-keypair
An optimization used for improving PostgreSQL performance related to file system usage is to have the PostgreSQL write-ahead logs (WAL) written to a different mounted volume than other parts of the PostgreSQL system, such as the data directory.
To support this, the PostgreSQL Operator now supports the ability to specify an external volume for writing the PostgreSQL write-ahead log (WAL) during cluster creation, which carries through to replicas and clones. When not specified, the WAL resides within the PGDATA directory and volume, which is the present behavior.
To create a PostgreSQL cluster that uses an external volume, one can use the `--wal-storage-config` flag at cluster creation time to select the storage configuration to use, e.g.:
pgo create cluster --wal-storage-config=nfsstorage hippo
Additionally, it is also possible to specify the size of the WAL storage on all newly created clusters. When in use, the size of the volume can be overridden per cluster. This is specified with the `--wal-storage-size` flag, i.e.:
pgo create cluster --wal-storage-config=nfsstorage --wal-storage-size=10Gi hippo
This implementation does not define the WAL volume in any deployment templates because the volume name and mount path are constant.
Removal of the `ClusterRole` Requirement for the PostgreSQL Operator

PostgreSQL Operator 4.0 introduced the ability to manage PostgreSQL clusters across multiple Kubernetes Namespaces. PostgreSQL Operator 4.1 built on this functionality by allowing users to dynamically control which Namespaces it managed as well as the PostgreSQL clusters deployed to them. In order to leverage this feature, one must grant `ClusterRole`-level permissions to the PostgreSQL Operator via a ServiceAccount.

There are a lot of deployment environments for the PostgreSQL Operator that only need it to exist within a single namespace, and as such, granting cluster-wide privileges is superfluous and in many cases undesirable. As such, it should be possible to deploy the PostgreSQL Operator to a single namespace without requiring a `ClusterRole`.
To do this, but maintain the aforementioned Namespace functionality for those who require it, PostgreSQL Operator 4.3 introduces the ability to opt into deploying it with the minimum required `ClusterRole` privileges and, in turn, the ability to deploy the PostgreSQL Operator without a `ClusterRole`. To do so, the PostgreSQL Operator introduces the concept of a "namespace operating mode", which lets one select the type of deployment to create. The namespace mode is set at install time for the PostgreSQL Operator, and falls into one of three options:

- `dynamic`: This is the default. This enables full dynamic Namespace management capabilities, in which the PostgreSQL Operator can create, delete and update any Namespaces within the Kubernetes cluster, while also having the ability to create the Roles, Role Bindings and Service Accounts within those Namespaces for normal operations. The PostgreSQL Operator can also listen for Namespace events and create or remove controllers for various Namespaces as changes are made to Namespaces from Kubernetes and the PostgreSQL Operator's management.

- `readonly`: In this mode, the PostgreSQL Operator is able to listen for namespace events within the Kubernetes cluster, and then manage controllers as Namespaces are added, updated or deleted. While this still requires a `ClusterRole`, the permissions mirror those of a "read-only" environment, and as such the PostgreSQL Operator is unable to create, delete or update Namespaces itself, nor create the RBAC that it requires in any of those Namespaces. Therefore, while in `readonly` mode, namespaces must be preconfigured with the proper RBAC, as the PostgreSQL Operator cannot create the RBAC itself.

- `disabled`: Use this mode if you do not want to deploy the PostgreSQL Operator with any `ClusterRole` privileges, especially if you are only deploying the PostgreSQL Operator to a single namespace. This disables any Namespace management capabilities within the PostgreSQL Operator, which will simply attempt to work with the target Namespaces specified during installation. If no target Namespaces are specified, then the Operator will be configured to work within the namespace in which it is deployed. As with the `readonly` mode, while in this mode, Namespaces must be pre-configured with the proper RBAC, since the PostgreSQL Operator cannot create the RBAC itself.

Based on the installer you use, the variable to set this mode is named one of the following:

- `NAMESPACE_MODE`
- `PGO_NAMESPACE_MODE`
- `namespace_mode`
pgAdmin 4 is a popular graphical user interface that lets you work with PostgreSQL databases from both a desktop or web-based client. With its ability to manage and orchestrate changes for PostgreSQL users, the PostgreSQL Operator is a natural partner to keep a pgAdmin 4 environment synchronized with a PostgreSQL environment.
This release introduces an integration with pgAdmin 4 that allows you to deploy a pgAdmin 4 environment alongside a PostgreSQL cluster and keeps the user's database credentials synchronized. You can simply log into pgAdmin 4 with your PostgreSQL username and password and immediately have access to your databases.
For example, say there is a PostgreSQL cluster called `hippo` that has a user named `hippo` with password `datalake`:
pgo create cluster hippo --username=hippo --password=datalake
After the PostgreSQL cluster becomes ready, you can create a pgAdmin 4 deployment with the `pgo create pgadmin` command:
pgo create pgadmin hippo
This creates a pgAdmin 4 deployment unique to this PostgreSQL cluster and synchronizes the PostgreSQL user information into it. To access pgAdmin 4, you can set up a port-forward to the Service, which follows the pattern `<clusterName>-pgadmin`, to port `5050`:
kubectl port-forward svc/hippo-pgadmin 5050:5050
Point your browser at `http://localhost:5050` and use your database username (e.g. `hippo`) and password (e.g. `datalake`) to log in.
(Note: if your password does not appear to work, you can retry setting up the user with the `pgo update user` command: `pgo update user hippo --password=datalake`)
The `pgo create user`, `pgo update user`, and `pgo delete user` commands are synchronized with the pgAdmin 4 deployment. Note that if you use `pgo create user` without the `--managed` flag prior to deploying pgAdmin 4, the user's credentials will not be synchronized to the pgAdmin 4 deployment. However, a subsequent run of `pgo update user --password` will synchronize the credentials with pgAdmin 4.
You can remove the pgAdmin 4 deployment with the `pgo delete pgadmin` command.
We have released the first version of this change under "feature preview" so you can try it out. As with all of our features, we are open to feedback on how we can continue to improve the PostgreSQL Operator.
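For the `hippo` example above, tearing down the pgAdmin 4 deployment is a single command:

```shell
# Remove the pgAdmin 4 deployment associated with the "hippo" cluster
pgo delete pgadmin hippo
```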
`pgo df`
`pgo df` provides information on the disk utilization of a PostgreSQL cluster; previously, it was not reporting accurate numbers. The new `pgo df` looks at each PVC that is mounted to each PostgreSQL instance in a cluster, including the PVCs for tablespaces, and computes the overall utilization. Even better, the data is returned in a structured format for easy scraping. This implementation also leverages Golang concurrency to help compute the results quickly.
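Using the earlier `hippo` cluster as an example, the improved command is invoked as before (the cluster name comes from the pgAdmin 4 walkthrough above):

```shell
# Report per-PVC and overall disk utilization for the "hippo" cluster
pgo df hippo
```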
The pgBouncer integration was completely rewritten to support TLS-only operations via the PostgreSQL Operator. While most of the work was internal, you should now see a much more stable pgBouncer experience.
The pgBouncer attributes in the `pgclusters.crunchydata.com` CRD are also declarative: any updates will be reflected by the PostgreSQL Operator.
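A quick sketch of the new pgBouncer commands (the flag names come from this release; the cluster name and resource values are illustrative):

```shell
# Deploy two pgBouncer Pods for the "hippo" cluster with explicit resource requests
pgo create pgbouncer hippo --replicas=2 --cpu=0.5 --memory=128Mi

# Inspect the pgBouncer deployment
pgo show pgbouncer hippo
```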
Additionally, a few new commands and flags were added:
- `pgo create pgbouncer --cpu` and `pgo create pgbouncer --memory`: resource request flags for setting container resources for the pgBouncer instances. For CPU, this also sets the limit.
- `pgo create pgbouncer --enable-memory-limit` sets the Kubernetes resource limit for memory.
- `pgo create pgbouncer --replicas` sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster. The default is `1`.
- `pgo show pgbouncer` shows information about a pgBouncer deployment.
- `pgo update pgbouncer --cpu` and `pgo update pgbouncer --memory`: resource request flags for setting container resources for the pgBouncer instances after they are deployed. For CPU, this also sets the limit.
- `pgo update pgbouncer --disable-memory-limit` and `pgo update pgbouncer --enable-memory-limit` respectively unset and set the Kubernetes resource limit for memory.
- `pgo update pgbouncer --replicas` sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster.
- `pgo update pgbouncer --rotate-password` allows one to rotate the password for the pgBouncer service account.

The user management commands were rewritten to support the TLS-only workflow. These commands now return additional information about a user when actions are taken. Several new flags have been added too, including the option to view all output in JSON. Other flags include:
- `pgo update user --rotate-password` automatically rotates the password.
- `pgo update user --disable-login` disables the ability for a PostgreSQL user to login.
- `pgo update user --enable-login` enables the ability for a PostgreSQL user to login.
- `pgo update user --valid-always` sets a password to always be valid, i.e. it has no expiration.
- `pgo show user` no longer shows system accounts by default, but can be made to show them by using `pgo show user --show-system-accounts`.

A major change as well: the default password expiration is now unlimited (i.e. passwords never expire), which aligns with typical PostgreSQL workflows.
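The new user flags can be combined for routine credential maintenance; a sketch using the `hippo` example cluster (the username is illustrative):

```shell
# Rotate a user's password and confirm the change; system accounts stay hidden by default
pgo update user hippo --username=hippo --rotate-password
pgo show user hippo

# Include system accounts (e.g. the postgres superuser) in the listing
pgo show user hippo --show-system-accounts
```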
- `pgo create cluster` will now set the default database name to be the name of the cluster. For example, `pgo create cluster hippo` would create the initial database named `hippo`.
- The `Database` configuration parameter in `pgo.yaml` (`db_name` in the Ansible inventory) is now set to `""` by default.
- The `--password`/`-w` flag for `pgo create cluster` now only sets the password for the regular user account that is created, not all of the system accounts (e.g. the `postgres` superuser).
- The `postgres-ha.yaml` file is no longer created by the Operator for every PostgreSQL cluster.
- Added the `DefaultInstanceMemory`, `DefaultBackrestMemory`, and `DefaultPgBouncerMemory` options to the `pgo.yaml` configuration to allow for setting default memory requests for PostgreSQL instances, the pgBackRest repository, and pgBouncer instances respectively.
- Removed the `Default...ContainerResources` set of parameters from the `pgo.yaml` configuration file.
- The custom resource `pgbackups.crunchydata.com`, deprecated since 4.2.0, has now been completely removed, along with any code that interfaced with it.
- `PreferredFailoverFeature` is removed. This had not been doing anything since 4.2.0, but some of the legacy bits and configuration were still there.
- `pgo status` no longer returns information about the nodes available in a Kubernetes cluster.
- Removed the `--series` flag from the `pgo create cluster` command. This affects API calls more than actual usage of the `pgo` client.
- `pgo benchmark`, `pgo show benchmark`, and `pgo delete benchmark` are removed. PostgreSQL benchmarks with `pgbench` can still be executed using the `crunchy-pgbench` container.
- `pgo ls` is removed.
- `pgo create cluster` now returns its contents in JSON. The output now includes information about the user that is created.
- `pgo show backup` now returns its contents in JSON. The output view of `pgo show backup` remains the same.
- Removed the `PreferredFailoverNode` feature, as it had already been effectively removed.
- Removed `rm` calls when cleaning up PostgreSQL clusters. This behavior is left to the storage provisioner that one deploys with their PostgreSQL instances.
- `<clusterName>-<backupType>-sch-backup`

New flags have been added to `pgo create cluster` around PostgreSQL users and databases, including:
- `--ccp-image-prefix` sets the `CCPImagePrefix` that specifies the image prefix for the PostgreSQL related containers that are deployed by the PostgreSQL Operator.
- `--cpu` sets the amount of CPU to use for the PostgreSQL instances in the cluster. This also sets the limit.
- `--database` / `-d` sets the name of the initial database created.
- `--enable-memory-limit`, `--enable-pgbackrest-memory-limit`, and `--enable-pgbouncer-memory-limit` enable the Kubernetes memory resource limit for PostgreSQL, pgBackRest, and pgBouncer respectively.
- `--memory` sets the amount of memory to use for the PostgreSQL instances in the cluster.
- `--user` / `-u` sets the PostgreSQL username for the standard database user.
- `--password-length` sets the length of the password that should be generated, if `--password` is not set.
- `--pgbackrest-cpu` sets the amount of CPU to use for the pgBackRest repository.
- `--pgbackrest-memory` sets the amount of memory to use for the pgBackRest repository.
- `--pgbackrest-s3-ca-secret` specifies the name of a Kubernetes Secret that contains a key (`aws-s3-ca.crt`) to override the default CA used for making connections to a S3 interface.
- `--pgbackrest-storage-config` lets one specify a different storage configuration to use for a local pgBackRest repository.
- `--pgbouncer-cpu` sets the amount of CPU to use for the pgBouncer instances.
- `--pgbouncer-memory` sets the amount of memory to use for the pgBouncer instances.
- `--pgbouncer-replicas` sets the number of pgBouncer Pods to deploy with the PostgreSQL cluster. The default is `1`.
- `--pgo-image-prefix` sets the `PGOImagePrefix` that specifies the image prefix for the PostgreSQL Operator containers that help to manage the PostgreSQL clusters.
- `--show-system-accounts` returns the credentials of the system accounts (e.g. the `postgres` superuser) along with the credentials for the standard database user.
- `pgo update cluster` now supports the `--cpu`, `--disable-memory-limit`, `--disable-pgbackrest-memory-limit`, `--enable-memory-limit`, `--enable-pgbackrest-memory-limit`, `--memory`, `--pgbackrest-cpu`, and `--pgbackrest-memory` flags to allow PostgreSQL instances and the pgBackRest repository to have their resources adjusted post deployment.
- Added `PodAntiAffinityPgBackRest` and `PodAntiAffinityPgBouncer` to the `pgo.yaml` configuration file to set specific Pod anti-affinity rules for pgBackRest and pgBouncer Pods that are deployed along with PostgreSQL clusters that are managed by the Operator. The default for pgBackRest and pgBouncer is to use the value that is set in `PodAntiAffinity`.
- `pgo create cluster` now supports the `--pod-anti-affinity-pgbackrest` and `--pod-anti-affinity-pgbouncer` flags to specifically overwrite the pgBackRest repository and pgBouncer Pod anti-affinity rules on a specific PostgreSQL cluster deployment, which overrides any values present in `PodAntiAffinityPgBackRest` and `PodAntiAffinityPgBouncer` respectively. The default for pgBackRest and pgBouncer is to use the value for pod anti-affinity that is used for the PostgreSQL instances in the cluster.
- The image prefix (e.g. `crunchydata`) can now be customized for the containers that are deployed by the PostgreSQL Operator. This adds two fields to the pgcluster CRD: `CCPImagePrefix` and `PGOImagePrefix`.
- A custom CA for S3 connections can be specified with `pgo create cluster` by using the `--pgbackrest-s3-ca-secret` flag, which refers to an existing Secret that contains a key called `aws-s3-ca.crt` that contains the CA. Reported by Aurelien Marie (@aurelienmarie).
- `pgo clone` now supports the `--enable-metrics` flag, which will deploy the monitoring sidecar along with the newly cloned PostgreSQL cluster.
- Added the `--enable-autofail` flag to `pgo update` to make it clear how the autofailover mechanism can be re-enabled for a PostgreSQL cluster.
- Removed `backoffLimit` from Jobs that can be retried, which is most of them.
- `/opt/cpm/bin/health`
- `wal_level` is now defaulted to `logical` to enable logical replication.
- `archive_timeout` is now a default setting in the `crunchy-postgres-ha` and `crunchy-postgres-ha-gis` containers and is set to `60`.
- `ArchiveTimeout`, `LogStatement`, and `LogMinDurationStatement` are removed from `pgo.yaml`, as these can be customized either via a custom `postgresql.conf` file or the `postgres-ha.yaml` file.
- The `node` ClusterRole is no longer used.
- `<clusterName>-<backupType>-sch-backup`
- `pv/create-pv-nfs.sh` has been modified to create persistent volumes with their own directories on the NFS filesystems. This better mimics production environments. The older version of the script still exists as `pv/create-pv-nfs-legacy.sh`.
- The `pgo-rmdata` Job no longer calls the `rm` command on any data within the PVC, but rather leaves this task to the storage provisioner.
- `expenv` in the `add-targeted-namespace.sh` script
- `pgo restore` after `pgo scaledown` is executed
- `pgo scaledown` after it is failed over
- `pgo apply` is executed, which was the previous behavior. Reported by José Joye (@jose-joye)
- The `--query` flag in `pgo scaledown` and `pgo failover` now follows the pattern outlined by the Kubernetes safe random string generator.
- The `stanza-create` Job now waits for both the PostgreSQL cluster and the pgBackRest repository to be ready before executing.
- Removed `backoffLimit` from Jobs that can be retried, which is most of them. Reported by Leo Khomenko (@lkhomenk)
- The `pgo-rmdata` Job will not fail if a PostgreSQL cluster has not been properly initialized.
- Fixed a `pgo-rmdata` crash related to an improper SecurityContext.
- The `failover` ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted.
- Set the `pgo-client` imagePullPolicy to `IfNotPresent`, which is the default for all of the managed containers across the project.
- Set `UsePAM yes` in the `sshd_config` file to fix an issue with using SSHD in newer versions of Docker.
- `add-targeted-namespace.sh` script