postgres-operator

Production PostgreSQL for Kubernetes, from high availability Postgres clusters to full-scale database-as-a-service.

APACHE-2.0 License

postgres-operator - 4.5.1

Published by jkatz almost 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.5.1 on November 12, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.5.1 release includes the following software version upgrades:

  • PostgreSQL is now at versions 13.1, 12.5, 11.10, 10.15, 9.6.20, and 9.5.24.
  • Patroni is now at version 2.0.1.
  • PL/Perl can now be used in the PostGIS-enabled containers. Suggested by Denish Patel (@denishpatel).

Changes

  • Simplified creation of a PostgreSQL cluster from a pgcluster resource. A user no longer has to provide a pgBackRest repository Secret: the Postgres Operator will now automatically generate this.
  • The exposed ports for Services associated with a cluster are now available from the pgo show cluster command.
  • If the pgo-config ConfigMap is not created during the installation of the Postgres Operator, the Postgres Operator will generate one when it initializes.
  • Providing a value for pgo_admin_password in the installer is now optional. If no value is provided, the password for the initial administrative user is randomly generated.
  • Added an example for how to create a PostgreSQL cluster that uses S3 for pgBackRest backups via a custom resource.
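Two of the changes above involve the Operator generating credentials when none are supplied (the pgBackRest repository Secret and the initial administrative password). As an illustrative sketch only, and not the Operator's actual implementation, random credentials of this kind are typically produced with a cryptographically secure generator:

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a random alphanumeric credential.

    Hypothetical sketch of auto-generated credentials; the Postgres
    Operator's real generation logic may differ.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Using the `secrets` module (rather than `random`) matters here because the value guards access to the backup repository and the administrative API.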

Fixes

  • Fix readiness check for a standby leader. Previously, the standby leader would not report as ready, even though it was. Reported by Alec Rooney (@alrooney).
  • Properly determine whether the creation of a pgcluster custom resource has already been processed by its corresponding Postgres Operator controller. This prevents the creation logic from running multiple times for the same custom resource.
  • Prevent initdb (cluster reinitialization) from occurring if the PostgreSQL container cannot initialize while bootstrapping from an existing PGDATA directory.
  • Fix an issue with UBI 8 / CentOS 8 when running a pgBackRest bootstrap or restore job, where duplicate "repo types" could be set. Specifically, this ensures the repo type is set via the PGBACKREST_REPO1_TYPE environment variable. Reported by Alec Rooney (@alrooney).
  • Ensure external WAL and Tablespace PVCs are fully recreated during a restore. Reported by (@aurelien43).
  • Ensure pgo show backup will work regardless of the state of any of the PostgreSQL clusters. This pulls the information directly from the pgBackRest Pod itself. Reported by (@saltenhub).
  • Ensure that sidecars (e.g. metrics collection, pgAdmin 4, pgBouncer) are deployable when using the PostGIS-enabled PostgreSQL image. Reported by Jean-Denis Giguère (@jdenisgiguere).
  • Allow for special characters in pgBackRest environment variables. Reported by (@SockenSalat).
  • Ensure the password for the pgbouncer administrative user stays synchronized between the existing Kubernetes Secret and PostgreSQL should pgBouncer be recreated.
  • When uninstalling an instance of the Postgres Operator in a Kubernetes cluster that has multiple instances of the Postgres Operator, ensure that only the requested instance is uninstalled.
  • The logger no longer defaults to using a log level of DEBUG.

postgres-operator - 4.5.1 Release Candidate 1

Published by jkatz almost 4 years ago

postgres-operator - 4.5.0

Published by jkatz about 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.5.0 on October 2, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.5.0 release includes the following software version upgrades:

Additionally, PostgreSQL Operator 4.5.0 introduces support for the CentOS 8 and UBI 8 base container images. In addition to using the newer operating systems, this enables support for TLS 1.3 when connecting to PostgreSQL.

This release also moves to building the containers with Buildah: version 1.14.9 for the UBI 8 containers, and version 1.11.6 for the CentOS 7, CentOS 8, and UBI 7 containers.

The monitoring stack for the PostgreSQL Operator has shifted to use upstream components as opposed to repackaging them. These are specified as part of the PostgreSQL Operator Installer. We have tested this release with the following versions of each component:

  • Prometheus: 2.20.0
  • Grafana: 6.7.4
  • Alertmanager: 0.21.0

PostgreSQL Operator is tested with Kubernetes 1.15 - 1.19, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), Amazon EKS, and VMware Enterprise PKS 1.3+.

Major Features

PostgreSQL Operator Monitoring

This release makes several changes to the PostgreSQL Operator Monitoring solution, notably making it much easier to set up a turnkey PostgreSQL monitoring solution with the PostgreSQL Operator using the open source pgMonitor project.

pgMonitor combines insightful queries for PostgreSQL with several proven tools for statistics collection, data visualization, and alerting to allow one to deploy a turnkey monitoring solution for PostgreSQL. The pgMonitor 4.4 release added support for Kubernetes environments, particularly with the pgnodemx extension, which allows one to get host-like information from the Kubernetes Pod in which a PostgreSQL instance is deployed.

PostgreSQL Operator 4.5 integrates with pgMonitor to take advantage of its Kubernetes support, and provides the following visualized metrics out-of-the-box:

  • Pod metrics (CPU, Memory, Disk activity)
  • PostgreSQL utilization (Database activity, database size, WAL size, replication lag)
  • Backup information, including last backup and backup size
  • Network utilization (traffic, saturation, latency)
  • Alerts (uptime et al.)

More metrics and visualizations will be added in future releases. You can further customize these to meet the needs of your environment.

PostgreSQL Operator 4.5 uses the upstream packages for Prometheus, Grafana, and Alertmanager. Those using earlier versions of monitoring provided with the PostgreSQL Operator will need to switch to those packages. The tested versions of these packages for PostgreSQL Operator 4.5 include:

  • Prometheus (2.20.0)
  • Grafana (6.7.4)
  • Alertmanager (0.21.0)

You can find out how to install PostgreSQL Operator Monitoring in the installation section:

https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/

Customizing pgBackRest via ConfigMap

pgBackRest powers the disaster recovery capabilities of PostgreSQL clusters deployed by the PostgreSQL Operator. While the PostgreSQL Operator provides many toggles to customize a pgBackRest configuration, it can be easier to do so directly using the pgBackRest configuration file format.

This release adds the ability to specify the pgBackRest configuration from either a ConfigMap or Secret by using the pgo create cluster --pgbackrest-custom-config flag, or by setting the BackrestConfig attributes in the pgcluster CRD. Setting this allows any pgBackRest resource (Pod, Job etc.) to leverage this custom configuration.

Note that some settings will be overridden by the PostgreSQL Operator, regardless of the settings in a customized pgBackRest configuration file, due to the nature of how the PostgreSQL instances managed by the Operator access pgBackRest. However, these are typically not the settings that one wants to customize.
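As a sketch of this workflow, one might store custom pgBackRest settings in a pgbackrest.conf file such as the following (the specific option values here are illustrative; tune them for your environment):

```ini
; pgbackrest.conf: example custom settings (illustrative only)
[global]
; keep two full backups in the repository
repo1-retention-full=2
; run backup/restore with two parallel processes
process-max=2
log-level-console=info
```

The file could then be loaded into a ConfigMap (for example, kubectl create configmap hippo-custom-config --from-file=pgbackrest.conf, where the names are placeholders) and referenced via the --pgbackrest-custom-config flag; confirm the exact value the flag expects against the Operator documentation.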

Apply Custom Annotations to Managed Deployments

It is now possible to add custom annotations to the Deployments that the PostgreSQL Operator manages. These include:

  • PostgreSQL instances
  • pgBackRest repositories
  • pgBouncer instances

Annotations are applied on a per-cluster basis, and can be set either for all the managed Deployments within a cluster or individual Deployment groups. The annotations can be set as part of the Annotations section of the pgcluster specification.

This also introduces several flags to the pgo client that help with the management of the annotations. These flags are available on pgo create cluster and pgo update cluster commands and include:

  • --annotation - applies annotations on all managed Deployments
  • --annotation-postgres - applies annotations on all managed PostgreSQL Deployments
  • --annotation-pgbackrest - applies annotations on all managed pgBackRest Deployments
  • --annotation-pgbouncer - applies annotations on all managed pgBouncer Deployments

These flags work similarly to how one manages annotations and labels from kubectl. To add an annotation, one follows the format:

--annotation=key=value

To remove an annotation, one follows the format:

--annotation=key-
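The add/remove semantics mirror kubectl: key=value sets an annotation, while a trailing - removes it. A small illustrative sketch (not operator code) of how such arguments resolve against an annotation map:

```python
def apply_annotation_args(annotations: dict, args: list) -> dict:
    """Resolve kubectl-style annotation arguments against an annotation map.

    'key=value' sets the annotation; a trailing '-' (e.g. 'key-') removes it.
    Illustrative sketch of the flag format only, not the Operator's code.
    """
    result = dict(annotations)
    for arg in args:
        if arg.endswith("-"):
            # removal form: strip the trailing '-' and drop the key if present
            result.pop(arg[:-1], None)
        else:
            # set form: split on the first '=' only
            key, _, value = arg.partition("=")
            result[key] = value
    return result
```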

Breaking Changes

  • The crunchy-collect container, used for metrics collection, has been renamed to crunchy-postgres-exporter.
  • The backrest-restore-<fromClusterName>-to-<toPVC> pgtask has been renamed to backrest-restore-<clusterName>. Additionally, the following parameters no longer need to be specified for the pgtask:
    • pgbackrest-stanza
    • pgbackrest-db-path
    • pgbackrest-repo-path
    • pgbackrest-repo-host
    • backrest-s3-verify-tls
  • When a restore job completes, it now emits the message restored Primary created instead of restored PVC created.
  • The toPVC parameter has been removed from the restore request endpoint.
  • Restore jobs using pg_restore no longer have from-<pvcName> in their names.
  • The pgo-backrest-restore container has been retired.
  • The pgo load command has been removed. This also retires the pgo-load container.
  • The crunchy-prometheus and crunchy-grafana containers are now removed. Please use the corresponding upstream containers.

Features

  • The metrics collection container now has configurable resources. This can be set as part of the custom resource workflow as well as from the pgo client when using the following command-line arguments:
    • CPU resource requests:
      • pgo create cluster --exporter-cpu
      • pgo update cluster --exporter-cpu
    • CPU resource limits:
      • pgo create cluster --exporter-cpu-limit
      • pgo update cluster --exporter-cpu-limit
    • Memory resource requests:
      • pgo create cluster --exporter-memory
      • pgo update cluster --exporter-memory
    • Memory resource limits:
      • pgo create cluster --exporter-memory-limit
      • pgo update cluster --exporter-memory-limit
  • Support for TLS 1.3 connections to PostgreSQL when using the UBI 8 and CentOS 8 containers
  • Added support for the pgnodemx extension which makes container-level metrics (CPU, memory, storage utilization) available via a PostgreSQL-based interface.

Changes

  • The PostgreSQL Operator now supports the default storage class that is available within a Kubernetes cluster. The installers are updated to use the default storage class by default.
  • The pgo restore methodology is changed to mirror the approach taken by pgo create cluster --restore-from that was introduced in the previous release. While pgo restore will still perform a "restore in-place", it will now take the following actions:
    • Any existing persistent volume claims (PVCs) in the cluster are removed.
    • New PVCs are initialized and the data from the PostgreSQL cluster is restored based on the parameters specified in pgo restore.
    • Any customizations for the cluster (e.g. custom PostgreSQL configuration) will be available.
    • This also fixes several bugs that were reported with the pgo restore functionality, some of which are captured further down in these release notes.
  • Connections to pgBouncer can now be passed along to the default postgres database. If you have a pre-existing pgBouncer Deployment, the most convenient way to access this functionality is to redeploy pgBouncer for that PostgreSQL cluster (pgo delete pgbouncer + pgo create pgbouncer). Suggested by (@lgarcia11).
  • The Downward API is now available to PostgreSQL instances.
  • The pgBouncer pgbouncer.ini and pg_hba.conf have been moved from the pgBouncer Secret to a ConfigMap whose name follows the pattern <clusterName>-pgbouncer-cm. These are mounted as part of a projected volume in conjunction with the current pgBouncer Secret.
  • The pgo df command will round values over 1000 up to the next unit type, e.g. 1GiB instead of 1024MiB.
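The unit rounding described for pgo df can be sketched as follows; this is an illustrative reimplementation of the behavior, not the Operator's actual code:

```python
UNITS = ["B", "KiB", "MiB", "GiB", "TiB"]

def humanize(size_bytes: int) -> str:
    """Roll a byte count up to the next unit once the value passes 1000,
    so 1024MiB displays as 1GiB (illustrative sketch of the described
    `pgo df` behavior)."""
    value = float(size_bytes)
    unit = 0
    while value > 1000 and unit < len(UNITS) - 1:
        value /= 1024
        unit += 1
    # drop trailing zeros so whole numbers print without a decimal point
    text = f"{value:.2f}".rstrip("0").rstrip(".")
    return f"{text}{UNITS[unit]}"
```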

Fixes

  • Ensure that a PostgreSQL cluster recreated from a PVC with existing data applies any custom PostgreSQL configuration settings that are specified.
  • Fixed issues with PostgreSQL replica Pods not becoming ready after running pgo restore. This fix is a result of the change in methodology for how a restore occurs.
  • The pgo scaledown command now allows for the removal of replicas that are not actively running.
  • The pgo scaledown --query command now shows replicas that may not be in an active state.
  • The pgBackRest URI style defaults to host if it is not set.
  • pgBackRest commands can now be executed even if there are multiple pgBackRest Pods available in a Deployment, so long as there is only one "running" pgBackRest Pod. Reported by Rubin Simons (@rubin55).
  • Ensure pgBackRest S3 Secrets can be upgraded from PostgreSQL Operator 4.3.
  • Ensure pgBouncer Port is derived from the cluster's port, not the Operator configuration defaults.
  • External WAL PVCs are only removed for the replica they are targeted for on a scaledown. Reported by (@dakine1111).
  • When deleting a cluster with the --keep-backups flag, ensure that backups that were created via --backup-type=pgdump are retained.
  • Return an error, instead of timing out, if a cluster is not found when using pgo df.
  • pgBadger now has a default memory limit of 64Mi, which should help avoid a visit from the OOM killer.
  • The Postgres Exporter now works if it is deployed in a TLS-only environment, i.e. the --tls-only flag is set. Reported by (@shuhanfan).
  • Fix pgo label when applying multiple labels at once.
  • Fix pgo create pgorole so that the expression --permissions=* works.
  • The operator container will no longer panic if all Deployments are scaled to 0 without using the pgo update cluster <mycluster> --shutdown command.

postgres-operator - 4.5.0 Release Candidate 1

Published by jkatz about 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.5.0 on September XX, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.5.0 release includes the following software version upgrades:

Additionally, PostgreSQL Operator 4.5.0 introduces support for the CentOS 8 and UBI 8 base container images. In addition to using the newer operating systems, this enables support for TLS 1.3 when connecting to PostgreSQL. This release also moves to building the containers using Buildah 1.14.9.

The monitoring stack for the PostgreSQL Operator has shifted to use upstream components as opposed to repackaging them. These are specified as part of the PostgreSQL Operator Installer. We have tested this release with the following versions of each component:

  • Prometheus: 2.20.0
  • Grafana: 6.7.4
  • Alertmanager: 0.21.0

PostgreSQL Operator is tested with Kubernetes 1.15 - 1.19, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), Amazon EKS, and VMware Enterprise PKS 1.3+.

Major Features

PostgreSQL Operator Monitoring

This release makes several changes to the PostgreSQL Operator Monitoring solution, notably making it much easier to set up a turnkey PostgreSQL monitoring solution with the PostgreSQL Operator using the open source pgMonitor project.

pgMonitor combines insightful queries for PostgreSQL with several proven tools for statistics collection, data visualization, and alerting to allow one to deploy a turnkey monitoring solution for PostgreSQL. The pgMonitor 4.4 release added support for Kubernetes environments, particularly with the pgnodemx extension, which allows one to get host-like information from the Kubernetes Pod in which a PostgreSQL instance is deployed.

PostgreSQL Operator 4.5 integrates with pgMonitor to take advantage of its Kubernetes support, and provides the following visualized metrics out-of-the-box:

  • Pod metrics (CPU, Memory, Disk activity)
  • PostgreSQL utilization (Database activity, database size, WAL size, replication lag)
  • Backup information, including last backup and backup size
  • Network utilization (traffic, saturation, latency)
  • Alerts (uptime et al.)

More metrics and visualizations will be added in future releases. You can further customize these to meet the needs of your environment.

PostgreSQL Operator 4.5 uses the upstream packages for Prometheus, Grafana, and Alertmanager. Those using earlier versions of monitoring provided with the PostgreSQL Operator will need to switch to those packages. The tested versions of these packages for PostgreSQL Operator 4.5 include:

  • Prometheus (2.20.0)
  • Grafana (6.7.4)
  • Alertmanager (0.21.0)

You can find out how to install PostgreSQL Operator Monitoring in the installation section:

https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/

Customizing pgBackRest via ConfigMap

pgBackRest powers the disaster recovery capabilities of PostgreSQL clusters deployed by the PostgreSQL Operator. While the PostgreSQL Operator provides many toggles to customize a pgBackRest configuration, it can be easier to do so directly using the pgBackRest configuration file format.

This release adds the ability to specify the pgBackRest configuration from either a ConfigMap or Secret by using the pgo create cluster --pgbackrest-custom-config flag, or by setting the BackrestConfig attributes in the pgcluster CRD. Setting this allows any pgBackRest resource (Pod, Job etc.) to leverage this custom configuration.

Note that some settings will be overridden by the PostgreSQL Operator, regardless of the settings in a customized pgBackRest configuration file, due to the nature of how the PostgreSQL instances managed by the Operator access pgBackRest. However, these are typically not the settings that one wants to customize.

Apply Custom Annotations to Managed Deployments

It is now possible to add custom annotations to the Deployments that the PostgreSQL Operator manages. These include:

  • PostgreSQL instances
  • pgBackRest repositories
  • pgBouncer instances

Annotations are applied on a per-cluster basis, and can be set either for all the managed Deployments within a cluster or individual Deployment groups. The annotations can be set as part of the Annotations section of the pgcluster specification.

This also introduces several flags to the pgo client that help with the management of the annotations. These flags are available on pgo create cluster and pgo update cluster commands and include:

  • --annotation - applies annotations on all managed Deployments
  • --annotation-postgres - applies annotations on all managed PostgreSQL Deployments
  • --annotation-pgbackrest - applies annotations on all managed pgBackRest Deployments
  • --annotation-pgbouncer - applies annotations on all managed pgBouncer Deployments

These flags work similarly to how one manages annotations and labels from kubectl. To add an annotation, one follows the format:

--annotation=key=value

To remove an annotation, one follows the format:

--annotation=key-

Breaking Changes

  • The crunchy-collect container, used for metrics collection, has been renamed to crunchy-postgres-exporter.
  • The backrest-restore-<fromClusterName>-to-<toPVC> pgtask has been renamed to backrest-restore-<clusterName>. Additionally, the following parameters no longer need to be specified for the pgtask:
    • pgbackrest-stanza
    • pgbackrest-db-path
    • pgbackrest-repo-path
    • pgbackrest-repo-host
    • backrest-s3-verify-tls
  • When a restore job completes, it now emits the message restored Primary created instead of restored PVC created.
  • The toPVC parameter has been removed from the restore request endpoint.
  • Restore jobs using pg_restore no longer have from-<pvcName> in their names.
  • The pgo-backrest-restore container has been retired.
  • The pgo load command has been removed. This also retires the pgo-load container.
  • The crunchy-prometheus and crunchy-grafana containers are now removed. Please use the corresponding upstream containers.

Features

  • The metrics collection container now has configurable resources. This can be set as part of the custom resource workflow as well as from the pgo client when using the following command-line arguments:
    • CPU resource requests:
      • pgo create cluster --exporter-cpu
      • pgo update cluster --exporter-cpu
    • CPU resource limits:
      • pgo create cluster --exporter-cpu-limit
      • pgo update cluster --exporter-cpu-limit
    • Memory resource requests:
      • pgo create cluster --exporter-memory
      • pgo update cluster --exporter-memory
    • Memory resource limits:
      • pgo create cluster --exporter-memory-limit
      • pgo update cluster --exporter-memory-limit
  • Support for TLS 1.3 connections to PostgreSQL when using the UBI 8 and CentOS 8 containers
  • Added support for the pgnodemx extension which makes container-level metrics (CPU, memory, storage utilization) available via a PostgreSQL-based interface.

Changes

  • The PostgreSQL Operator now supports the default storage class that is available within a Kubernetes cluster. The installers are updated to use the default storage class by default.
  • The pgo restore methodology is changed to mirror the approach taken by pgo create cluster --restore-from that was introduced in the previous release. While pgo restore will still perform a "restore in-place", it will now take the following actions:
    • Any existing persistent volume claims (PVCs) in the cluster are removed.
    • New PVCs are initialized and the data from the PostgreSQL cluster is restored based on the parameters specified in pgo restore.
    • Any customizations for the cluster (e.g. custom PostgreSQL configuration) will be available.
    • This also fixes several bugs that were reported with the pgo restore functionality, some of which are captured further down in these release notes.
  • Connections to pgBouncer can now be passed along to the default postgres database. If you have a pre-existing pgBouncer Deployment, the most convenient way to access this functionality is to redeploy pgBouncer for that PostgreSQL cluster (pgo delete pgbouncer + pgo create pgbouncer). Suggested by (@lgarcia11).
  • The Downward API is now available to PostgreSQL instances.
  • The pgBouncer pgbouncer.ini and pg_hba.conf have been moved from the pgBouncer Secret to a ConfigMap whose name follows the pattern <clusterName>-pgbouncer-cm. These are mounted as part of a projected volume in conjunction with the current pgBouncer Secret.
  • The pgo df command will round values over 1000 up to the next unit type, e.g. 1GiB instead of 1024MiB.

Fixes

  • Ensure that a PostgreSQL cluster recreated from a PVC with existing data applies any custom PostgreSQL configuration settings that are specified.
  • Fixed issues with PostgreSQL replica Pods not becoming ready after running pgo restore. This fix is a result of the change in methodology for how a restore occurs.
  • The pgo scaledown command now allows for the removal of replicas that are not actively running.
  • The pgo scaledown --query command now shows replicas that may not be in an active state.
  • The pgBackRest URI style defaults to host if it is not set.
  • pgBackRest commands can now be executed even if there are multiple pgBackRest Pods available in a Deployment, so long as there is only one "running" pgBackRest Pod. Reported by Rubin Simons (@rubin55).
  • Ensure pgBackRest S3 Secrets can be upgraded from PostgreSQL Operator 4.3.
  • Ensure pgBouncer Port is derived from the cluster's port, not the Operator configuration defaults.
  • External WAL PVCs are only removed for the replica they are targeted for on a scaledown. Reported by (@dakine1111).
  • Return an error, instead of timing out, if a cluster is not found when using pgo df.
  • pgBadger now has a default memory limit of 64Mi, which should help avoid a visit from the OOM killer.
  • The Postgres Exporter now works if it is deployed in a TLS-only environment, i.e. the --tls-only flag is set. Reported by (@shuhanfan).
  • Fix pgo label when applying multiple labels at once.
  • Fix pgo create pgorole so that the expression --permissions=* works.
  • The operator container will no longer panic if all Deployments are scaled to 0 without using the pgo update cluster <mycluster> --shutdown command.

postgres-operator - 4.5.0 Beta 1

Published by jkatz about 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.5.0 on September XX, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.5.0 release includes the following software version upgrades:

Additionally, PostgreSQL Operator 4.5.0 introduces support for the CentOS 8 and UBI 8 base container images. In addition to using the newer operating systems, this enables support for TLS 1.3 when connecting to PostgreSQL. This release also moves to building the containers using Buildah 1.14.9.

The monitoring stack for the PostgreSQL Operator has shifted to use upstream components as opposed to repackaging them. These are specified as part of the PostgreSQL Operator Installer. We have tested this release with the following versions of each component:

  • Prometheus: 2.20.0
  • Grafana: 6.7.4
  • Alertmanager: 0.21.0

PostgreSQL Operator is tested with Kubernetes 1.15 - 1.18, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Major Features

PostgreSQL Operator Monitoring

This release makes several changes to the PostgreSQL Operator Monitoring solution, notably making it much easier to set up a turnkey PostgreSQL monitoring solution with the PostgreSQL Operator using the open source pgMonitor project.

pgMonitor combines insightful queries for PostgreSQL with several proven tools for statistics collection, data visualization, and alerting to allow one to deploy a turnkey monitoring solution for PostgreSQL. The pgMonitor 4.4 release added support for Kubernetes environments, particularly with the pgnodemx extension, which allows one to get host-like information from the Kubernetes Pod in which a PostgreSQL instance is deployed.

PostgreSQL Operator 4.5 integrates with pgMonitor to take advantage of its Kubernetes support, and provides the following visualized metrics out-of-the-box:

  • Pod metrics (CPU, Memory, Disk activity)
  • PostgreSQL utilization (Database activity, database size, WAL size, replication lag)
  • Backup information, including last backup and backup size
  • Network utilization (traffic, saturation, latency)
  • Alerts (uptime et al.)

More metrics and visualizations will be added in future releases. You can further customize these to meet the needs of your environment.

PostgreSQL Operator 4.5 uses the upstream packages for Prometheus, Grafana, and Alertmanager. Those using earlier versions of monitoring provided with the PostgreSQL Operator will need to switch to those packages. The tested versions of these packages for PostgreSQL Operator 4.5 include:

  • Prometheus (2.20.0)
  • Grafana (6.7.4)
  • Alertmanager (0.21.0)

You can find out how to install PostgreSQL Operator Monitoring in the installation section:

https://access.crunchydata.com/documentation/postgres-operator/latest/latest/installation/metrics/

Customizing pgBackRest via ConfigMap

pgBackRest powers the disaster recovery capabilities of PostgreSQL clusters deployed by the PostgreSQL Operator. While the PostgreSQL Operator provides many toggles to customize a pgBackRest configuration, it can be easier to do so directly using the pgBackRest configuration file format.

This release adds the ability to specify the pgBackRest configuration from either a ConfigMap or Secret by using the pgo create cluster --pgbackrest-custom-config flag, or by setting the BackrestConfig attributes in the pgcluster CRD. Setting this allows any pgBackRest resource (Pod, Job etc.) to leverage this custom configuration.

Note that some settings will be overridden by the PostgreSQL Operator, regardless of the settings in a customized pgBackRest configuration file, due to the nature of how the PostgreSQL instances managed by the Operator access pgBackRest. However, these are typically not the settings that one wants to customize.

Apply Custom Annotations to Managed Deployments

It is now possible to add custom annotations to the Deployments that the PostgreSQL Operator manages. These include:

  • PostgreSQL instances
  • pgBackRest repositories
  • pgBouncer instances

Annotations are applied on a per-cluster basis, and can be set either for all the managed Deployments within a cluster or individual Deployment groups. The annotations can be set as part of the Annotations section of the pgcluster specification.

This also introduces several flags to the pgo client that help with the management of the annotations. These flags are available on pgo create cluster and pgo update cluster commands and include:

  • --annotation - applies annotations on all managed Deployments
  • --annotation-postgres - applies annotations on all managed PostgreSQL Deployments
  • --annotation-pgbackrest - applies annotations on all managed pgBackRest Deployments
  • --annotation-pgbouncer - applies annotations on all managed pgBouncer Deployments

These flags work similarly to how one manages annotations and labels from kubectl. To add an annotation, one follows the format:

--annotation=key=value

To remove an annotation, one follows the format:

--annotation=key-

Breaking Changes

  • The crunchy-collect container, used for metrics collection, has been renamed to crunchy-postgres-exporter.
  • The backrest-restore-<fromClusterName>-to-<toPVC> pgtask has been renamed to backrest-restore-<clusterName>. Additionally, the following parameters no longer need to be specified for the pgtask:
    • pgbackrest-stanza
    • pgbackrest-db-path
    • pgbackrest-repo-path
    • pgbackrest-repo-host
    • backrest-s3-verify-tls
  • When a restore job completes, it now emits the message restored Primary created instead of restored PVC created.
  • The toPVC parameter has been removed from the restore request endpoint.
  • Restore jobs using pg_restore no longer have from-<pvcName> in their names.
  • The pgo-backrest-restore container has been retired.
  • The pgo load command has been removed. This also retires the pgo-load container.
  • The crunchy-prometheus and crunchy-grafana containers are now removed. Please use the corresponding upstream containers.

Features

  • The metrics collection container now has configurable resources. This can be set as part of the custom resource workflow as well as from the pgo client when using the following command-line arguments:
    • CPU resource requests:
      • pgo create cluster --exporter-cpu
      • pgo update cluster --exporter-cpu
    • CPU resource limits:
      • pgo create cluster --exporter-cpu-limit
      • pgo update cluster --exporter-cpu-limit
    • Memory resource requests:
      • pgo create cluster --exporter-memory
      • pgo update cluster --exporter-memory
    • Memory resource limits:
      • pgo create cluster --exporter-memory-limit
      • pgo update cluster --exporter-memory-limit
  • Support for TLS 1.3 connections to PostgreSQL when using the UBI 8 and CentOS 8 containers
  • Added support for the pgnodemx extension which makes container-level metrics (CPU, memory, storage utilization) available via a PostgreSQL-based interface.

Changes

  • The pgo restore methodology has changed to mirror the approach taken by pgo create cluster --restore-from, which was introduced in the previous release. While pgo restore still performs a "restore in place", it now takes the following actions:
    • Any existing persistent volume claims (PVCs) in the cluster are removed.
    • New PVCs are initialized and the data from the PostgreSQL cluster is restored based on the parameters specified in pgo restore.
    • Any customizations for the cluster (e.g. custom PostgreSQL configuration) remain available.
    • This also fixes several bugs that were reported with the pgo restore functionality, some of which are captured further down in these release notes.
  • The Downward API is now available to PostgreSQL instances.
  • The pgo df command now rounds values over 1000 up to the next unit type, e.g. 1GiB instead of 1024MiB.
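The unit promotion behavior can be illustrated with a small, self-contained shell sketch. This is not the Operator's actual code, and the exact 1024 boundary used here is an assumption:

```shell
# Illustrative sketch only (not the Operator's implementation):
# promote a MiB value to GiB once it crosses the 1024 boundary.
to_human() {
  mib=$1
  if [ "$mib" -ge 1024 ]; then
    # report in GiB with one decimal place
    awk -v m="$mib" 'BEGIN { printf "%.1fGiB\n", m / 1024 }'
  else
    printf '%dMiB\n' "$mib"
  fi
}

to_human 512    # -> 512MiB
to_human 1024   # -> 1.0GiB
```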

Fixes

  • Ensure that if a PostgreSQL cluster is recreated from a PVC with existing data that it will apply any custom PostgreSQL configuration settings that are specified.
  • Fixed issues with PostgreSQL replica Pods not becoming ready after running pgo restore. This fix is a result of the change in methodology for how a restore occurs.
  • The pgBackRest URI style defaults to host if it is not set.
  • pgBackRest commands can now be executed even if there are multiple pgBackRest Pods available in a Deployment, so long as there is only one "running" pgBackRest Pod.
  • Ensure pgBackRest S3 Secrets can be upgraded from PostgreSQL Operator 4.3.
  • Return an error if a cluster is not found when using pgo df instead of timing out.
  • pgBadger now has a default memory limit of 64Mi, which should help avoid a visit from the OOM killer.
  • Fix pgo label when applying multiple labels at once.
  • Fix pgo create pgorole so that the expression --permissions=* works.
postgres-operator - 4.5.0 Alpha 1

Published by jkatz about 4 years ago

postgres-operator - 4.2.4

Published by jkatz about 4 years ago

Changes

  • Add the --client flag to pgo version to output the client version of pgo

Fixes

  • Ensure WAL archives are pushed to all repositories when pgBackRest is set to use both a local and an S3-based repository
postgres-operator - 4.4.1

Published by jkatz about 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.4.1 on August 18, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.4.1 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.4, 11.9, 10.14, 9.6.19, and 9.5.23

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Fixes

  • The pgBackRest URI style defaults to host if it is not set.
  • Fix pgo label when applying multiple labels at once.
  • pgBadger now has a default memory limit of 64Mi, which should help avoid a visit from the OOM killer.
  • Fix pgo create pgorole so that the expression --permissions=* works.
postgres-operator - 4.3.3

Published by jkatz about 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.3.3 on August 18, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.3.3 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.4, 11.9, 10.14, 9.6.19, and 9.5.23
  • pgBouncer is now at version 1.14.

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes

  • Perform a pg_dump from a specific database using the --database flag when using pgo backup with --backup-type=pgdump.
  • Restore a pg_dump to a specific database using the --pgdump-database flag using pgo restore when --backup-type=pgdump is specified.
  • Add the --client flag to pgo version to output the client version of pgo.
  • The PostgreSQL cluster scope is now utilized to identify and sync the ConfigMap responsible for the DCS for a PostgreSQL cluster.
  • The PGMONITOR_PASSWORD is now populated from a Secret via an environment variable. This environment variable is only set on a primary instance, as it is only needed at the time a PostgreSQL cluster is initialized.
  • Remove "Operator Start Time" from pgo status as it is more convenient and accurate to get this information from kubectl and the like, and it was not working due to RBAC privileges. (Reported by @mw-0).
  • The pgo-rmdata container no longer runs as the root user, but as daemon (UID 2).
  • Remove dependency on the expenv binary that was included in the PostgreSQL Operator release. All expenv calls were either replaced with the native envsubst program or removed.

Fixes

  • Add validation to ensure that limits for CPU/memory are greater-than-or-equal-to the requests. This applies to any command that can set a limit/request.
  • Ensure WAL archives are pushed to all repositories when pgBackRest is set to use both a local and an S3-based repository
  • Silence expected error conditions when a pgBackRest repository is being initialized.
  • Add the watch permissions to the pgo-deployer ServiceAccount.
  • Ensure client-setup.sh works when there is an existing pgo client in the install path
  • Ensure the PostgreSQL Operator can be uninstalled by adding list verb ClusterRole privileges to several Kubernetes objects.
  • Bring up the correct number of pgBouncer replicas when pgo update cluster --startup is issued.
  • Fixed issue where pgo scale would not work after pgo update cluster --shutdown and pgo update cluster --startup were run.
  • Ensure pgo scaledown deletes external WAL volumes from the replica that is removed.
  • Fix for PostgreSQL cluster startup logic when performing a restore.
  • Do not consider non-running Pods as primary Pods when checking for multiple primaries (Reported by @djcooklup).
  • Fix a race condition that could occur when pgo upgrade was run while an HA configuration map attempted to sync. (Reported by Paul Heinen @v3nturetheworld).
  • Silence "ConfigMap not found" error messages that occurred during PostgreSQL cluster initialization, as these were not real errors.
  • Fix an issue with controller processing, which could manifest in PostgreSQL clusters not being deleted.
  • Eliminate gcc from the postgres-ha and pgadmin4 containers.
  • Fix pgo label when applying multiple labels at once.
postgres-operator - 4.4.1 Release Candidate 1

Published by jkatz about 4 years ago

postgres-operator - 4.3.3 Release Candidate 1

Published by jkatz about 4 years ago

postgres-operator - 4.4.0

Published by jkatz over 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.4.0 on July 17, 2020. Instructions for installing the Postgres Operator are here:

https://www.crunchydata.com/developers/download-postgres/containers/postgres-operator

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.4.0 release includes the following software version upgrades:

  • PostGIS 3.0 is now supported. There is now a manual upgrade path between PostGIS containers.
  • pgRouting is now included in the PostGIS containers.
  • pgBackRest is now at version 2.27.
  • pgBouncer is now at version 1.14.

PostgreSQL Operator is tested with Kubernetes 1.15 - 1.18, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Major Features

  • Create New PostgreSQL Clusters from pgBackRest Repositories
  • Improvements to RBAC Reconciliation.
  • TLS Authentication for PostgreSQL Instances.
  • A Helm Chart is now available for deploying the PostgreSQL Operator.

Create New PostgreSQL Clusters from pgBackRest Repositories

A technique frequently used in PostgreSQL data management is to have a pgBackRest repository that can be used to create new PostgreSQL clusters. This can be helpful for a variety of purposes:

  • Creating a development or test database from a production data set
  • Performing a point-in-time-restore on a database that is separate from the primary database

and more.

This can be accomplished with the following new flags on pgo create cluster:

  • --restore-from: used to specify the name of the pgBackRest repository to restore from via the name of the PostgreSQL cluster (whether the PostgreSQL cluster is active or not).
  • --restore-opts: used to specify additional options like the ones specified to pgbackrest restore (e.g. --type and --target if performing a point-in-time-recovery).

Only one restore can be performed against a pgBackRest repository at a given time.
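Putting the two flags together, a point-in-time recovery into a new cluster might look like the following sketch. The cluster names and the recovery target timestamp are hypothetical:

```shell
# Create a new cluster "hippo-dev" from the pgBackRest repository of
# the existing cluster "hippo" (both names hypothetical), replaying
# WAL up to a point in time rather than to the end of the archive:
pgo create cluster hippo-dev \
  --restore-from=hippo \
  --restore-opts="--type=time --target='2020-07-10 10:00:00+00'"
```

Omitting --restore-opts restores to the end of the WAL archive, which is the typical way to stand up a development copy of a production data set.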

RBAC Reconciliation

PostgreSQL Operator 4.3 introduced a change that allows for the Operator to manage the role-based access controls (RBAC) based upon the Namespace Operating mode that is selected. This ensures that the PostgreSQL Operator is able to function correctly within the Namespace or Namespaces that it is permitted to access. This includes Service Accounts, Roles, and Role Bindings within a Namespace.

PostgreSQL Operator 4.4 removes the requirement of granting the PostgreSQL Operator bind and escalate privileges to reconcile its own RBAC, and further defines which RBAC is specifically required to use the PostgreSQL Operator (i.e. the removal of wildcard * privileges). The permissions that the PostgreSQL Operator requires to perform the reconciliation are assigned when it is deployed and are a function of which NAMESPACE_MODE is selected (dynamic, readonly, or disabled).

This change renames the DYNAMIC_RBAC parameter in the installer to RECONCILE_RBAC, which is set to true by default.

For more information on how RBAC reconciliation works, please visit the RBAC reconciliation documentation.

TLS Authentication for PostgreSQL Instances

Certificate-based authentication is a powerful PostgreSQL feature that allows a PostgreSQL client to authenticate using a TLS certificate. While there are a variety of ways this can be set up, we can at least create a standardized way of enabling the replication connection to authenticate with a certificate, as we do have a known certificate authority.

PostgreSQL Operator 4.4 introduces the --replication-tls-secret flag on the pgo create cluster command. If this flag is specified along with its prerequisites (--server-tls-secret and --server-ca-secret), the replication account ("primaryuser") is configured to use certificate-based authentication. Combine with --tls-only for powerful results.

Note that the common name (CN) on the certificate MUST be "primaryuser"; otherwise, one must specify a mapping in a pg_ident configuration block to map to the "primaryuser" account.

When mounted to the container, the connection sslmode that the replication user uses is set to verify-ca by default. We can make that guarantee based on the certificate authority that is being mounted. Using verify-full would cause the Operator to make assumptions about the cluster that we cannot make, and as such a custom pg_ident configuration block is needed for that. However, using verify-full allows for mutual authentication between primary and replica.
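Putting the flags together, a certificate-authenticated replication setup might look like the following. The Secret names are hypothetical; each must exist in the cluster's namespace beforehand:

```shell
# Hypothetical TLS Secrets: the server and replication Secrets are
# standard Kubernetes TLS Secrets (tls.crt / tls.key), and the CN on
# the replication certificate must be "primaryuser".
pgo create cluster hippo \
  --server-tls-secret=hippo-tls-keypair \
  --server-ca-secret=hippo-tls-ca \
  --replication-tls-secret=hippo-repl-tls-keypair \
  --tls-only
```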

Breaking Changes

  • The parameter to set the RBAC reconciliation settings is renamed to RECONCILE_RBAC (from DYNAMIC_RBAC).

Features

  • Added support for using the URI path style feature of pgBackRest. This includes:
    • Adding the BackrestS3URIStyle configuration parameter to the PostgreSQL Operator ConfigMap (pgo.yaml), which accepts the values of host or path.
    • Adding the --pgbackrest-s3-uri-style flag to pgo create cluster, which accepts values of host or path.
  • Added support to disable TLS verification when connecting to a pgBackRest repository. This includes:
    • Adding the BackrestS3VerifyTLS configuration parameter to the PostgreSQL Operator ConfigMap (pgo.yaml). Defaults to true.
    • Adding the --pgbackrest-s3-verify-tls flag to pgo create cluster, which accepts values of true or false.
  • Perform a pg_dump from a specific database using the --database flag when using pgo backup with --backup-type=pgdump.
  • Restore a pg_dump to a specific database using the --pgdump-database flag using pgo restore when --backup-type=pgdump is specified.
  • Allow for support of authentication parameters in the pgha-config (e.g. sslmode). See the documentation for words of caution on using these.
  • Add the --client flag to pgo version to output the client version of pgo.
  • A Helm Chart using Helm v3 is now available.
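For example, the two new pgBackRest S3 settings can be supplied at cluster creation. The cluster name is hypothetical, bucket and endpoint details are omitted, and the --pgbackrest-storage-type flag is assumed here as the usual way to select S3 storage:

```shell
# Sketch: create a cluster whose pgBackRest S3 repository uses
# path-style URIs and skips TLS verification (e.g. for a local
# object store with a self-signed certificate).
pgo create cluster hippo \
  --pgbackrest-storage-type=s3 \
  --pgbackrest-s3-uri-style=path \
  --pgbackrest-s3-verify-tls=false
```

The same defaults can instead be set operator-wide via the BackrestS3URIStyle and BackrestS3VerifyTLS parameters in pgo.yaml.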

Changes

  • pgo clone is now deprecated. For a better cloning experience, please use pgo create cluster --restore-from.
  • The PostgreSQL cluster scope is now utilized to identify and sync the ConfigMap responsible for the DCS for a PostgreSQL cluster.
  • The PGMONITOR_PASSWORD is now populated from a Secret via an environment variable. This environment variable is only set on a primary instance, as it is only needed at the time a PostgreSQL cluster is initialized.
  • Remove "Operator Start Time" from pgo status as it is more convenient and accurate to get this information from kubectl and the like, and it was not working due to RBAC privileges. (Reported by @mw-0).
  • Removed unused pgcluster attributes PrimaryHost and SecretFrom.
  • The pgo-rmdata container no longer runs as the root user, but as daemon (UID 2).
  • Remove dependency on the expenv binary that was included in the PostgreSQL Operator release. All expenv calls were either replaced with the native envsubst program or removed.

Fixes

  • Add validation to ensure that limits for CPU/memory are greater-than-or-equal-to the requests. This applies to any command that can set a limit/request.
  • Ensure PVC capacities are being accurately reported when using pgo show cluster
  • Ensure WAL archives are pushed to all repositories when pgBackRest is set to use both a local and an S3-based repository
  • Silence expected error conditions when a pgBackRest repository is being initialized.
  • Deployments with pgo-deployer using the default file with hostpathstorage will now successfully deploy PostgreSQL clusters without any adjustments.
  • Add the watch permissions to the pgo-deployer ServiceAccount.
  • Ensure the PostgreSQL Operator can be uninstalled by adding list verb ClusterRole privileges to several Kubernetes objects.
  • Ensure client-setup.sh executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
  • Ensure client-setup.sh works when there is an existing pgo client in the install path.
  • Update the documentation to properly name CCP_IMAGE_PULL_SECRET_MANIFEST and PGO_IMAGE_PULL_SECRET_MANIFEST in the pgo-deployer configuration.
  • Bring up the correct number of pgBouncer replicas when pgo update cluster --startup is issued.
  • Fixed issue where pgo scale would not work after pgo update cluster --shutdown and pgo update cluster --startup were run.
  • Ensure pgo scaledown deletes external WAL volumes from the replica that is removed.
  • Fix for PostgreSQL cluster startup logic when performing a restore.
  • Several fixes for selecting default storage configurations and sizes when using the pgo-deployer container. These include #1, #4, and #8.
  • Do not consider non-running Pods as primary Pods when checking for multiple primaries (Reported by @djcooklup).
  • Fix a race condition that could occur when pgo upgrade was run while an HA configuration map attempted to sync. (Reported by Paul Heinen @v3nturetheworld).
  • The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container.
  • Silence "ConfigMap not found" error messages that occurred during PostgreSQL cluster initialization, as these were not real errors.
  • Fix an issue with controller processing, which could manifest in PostgreSQL clusters not being deleted.
  • Eliminate gcc from the postgres-ha and pgadmin4 containers.
postgres-operator - 4.4.0 Release Candidate 1

Published by jkatz over 4 years ago

postgres-operator - 4.4.0 Beta 2

Published by jkatz over 4 years ago

Crunchy Data is pleased to announce the release of PostgreSQL Operator 4.4.0 Beta 2. We encourage you to download it and try it out.

Changes Since Beta 1:

Breaking Changes

  • pgBackRest is updated to 2.27

Changes

  • Added a Helm Chart for installing the metrics stack.
  • Certain artifacts that would show up in pgo show cluster (e.g. a repo sync pod from pgo clone or creating a cluster from a pgBackRest repository) no longer show up.
  • Removed unused pgcluster attributes PrimaryHost and SecretFrom.

Fixes

  • Ensure the new pgBackRest S3 URI path style and the disable TLS verification settings work when using both local and S3 storage.
  • Reset the primary instance after a manual restart (i.e. pgo restart)
  • Update version label for the pgo client when using the Ansible installer to ensure that it can actually download the installer.
postgres-operator - 4.4.0 Beta 1

Published by andrewlecuyer over 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.4.0-beta.1 on July 2, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.4.0 release includes the following software version upgrades:

  • PostGIS 3.0 is now supported. There is now a manual upgrade path between PostGIS containers.
  • pgBackRest is now at version 2.27

PostgreSQL Operator is tested with Kubernetes 1.15 - 1.18, OpenShift 3.11+, OpenShift 4.4+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Major Features

  • Create New PostgreSQL Clusters from pgBackRest Repositories
  • Improvements to RBAC Reconciliation.
  • TLS Authentication for PostgreSQL Instances.
  • A Helm Chart is now available for deploying the PostgreSQL Operator.

Create New PostgreSQL Clusters from pgBackRest Repositories

A technique frequently used in PostgreSQL data management is to have a pgBackRest repository that can be used to create new PostgreSQL clusters. This can be helpful for a variety of purposes:

  • Creating a development or test database from a production data set
  • Performing a point-in-time-restore on a database that is separate from the primary database

and more.

This can be accomplished with the following new flags on pgo create cluster:

  • --restore-from: used to specify the name of the pgBackRest repository to restore from via the name of the PostgreSQL cluster (whether the PostgreSQL cluster is active or not).
  • --restore-opts: used to specify additional options like the ones specified to pgbackrest restore (e.g. --type and --target if performing a point-in-time-recovery).

Only one restore can be performed against a pgBackRest repository at a given time.

RBAC Reconciliation

PostgreSQL Operator 4.3 introduced a change that allows for the Operator to manage the role-based access controls (RBAC) based upon the Namespace Operating mode that is selected. This ensures that the PostgreSQL Operator is able to function correctly within the Namespace or Namespaces that it is permitted to access. This includes Service Accounts, Roles, and Role Bindings within a Namespace.

PostgreSQL Operator 4.4 removes the requirement of granting the PostgreSQL Operator bind and escalate privileges to reconcile its own RBAC, and further defines which RBAC is specifically required to use the PostgreSQL Operator (i.e. the removal of wildcard * privileges). The permissions that the PostgreSQL Operator requires to perform the reconciliation are assigned when it is deployed and are a function of which NAMESPACE_MODE is selected (dynamic, readonly, or disabled).

This change renames the DYNAMIC_RBAC parameter in the installer to RECONCILE_RBAC, which is set to true by default.

For more information on how RBAC reconciliation works, please visit the RBAC reconciliation documentation.

TLS Authentication for PostgreSQL Instances

Certificate-based authentication is a powerful PostgreSQL feature that allows a PostgreSQL client to authenticate using a TLS certificate. While there are a variety of ways this can be set up, we can at least create a standardized way of enabling the replication connection to authenticate with a certificate, as we do have a known certificate authority.

PostgreSQL Operator 4.4 introduces the --replication-tls-secret flag on the pgo create cluster command. If this flag is specified along with its prerequisites (--server-tls-secret and --server-ca-secret), the replication account ("primaryuser") is configured to use certificate-based authentication. Combine with --tls-only for powerful results.

Note that the common name (CN) on the certificate MUST be "primaryuser"; otherwise, one must specify a mapping in a pg_ident configuration block to map to the "primaryuser" account.

When mounted to the container, the connection sslmode that the replication user uses is set to verify-ca by default. We can make that guarantee based on the certificate authority that is being mounted. Using verify-full would cause the Operator to make assumptions about the cluster that we cannot make, and as such a custom pg_ident configuration block is needed for that. However, using verify-full allows for mutual authentication between primary and replica.

Breaking Changes

  • The parameter to set the RBAC reconciliation settings is renamed to RECONCILE_RBAC (from DYNAMIC_RBAC).

Features

  • Added support for using the URI path style feature of pgBackRest. This includes:
    • Adding the BackrestS3URIStyle configuration parameter to the PostgreSQL Operator ConfigMap (pgo.yaml), which accepts the values of host or path.
    • Adding the --pgbackrest-s3-uri-style flag to pgo create cluster, which accepts values of host or path.
  • Added support to disable TLS verification when connecting to a pgBackRest repository. This includes:
    • Adding the BackrestS3VerifyTLS configuration parameter to the PostgreSQL Operator ConfigMap (pgo.yaml). Defaults to true.
    • Adding the --pgbackrest-s3-verify-tls flag to pgo create cluster, which accepts values of true or false.
  • Perform a pg_dump from a specific database using the --database flag when using pgo backup with --backup-type=pgdump.
  • Restore a pg_dump to a specific database using the --pgdump-database flag using pgo restore when --backup-type=pgdump is specified.
  • Allow for support of authentication parameters in the pgha-config (e.g. sslmode). See the documentation for words of caution on using these.
  • Add the --client flag to pgo version to output the client version of pgo.
  • A Helm Chart using Helm v3 is now available.

Changes

  • The PostgreSQL cluster scope is now utilized to identify and sync the ConfigMap responsible for the DCS for a PostgreSQL cluster.
  • The PGMONITOR_PASSWORD is now populated from a Secret via an environment variable. This environment variable is only set on a primary instance, as it is only needed at the time a PostgreSQL cluster is initialized.
  • Remove "Operator Start Time" from pgo status as it is more convenient and accurate to get this information from kubectl and the like, and it was not working due to RBAC privileges. (Reported by @mw-0).
  • Remove dependency on the expenv binary that was included in the PostgreSQL Operator release. All expenv calls were either replaced with the native envsubst program or removed.

Fixes

  • Add validation to ensure that limits for CPU/memory are greater-than-or-equal-to the requests. This applies to any command that can set a limit/request.
  • Ensure PVC capacities are being accurately reported when using pgo show cluster
  • Ensure WAL archives are pushed to all repositories when pgBackRest is set to use both a local and an S3-based repository
  • Silence expected error conditions when a pgBackRest repository is being initialized.
  • Deployments with pgo-deployer using the default file with hostpathstorage will now successfully deploy PostgreSQL clusters without any adjustments.
  • Ensure the PostgreSQL Operator can be uninstalled by adding list verb ClusterRole privileges to several Kubernetes objects.
  • Ensure client-setup.sh executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
  • Update the documentation to properly name CCP_IMAGE_PULL_SECRET_MANIFEST and PGO_IMAGE_PULL_SECRET_MANIFEST in the pgo-deployer configuration.
  • Several fixes for selecting default storage configurations and sizes when using the pgo-deployer container. These include #1, #4, and #8.
  • Do not consider non-running Pods as primary Pods when checking for multiple primaries (Reported by @djcooklup).
  • The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container.
  • Fix an issue with controller processing, which could manifest in PostgreSQL clusters not being deleted.
  • Eliminate gcc from the postgres-ha and pgadmin4 containers.
postgres-operator - 4.3.2

Published by jkatz over 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.3.2 on June 3, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

Version 4.3.2 of the PostgreSQL Operator contains bug fixes to the installer container and changes to how CPU/memory requests and limits can be specified.

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes

Resource Limit Flags

PostgreSQL Operator 4.3.0 introduced some new options to tune the resource requests for PostgreSQL instances under management and other associated deployments, including pgBackRest and pgBouncer. Based on our experience running PostgreSQL in Kubernetes, we had heavily restricted how the limits on the Pods could be set, tying them to be the same as the requests.

Feedback from a variety of sources showed that this caused more issues than it helped. As such, we decided to introduce a breaking change into a patch release: the --enable-*-limit and --disable-*-limit series of flags are removed and replaced with flags that allow you to explicitly set CPU and memory limits.

This release introduces several new flags to various commands, including:

  • pgo create cluster --cpu-limit
  • pgo create cluster --memory-limit
  • pgo create cluster --pgbackrest-cpu-limit
  • pgo create cluster --pgbackrest-memory-limit
  • pgo create cluster --pgbouncer-cpu-limit
  • pgo create cluster --pgbouncer-memory-limit
  • pgo update cluster --cpu-limit
  • pgo update cluster --memory-limit
  • pgo update cluster --pgbackrest-cpu-limit
  • pgo update cluster --pgbackrest-memory-limit
  • pgo create pgbouncer --cpu-limit
  • pgo create pgbouncer --memory-limit
  • pgo update pgbouncer --cpu-limit
  • pgo update pgbouncer --memory-limit

Additionally, these values can be modified directly in a pgcluster Custom Resource and the PostgreSQL Operator will react and make the modifications.
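For instance, limits can be set at creation time or adjusted afterward. The cluster name and values are hypothetical, and the --cpu/--memory request flags shown alongside the new limit flags are assumed from the earlier 4.3.0 resource-request work:

```shell
# Hypothetical cluster "hippo": set requests (assumed flags) and the
# new, separately controlled limits at creation time.
pgo create cluster hippo \
  --cpu=500m --cpu-limit=1000m \
  --memory=512Mi --memory-limit=1Gi

# Raise only the memory limit later on the running cluster:
pgo update cluster hippo --memory-limit=2Gi
```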

Other Changes

  • The pgo-deployer container can now run using an arbitrary UID.
  • For deployments of the PostgreSQL Operator using the pgo-deployer container to OpenShift 3.11 environments, a new template YAML file, postgresql-operator-ocp311.yml is provided. This YAML file requires that the pgo-deployer is run with cluster-admin role for OpenShift 3.11 environments due to the lack of support of the escalate RBAC verb. Other environments (e.g. Kubernetes, OpenShift 4+) still do not require cluster-admin.
  • Allow for the resumption of the pgo client download if the client-setup.sh script is interrupted. Contributed by Itay Grudev (@itay-grudev).

Fixes

  • The pgo-deployer container now assigns the required Service Account all the appropriate get RBAC privileges via the postgres-operator.yml file that it needs to properly install. This allows the install functionality to properly work across multiple runs.
  • For OpenShift deployments, the pgo-deployer container leverages version 4.4 of the oc client.
  • Use numeric UIDs for users in the PostgreSQL Operator management containers to support MustRunAsNonRoot Pod Security Policies and the like. Reported by Olivier Beyler (@obeyler).
postgres-operator - 4.3.2 RC 1

Published by jkatz over 4 years ago

postgres-operator - 4.3.1

Published by jkatz over 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.3.1 on May 21, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.3.1 release includes the following software version upgrades:

  • The PostgreSQL containers now use versions 12.3, 11.8, 10.13, 9.6.18, and 9.5.22

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes

Initial Support for SCRAM

SCRAM is a password authentication method in PostgreSQL that has been available since PostgreSQL 10 and is considered to be superior to the md5 authentication method. The PostgreSQL Operator now introduces support for SCRAM on the pgo create user and pgo update user commands by means of the --password-type flag. The following values for --password-type will select the following authentication methods:

  • --password-type="", --password-type="md5" => md5
  • --password-type="scram", --password-type="scram-sha-256" => SCRAM-SHA-256

In turn, the PostgreSQL Operator will hash the passwords based on the chosen method and store the computed hash in PostgreSQL.
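For example, a user can be created with a SCRAM-stored password and later rotated without falling back to md5 (the cluster and user names are hypothetical):

```shell
# Create a user whose password is stored as a SCRAM-SHA-256 verifier:
pgo create user hippo --username=app --password-type=scram-sha-256

# When rotating the password, pass the flag again so the stored
# verifier stays SCRAM rather than reverting to md5:
pgo update user hippo --username=app --rotate-password \
  --password-type=scram-sha-256
```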

When using SCRAM support, it is important to note the following observations and limitations:

  • When using one of the password modification commands on pgo update user (e.g. --password, --rotate-password, --expires) and you want the stored password to remain in SCRAM format, it is necessary to specify the --password-type=scram-sha-256 directive.
  • SCRAM does not work with the current pgBouncer integration with the PostgreSQL Operator. pgBouncer presently supports only one password-based authentication type at a time. Additionally, to enable support for SCRAM, pgBouncer would require a list of plaintext passwords to be stored in a file that is accessible to it. Future work can evaluate how to leverage SCRAM support with pgBouncer.

pgo restart and pgo reload

This release introduces the pgo restart command, which allows you to perform a PostgreSQL restart on one or more instances within a PostgreSQL cluster.

You can restart all instances at the same time using the following command:

pgo restart hippo

or specify a specific instance to restart using the --target flag (which follows a similar behavior to the --target flag on pgo scaledown and pgo failover):

pgo restart hippo --target=hippo-abcd

The restart itself is performed by calling the Patroni restart REST endpoint on the specific instance (primary or replica) being restarted.

As with the pgo failover and pgo scaledown commands it is also possible to specify the --query flag to query instances available for restart:

pgo restart mycluster --query

With the new pgo restart command, using the --query flag with the pgo failover and pgo scaledown commands now includes the PENDING RESTART information, which is returned alongside any replication information.

This release also allows the pgo reload command to properly reload all instances (i.e. the primary and all replicas) within the cluster.

Dynamic Namespace Mode and Older Kubernetes Versions

The dynamic namespace mode (e.g. pgo create namespace + pgo delete namespace) provides the ability to create and remove Kubernetes namespaces and automatically add them into the purview of the PostgreSQL Operator. Through the course of fixing usability issues with the other namespace modes (readonly, disabled), a change needed to be introduced that broke compatibility with Kubernetes 1.12 and earlier.

The PostgreSQL Operator still supports managing PostgreSQL Deployments across multiple namespaces in Kubernetes 1.12 and earlier, but only with readonly mode. In readonly mode, a cluster administrator needs to create the namespace and the RBAC needed to run the PostgreSQL Operator in that namespace. However, it is now possible to define the RBAC required for the PostgreSQL Operator to manage clusters in a namespace via a ServiceAccount, as described in the Namespace section of the documentation.

This usability change allows one to add namespaces to the PostgreSQL Operator's purview (or deploy the PostgreSQL Operator within a namespace) and automatically set up the appropriate RBAC for the PostgreSQL Operator to operate correctly.

Other Changes

  • The RBAC required for deploying the PostgreSQL Operator is now decomposed into the exact privileges that are needed. This removes the need for requiring a cluster-admin privilege for deploying the PostgreSQL Operator. Reported by (@obeyler).
  • With the disabled and readonly namespace modes, the PostgreSQL Operator will now dynamically create the required RBAC when a new namespace is added if that namespace has the RBAC defined in local-namespace-rbac.yaml. This occurs when PGO_DYNAMIC_NAMESPACE is set to true.
  • If the PostgreSQL Operator has permissions to manage its own RBAC within a namespace, it will now reconcile and auto-heal that RBAC as needed (e.g. if it is invalid or has been removed) to ensure it can properly interact with and manage that namespace.
  • Add default CPU and memory limits for the metrics collection and pgBadger sidecars to help deployments that wish to have a Pod QoS of Guaranteed. The metrics defaults are 100m/24Mi and the pgBadger defaults are 500m/24Mi. Reported by (@jose-joye).
  • Introduce DISABLE_FSGROUP option as part of the installation. When set to true, this does not add a FSGroup to the Pod Security Context when deploying PostgreSQL related containers or pgAdmin 4. This is helpful when deploying the PostgreSQL Operator in certain environments, such as OpenShift with a restricted Security Context Constraint. Defaults to false.
  • Remove the custom Security Context Constraint (SCC) that would be deployed with the PostgreSQL Operator, so now the PostgreSQL Operator can be deployed using default OpenShift SCCs (e.g. "restricted", though note that DISABLE_FSGROUP will need to be set to true for that). The example PostgreSQL Operator SCC is left in the examples directory for reference.
  • When PGO_DISABLE_TLS is set to true, then PGO_TLS_NO_VERIFY is set to true.
  • Some of the pgo-deployer environmental variables that did not need to be set by a user were internalized. These include ANSIBLE_CONFIG and HOME.
  • When using the pgo-deployer container to install the PostgreSQL Operator, update the default watched namespace to pgo as the example only uses this namespace.

Fixes

  • Fix for cloning a PostgreSQL cluster when the pgBackRest repository is stored in S3.
  • The pgo show namespace command now properly indicates which namespaces a user is able to access.
  • Ensure the pgo-apiserver will successfully run if PGO_DISABLE_TLS is set to true. Reported by (@zhubx007).
  • Prevent a run of pgo-deployer from failing if it detects the existence of dependent cluster-wide objects already present.
  • Deployments with pgo-deployer using the default file with hostpathstorage will now successfully deploy PostgreSQL clusters without any adjustments.
  • Ensure image pull secrets are attached to deployments of the pgo-client container.
  • Ensure client-setup.sh executes to completion if existing PostgreSQL Operator credentials exist that were created by a different installation method.
  • Update the documentation to properly name CCP_IMAGE_PULL_SECRET_MANIFEST and PGO_IMAGE_PULL_SECRET_MANIFEST in the pgo-deployer configuration.
  • Several fixes for selecting default storage configurations and sizes when using the pgo-deployer container. These include #1, #4, and #8 in the STORAGE family of variables.
  • The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container.
postgres-operator - 4.2.3

Published by jkatz over 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.2.3 on May 21, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.2.3 release includes the following software versions upgrades:

  • The PostgreSQL containers now use versions 12.3, 11.8, 10.13, 9.6.18, and 9.5.22

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Changes

  • The PostgreSQL containers now include support for the JIT compilation feature introduced in PostgreSQL 11
  • The pgBackRest stanza creation and backup jobs are retried until successful, following the Kubernetes default number of retries (6)
  • POSIX shared memory is now used for the PostgreSQL Deployments.
  • Quote identifiers for the database name and user name in bootstrap scripts for the PostgreSQL containers
  • The pgo-rmdata Job no longer calls the rm command on any data within the PVC, but rather leaves this task to the storage provisioner

Fixes

  • Fix for cloning a PostgreSQL cluster when the pgBackRest repository is stored in S3.
  • The pgo show namespace command now properly indicates which namespaces a user is able to access.
  • Ensure rsync is installed on the pgo-backrest-repo-sync UBI7 image.
  • Default the recovery action to "promote" when performing a "point-in-time-recovery" (PITR), which will ensure that a PITR process completes.
  • Report errors in a SQL policy at the time pgo apply is executed, which was the previous behavior. Reported by José Joye (@jose-joye).
  • Allow the standard PostgreSQL user created with the Operator to be able to create and manage objects within its own user schema. Reported by Nicolas HAHN (@hahnn).
  • Correctly set the default value for archive_timeout when new PostgreSQL clusters are initialized. Reported by Adrian (@adifri).
  • Allow the original primary to be removed with pgo scaledown after it is failed over.
  • The pgo-rmdata Job will not fail if a PostgreSQL cluster has not been properly initialized.
  • The failover ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted.
  • The custom setup example was updated to reflect the current state of bootstrapping the PostgreSQL container.
  • The replica Service is now properly managed based on the existence of replicas in a PostgreSQL cluster, i.e. if there are replicas, the Service exists, if not, it is removed.
postgres-operator - 4.3.0

Published by jkatz over 4 years ago

Crunchy Data announces the release of the PostgreSQL Operator 4.3.0 on May 1, 2020.

The PostgreSQL Operator is released in conjunction with the Crunchy Container Suite.

The PostgreSQL Operator 4.3.0 release includes the following software versions upgrades:

  • The PostgreSQL containers now use versions 12.2, 11.7, 10.12, 9.6.17, and 9.5.21
    • This now includes support for using the JIT compilation feature introduced in PostgreSQL 11
  • PostgreSQL containers now support PL/Python3
  • pgBackRest is now at version 2.25
  • Patroni is now at version 1.6.5
  • postgres_exporter is now at version 0.7.0
  • pgAdmin 4 is at 4.18

PostgreSQL Operator is tested with Kubernetes 1.13 - 1.18, OpenShift 3.11+, OpenShift 4.3+, Google Kubernetes Engine (GKE), and VMware Enterprise PKS 1.3+.

Major Features

Standby Clusters + Multi-Kubernetes Deployments

A key component of building database architectures that can ensure continuity of operations is having the database available across multiple data centers. In Kubernetes, this means being able to run the PostgreSQL Operator in multiple Kubernetes clusters, have PostgreSQL clusters exist in those Kubernetes clusters, and ensure that the "standby" deployment is only promoted in the event of an outage or planned switchover.

As of this release, the PostgreSQL Operator now supports standby PostgreSQL clusters that can be deployed across namespaces or in other Kubernetes or Kubernetes-enabled clusters (e.g. OpenShift). This is accomplished by leveraging the PostgreSQL Operator's support for pgBackRest and an intermediary, i.e. S3, which provides the ability for the standby cluster to read in the PostgreSQL archives and replicate the data. This allows a user to quickly promote a standby PostgreSQL cluster in the event that the primary cluster suffers downtime (e.g. a data center outage), or for planned switchovers such as Kubernetes cluster maintenance or moving a PostgreSQL workload from one data center to another.

To support standby clusters, there are several new flags available on pgo create cluster that are required to set up a new standby cluster. These include:

  • --standby: If set, creates the PostgreSQL cluster as a standby cluster.
  • --pgbackrest-repo-path: Allows the user to override the pgBackRest repository path for a cluster. While this setting can now be utilized when creating any cluster, it is typically required for the creation of standby clusters as the repository path will need to match that of the primary cluster.
  • --password-superuser: When creating a standby cluster, allows the user to specify a password for the superuser that matches the superuser account in the cluster the standby is replicating from.
  • --password-replication: When creating a standby cluster, allows the user to specify a password for the replication user that matches the replication account in the cluster the standby is replicating from.

Note that the --password flag must be used to ensure the password of the main PostgreSQL user account matches that of the primary PostgreSQL cluster, if you are using Kubernetes to manage the user's password.

For example, if you have a cluster named hippo and wanted to create a standby cluster called hippo-standby, and assuming the S3 credentials are using the defaults provided to the PostgreSQL Operator, you could execute a command similar to:

pgo create cluster hippo-standby --standby \
  --pgbackrest-repo-path=/backrestrepo/hippo-backrest-shared-repo \
  --password-superuser=superhippo \
  --password-replication=replicahippo

To shut down the primary cluster (if you can), you can execute a command similar to:

pgo update cluster hippo --shutdown

To promote the standby cluster to be able to accept write traffic, you can execute the following command:

pgo update cluster hippo-standby --promote-standby

To convert the old primary cluster into a standby cluster, you can execute the following command:

pgo update cluster hippo --enable-standby

Once the old primary is converted to a standby cluster, you can bring it online with the following command:

pgo update cluster hippo --startup

For information on the architecture and how to
set up a standby PostgreSQL cluster, please refer to the documentation.

At present, streaming replication between the primary and standby clusters is not supported, but the PostgreSQL instances within each cluster do support streaming replication.

Installation via the pgo-deployer container

Installation and upgrading have long been two of the biggest challenges of using the PostgreSQL Operator. This release makes improvements to both (upgrading is described in the next section).

For installation, we have introduced a new container called pgo-deployer. For environments that use hostpath storage (e.g. minikube), installing the PostgreSQL Operator can be as simple as:

kubectl create namespace pgo
kubectl apply -f https://raw.githubusercontent.com/CrunchyData/postgres-operator/v4.3.0/installers/kubectl/postgres-operator.yml

The pgo-deployer container can be configured by a manifest called postgres-operator.yml and provides a set of environmental variables that should be familiar from using the other installers.
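For illustration, configuring the deployer through this manifest might look like the following sketch. Only the variable name NAMESPACE_MODE comes from these release notes (it is documented below under the namespace operating modes); the surrounding container/env structure is an assumption about how the Job manifest is laid out:

```yaml
# Hypothetical excerpt from postgres-operator.yml -- structure assumed for
# illustration; only NAMESPACE_MODE is named in these notes
env:
  - name: NAMESPACE_MODE
    value: "dynamic"   # dynamic | readonly | disabled
```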

The pgo-deployer launches a Job in the namespace that the PostgreSQL Operator will be installed into and sets up the requisite Kubernetes objects: CRDs, Secrets, ConfigMaps, etc.

The pgo-deployer container can also be used to uninstall the PostgreSQL Operator. For more information, please see the installation documentation.

Automatic PostgreSQL Operator Upgrade Process

One of the biggest challenges to using a newer version of the PostgreSQL Operator was upgrading from an older version.

This release introduces the ability to automatically upgrade from an older version of the Operator (as early as 4.1.0) to the newest version (4.3.0) using the pgo upgrade command.

The pgo upgrade command follows a process similar to the manual PostgreSQL Operator upgrade process, but instead automates it.

To find out more about how to upgrade the PostgreSQL Operator, please review the upgrade documentation.

Improved Custom Configuration for PostgreSQL Clusters

The configuration for a PostgreSQL cluster managed by the PostgreSQL Operator can now be easily modified by making changes directly to the ConfigMap that is created with each PostgreSQL cluster. The ConfigMap, which follows the pattern <clusterName>-pgha-config (e.g. hippo-pgha-config for
pgo create cluster hippo), manages the user-facing configuration settings available for a PostgreSQL cluster; when modified, the settings automatically synchronize across all primaries and replicas in the PostgreSQL cluster.

Presently, the ConfigMap can be edited using the kubectl edit cm command, and future iterations will add functionality to the PostgreSQL Operator to make this process easier.

Customize PVC Size on PostgreSQL cluster Creation & Clone

The PostgreSQL Operator provides the ability to customize how large a PVC can be via the "storage config" options available in the PostgreSQL Operator configuration file (aka pgo.yaml). While these provide a baseline level of customizability, it is often important to be able to set the size of the PVC that a PostgreSQL cluster should use at cluster creation time. In other words, users should be able to choose exactly how large their PostgreSQL PVCs ought to be.

PostgreSQL Operator 4.3 introduces the ability to set the PVC sizes for the PostgreSQL cluster, the pgBackRest repository for the PostgreSQL cluster, and the PVC size for each tablespace at cluster creation time. Additionally, this behavior has been extended to the clone functionality as well, which is helpful when trying to resize a PostgreSQL cluster. Here is some information on the flags that have been added:

pgo create cluster

--pvc-size - sets the PVC size for the PostgreSQL data directory
--pgbackrest-pvc-size - sets the PVC size for the PostgreSQL pgBackRest repository

For tablespaces, one can use the pvcsize option to set the PVC size for that tablespace.

pgo clone cluster

--pvc-size - sets the PVC size for the PostgreSQL data directory for the newly created cluster
--pgbackrest-pvc-size - sets the PVC size for the PostgreSQL pgBackRest repository for the newly created cluster

Tablespaces

Tablespaces can be used to spread out PostgreSQL workloads across multiple volumes, which can be used for a variety of use cases:

  • Partitioning larger data sets
  • Putting data onto archival systems
  • Utilizing hardware (or a storage class) for a particular database object, e.g. an index

and more.

Tablespaces can be created via the pgo create cluster command using the --tablespace flag. The arguments to --tablespace can be passed in using one of several key/value pairs, including:

  • name (required) - the name of the tablespace
  • storageconfig (required) - the storage configuration to use for the tablespace
  • pvcsize - if specified, the size of the PVC. Defaults to the PVC size in the storage configuration

Each value is separated by a :, for example:

pgo create cluster hacluster --tablespace=name=ts:storageconfig=nfsstorage

All tablespaces are mounted in the /tablespaces directory. The PostgreSQL Operator manages the mount points and persistent volume claims (PVCs) for the tablespaces, and ensures they are available throughout all of the PostgreSQL lifecycle operations, including:

  • Provisioning
  • Backup & Restore
  • High-Availability, Failover, Healing
  • Clone

etc.

One additional value is added to the pgcluster CRD:

  • TablespaceMounts: a map of the name of the tablespace and its associated storage.
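As a sketch of what this map might look like in a pgcluster custom resource: the TablespaceMounts name and its map-of-tablespace-to-storage shape come from these notes, while the exact field casing, nesting, and storage keys below are assumptions for illustration only:

```yaml
# Hypothetical pgcluster excerpt -- structure assumed for illustration
spec:
  TablespaceMounts:
    ts:
      storageconfig: nfsstorage
```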

Tablespaces are automatically created in the PostgreSQL cluster. You can access them as soon as the cluster is initialized. For example, using the tablespace created above, you could create a table (named sample_table here for illustration) on the tablespace ts with the following SQL:

CREATE TABLE sample_table (id int) TABLESPACE ts;

Tablespaces can also be added to existing PostgreSQL clusters by using the pgo update cluster command. The syntax is similar to that of creating a PostgreSQL cluster with a tablespace, i.e.:

pgo update cluster hacluster --tablespace=name=ts2:storageconfig=nfsstorage

As additional volumes need to be mounted to the Deployments, this action can cause downtime, though the expectation is that the downtime is brief.

Based on usage, future work will look at making this more flexible. Dropping tablespaces can be tricky, as a tablespace must contain no objects before PostgreSQL can drop it (i.e. there is no DROP TABLESPACE .. CASCADE command).

Easy TLS-Enabled PostgreSQL Clusters

Securely connecting to PostgreSQL clusters is a typical requirement when deploying to an untrusted network, such as a public cloud. The PostgreSQL Operator makes it easy to enable TLS for PostgreSQL. To do this, one must first create two Secrets: one containing the trusted certificate authority (CA) and one containing the PostgreSQL server's TLS keypair, e.g.:

kubectl create secret generic postgresql-ca --from-file=ca.crt=/path/to/ca.crt
kubectl create secret tls hippo-tls-keypair \
  --cert=/path/to/server.crt \
  --key=/path/to/server.key

From there, one can create a PostgreSQL cluster that supports TLS with the following command:

pgo create cluster hippo-tls \
  --server-ca-secret=postgresql-ca \
  --server-tls-secret=hippo-tls-keypair

To create a PostgreSQL cluster that only accepts TLS connections and rejects any connection attempts made over an insecure channel, you can use the --tls-only flag on cluster creation, e.g.:

pgo create cluster hippo-tls \
  --tls-only \
  --server-ca-secret=postgresql-ca \
  --server-tls-secret=hippo-tls-keypair

External WAL Volume

An optimization used for improving PostgreSQL performance related to file system usage is to have the PostgreSQL write-ahead logs (WAL) written to a different mounted volume than other parts of the PostgreSQL system, such as the data directory.

To support this, the PostgreSQL Operator now supports the ability to specify an external volume for writing the PostgreSQL write-ahead log (WAL) during cluster creation, which carries through to replicas and clones. When not specified, the WAL resides within the PGDATA directory and volume, which is the present behavior.

To create a PostgreSQL cluster to use an external volume, one can use the --wal-storage-config flag at cluster creation time to select the storage configuration to use, e.g.

pgo create cluster --wal-storage-config=nfsstorage hippo

Additionally, it is possible to specify the size of the WAL storage on all newly created clusters. When in use, the size of the volume can be overridden per cluster. This is specified with the --wal-storage-size flag, i.e.

pgo create cluster --wal-storage-config=nfsstorage --wal-storage-size=10Gi hippo

This implementation does not define the WAL volume in any deployment templates because the volume name and mount path are constant.

Elimination of ClusterRole Requirement for the PostgreSQL Operator

PostgreSQL Operator 4.0 introduced the ability to manage PostgreSQL clusters across multiple Kubernetes Namespaces. PostgreSQL Operator 4.1 built on this functionality by allowing users to dynamically control which Namespaces it managed as well as the PostgreSQL clusters deployed to them. In order to leverage this feature, one must grant a ClusterRole level permission via a ServiceAccount to the PostgreSQL Operator.

There are many deployment environments that only need the PostgreSQL Operator to exist within a single namespace, in which case granting cluster-wide privileges is superfluous, and in many cases, undesirable. It should therefore be possible to deploy the PostgreSQL Operator to a single namespace without requiring a ClusterRole.

To do this, while maintaining the aforementioned Namespace functionality for those who require it, PostgreSQL Operator 4.3 introduces the ability to opt into deploying it with the minimum required ClusterRole privileges and, in turn, the ability to deploy the PostgreSQL Operator without a ClusterRole. To do so, the PostgreSQL Operator introduces the concept of a "namespace operating mode", which lets one select the type of deployment to create. The namespace mode is set at install time for the PostgreSQL Operator, and falls into one of three options:

  • dynamic: This is the default. This enables full dynamic Namespace management capabilities, in which the PostgreSQL Operator can create, delete and update any Namespaces within the Kubernetes cluster, while then also having the ability to create the Roles, Role Bindings and Service Accounts within those Namespaces for normal operations. The PostgreSQL Operator can also listen for Namespace events and create or remove controllers for various Namespaces as changes are made to Namespaces from Kubernetes and the PostgreSQL Operator's management.

  • readonly: In this mode, the PostgreSQL Operator is able to listen for namespace events within the Kubernetes cluster, and then manage controllers as Namespaces are added, updated or deleted. While this still requires a ClusterRole, the permissions mirror those of a "read-only" environment, and as such the PostgreSQL Operator is unable to create, delete or update Namespaces itself, nor create the RBAC that it requires in any of those Namespaces. Therefore, while in readonly mode, namespaces must be preconfigured with the proper RBAC, as the PostgreSQL Operator cannot create the RBAC itself.

  • disabled: Use this mode if you do not want to deploy the PostgreSQL Operator with any ClusterRole privileges, especially if you are only deploying the PostgreSQL Operator to a single namespace. This disables any Namespace management capabilities within the PostgreSQL Operator and will simply attempt to work with the target Namespaces specified during installation. If no target Namespaces are specified, then the Operator will be configured to work within the namespace in which it is deployed. As with the readonly mode, while in this mode, Namespaces must be pre-configured with the proper RBAC, since the PostgreSQL Operator cannot create the RBAC itself.

Based on the installer you use, the variables to set this mode are either named:

  • PostgreSQL Operator Installer: NAMESPACE_MODE
  • Developer Installer: PGO_NAMESPACE_MODE
  • Ansible Installer: namespace_mode

Feature Preview: pgAdmin 4 Integration + User Synchronization

pgAdmin 4 is a popular graphical user interface that lets you work with PostgreSQL databases from both a desktop or web-based client. With its ability to manage and orchestrate changes for PostgreSQL users, the PostgreSQL Operator is a natural partner to keep a pgAdmin 4 environment synchronized with a PostgreSQL environment.

This release introduces an integration with pgAdmin 4 that allows you to deploy a pgAdmin 4 environment alongside a PostgreSQL cluster and keeps the user's database credentials synchronized. You can simply log into pgAdmin 4 with your PostgreSQL username and password and immediately have access to your databases.

For example, let's say there is a PostgreSQL cluster called hippo that has a user named hippo with password datalake:

pgo create cluster hippo --username=hippo --password=datalake

After the PostgreSQL cluster becomes ready, you can create a pgAdmin 4 deployment with the pgo create pgadmin
command:

pgo create pgadmin hippo

This creates a pgAdmin 4 deployment unique to this PostgreSQL cluster and synchronizes the PostgreSQL user information into it. To access pgAdmin 4, you can set up a port-forward to the Service, which follows the pattern <clusterName>-pgadmin, to port 5050:

kubectl port-forward svc/hippo-pgadmin 5050:5050

Point your browser at http://localhost:5050 and use your database username (e.g. hippo) and password (e.g. datalake) to log in.

(Note: if your password does not appear to work, you can retry setting up the user with the pgo update user command: pgo update user hippo --password=datalake)

The pgo create user, pgo update user, and pgo delete user commands are synchronized with the pgAdmin 4 deployment. Note that if you use pgo create user without the --managed flag prior to deploying pgAdmin 4, then the user's credentials will not be synchronized to the pgAdmin 4 deployment. However, a subsequent run of pgo update user --password will synchronize the credentials with pgAdmin 4.

You can remove the pgAdmin 4 deployment with the pgo delete pgadmin command.

We have released the first version of this change under "feature preview" so you can try it out. As with all of our features, we are open to feedback on how we can continue to improve the PostgreSQL Operator.

Enhanced pgo df

pgo df provides information on the disk utilization of a PostgreSQL cluster, and previously, it was not reporting accurate numbers. The new pgo df looks at each PVC that is mounted to each PostgreSQL instance in a cluster, including the PVCs for tablespaces, and computes the overall utilization. Even better, the data is returned in a structured format for easy scraping. This implementation also leverages Golang concurrency to help compute the results quickly.

Enhanced pgBouncer Integration

The pgBouncer integration was completely rewritten to support TLS-only operations via the PostgreSQL Operator. While most of the work was internal, you should now see a much more stable pgBouncer experience.

The pgBouncer attributes in the pgclusters.crunchydata.com CRD are also declarative and any updates will be reflected by the PostgreSQL Operator.

Additionally, a few new commands were added:

  • pgo create pgbouncer --cpu and pgo create pgbouncer --memory resource request flags for setting container resources for the pgBouncer instances. For CPU, this will also set the limit.
  • pgo create pgbouncer --enable-memory-limit sets the Kubernetes resource limit for memory
  • pgo create pgbouncer --replicas sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster. The default is 1.
  • pgo show pgbouncer shows information about a pgBouncer deployment
  • pgo update pgbouncer --cpu and pgo update pgbouncer --memory resource request flags for setting container resources for the pgBouncer instances after they are deployed. For CPU, this will also set the limit.
  • pgo update pgbouncer --disable-memory-limit and pgo update pgbouncer --enable-memory-limit respectively unset and set the Kubernetes resource limit for memory
  • pgo update pgbouncer --replicas sets the number of pgBouncer Pods to deploy with a PostgreSQL cluster.
  • pgo update pgbouncer --rotate-password allows one to rotate the service
    account password for pgBouncer

Rewritten pgo User Management commands

The user management commands were rewritten to support the TLS only workflow. These commands now return additional information about a user when actions are taken. Several new flags have been added too, including the option to view all output in JSON. Other flags include:

  • pgo update user --rotate-password to automatically rotate the password
  • pgo update user --disable-login which disables the ability for a PostgreSQL user to login
  • pgo update user --enable-login which enables the ability for a PostgreSQL user to login
  • pgo update user --valid-always which sets a password to always be valid, i.e. it has no
    expiration
  • pgo show user does not show system accounts by default now, but can be made to show the system accounts by using pgo show user --show-system-accounts

A major change as well: the default password expiration is now unlimited (i.e. passwords never expire), which aligns with typical PostgreSQL workflows.

Breaking Changes

  • pgo create cluster will now set the default database name to be the name of the cluster. For example, pgo create cluster hippo would create the initial database named hippo.
  • The Database configuration parameter in pgo.yaml (db_name in the Ansible inventory) is now set to "" by default.
  • the --password/-w flag for pgo create cluster now only sets the password for the regular user account that is created, not all of the system accounts (e.g. the postgres superuser).
  • A default postgres-ha.yaml file is no longer created by the Operator for every PostgreSQL cluster.
  • "Limit" resource parameters are no longer set on the containers, in particular, the PostgreSQL container, due to undesired behavior stemming from the host machine OOM killer. Further details can be found in the original pull request.
  • Added DefaultInstanceMemory, DefaultBackrestMemory, and DefaultPgBouncerMemory options to the pgo.yaml configuration to allow for the setting of default memory requests for PostgreSQL instances, the pgBackRest repository, and pgBouncer instances respectively.
  • If unset by either the PostgreSQL Operator configuration or a one-off setting, the default memory resource requests for the following applications are:
    • PostgreSQL: The installers default to 128Mi (suitable for test environments), though the "default of last resort" is 512Mi to be consistent with the PostgreSQL default shared memory requirement
    • pgBackRest: 48Mi
    • pgBouncer: 24Mi
  • Remove the Default...ContainerResources set of parameters from the pgo.yaml configuration file.
  • The pgbackups.crunchydata.com, deprecated since 4.2.0, has now been completely removed, along with any code that interfaced with it.
  • The PreferredFailoverFeature is removed. This had not been doing anything since 4.2.0, but some of the legacy bits and configuration were still there.
  • pgo status no longer returns information about the nodes available in a Kubernetes cluster
  • Remove --series flag from pgo create cluster command. This affects API calls more than actual usage of the pgo client.
  • pgo benchmark, pgo show benchmark, pgo delete benchmark are removed. PostgreSQL benchmarks with pgbench can still be executed using the crunchy-pgbench container.
  • pgo ls is removed.
  • The API that is used by pgo create cluster now returns its contents in JSON. The output now includes information about the user that is created.
  • The API that is used by pgo show backup now returns its contents in JSON. The output view of pgo show backup remains the same.
  • Remove the PreferredFailoverNode feature, as it had already been effectively removed.
  • Remove explicit rm calls when cleaning up PostgreSQL clusters. This behavior is left to the storage provisioner that one deploys with their PostgreSQL instances.
  • Scheduled backup job names have been shortened and follow a pattern that looks like <clusterName>-<backupType>-sch-backup
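The new memory-default options listed above could be sketched in pgo.yaml as follows. The option names and default values come from these notes; their placement under a Cluster section is an assumption about the file's layout:

```yaml
# Hypothetical pgo.yaml excerpt -- placement assumed for illustration
Cluster:
  DefaultInstanceMemory: 128Mi    # PostgreSQL instances (installer default)
  DefaultBackrestMemory: 48Mi     # pgBackRest repository
  DefaultPgBouncerMemory: 24Mi    # pgBouncer instances
```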

Features

  • Several additions to pgo create cluster around PostgreSQL users and databases, including:
    • --ccp-image-prefix sets the CCPImagePrefix that specifies the image prefix for the PostgreSQL related containers that are deployed by the PostgreSQL Operator
    • --cpu flag that sets the amount of CPU to use for the PostgreSQL instances in the cluster. This also sets the limit.
    • --database / -d flag that sets the name of the initial database created.
    • --enable-memory-limit, --enable-pgbackrest-memory-limit, --enable-pgbouncer-memory-limit enable the Kubernetes memory resource limit for PostgreSQL, pgBackRest, and pgBouncer respectively
    • --memory flag that sets the amount of memory to use for the PostgreSQL instances in the cluster
    • --user / -u flag that sets the PostgreSQL username for the standard database user
    • --password-length sets the length of the password that should be generated, if --password is not set.
    • --pgbackrest-cpu flag that sets the amount of CPU to use for the pgBackRest repository
    • --pgbackrest-memory flag that sets the amount of memory to use for the pgBackRest repository
    • --pgbackrest-s3-ca-secret specifies the name of a Kubernetes Secret that contains a key (aws-s3-ca.crt) to override the default CA used for making connections to a S3 interface
    • --pgbackrest-storage-config lets one specify a different storage configuration to use for a local pgBackRest repository
    • --pgbouncer-cpu flag that sets the amount of CPU to use for the pgBouncer instances
    • --pgbouncer-memory flag that sets the amount of memory to use for the pgBouncer instances
    • --pgbouncer-replicas sets the number of pgBouncer Pods to deploy with the PostgreSQL cluster. The default is 1.
    • --pgo-image-prefix sets the PGOImagePrefix that specifies the image prefix for the PostgreSQL Operator containers that help to manage the PostgreSQL clusters
    • --show-system-accounts returns the credentials of the system accounts (e.g. the postgres superuser) along with the credentials for the standard database user
  • pgo update cluster now supports the --cpu, --disable-memory-limit, --disable-pgbackrest-memory-limit, --enable-memory-limit, --enable-pgbackrest-memory-limit, --memory, --pgbackrest-cpu, and --pgbackrest-memory flags to allow PostgreSQL instances and the pgBackRest repository to have their resources adjusted post deployment
  • Added the PodAntiAffinityPgBackRest and PodAntiAffinityPgBouncer settings to the pgo.yaml configuration file to set specific Pod anti-affinity rules for pgBackRest and pgBouncer Pods that are deployed along with PostgreSQL clusters that are managed by the Operator. The default for pgBackRest and pgBouncer is to use the value that is set in PodAntiAffinity.
  • pgo create cluster now supports the --pod-anti-affinity-pgbackrest and --pod-anti-affinity-pgbouncer flags to specifically override the pgBackRest repository and pgBouncer Pod anti-affinity rules on a specific PostgreSQL cluster deployment, overriding any values present in PodAntiAffinityPgBackRest and PodAntiAffinityPgBouncer respectively. The default for pgBackRest and pgBouncer is to use the value for pod anti-affinity that is used for the PostgreSQL instances in the cluster.
  • One can specify the "image prefix" (e.g. crunchydata) for the containers that are deployed by the PostgreSQL Operator. This adds two fields to the pgcluster CRD: CCPImagePrefix and PGOImagePrefix
  • Specify a different S3 Certificate Authority (CA) with pgo create cluster by using the --pgbackrest-s3-ca-secret flag, which refers to an existing Secret that contains a key called aws-s3-ca.crt that contains the CA. Reported by Aurelien Marie (@aurelienmarie)
  • pgo clone now supports the --enable-metrics flag, which will deploy the monitoring sidecar along with the newly cloned PostgreSQL cluster.
  • The pgBackRest repository now uses ED25519 SSH key pairs.
  • Add the --enable-autofail flag to pgo update to make it clear how the autofailover mechanism can be re-enabled for a PostgreSQL cluster.
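As a hedged illustration of how several of the new flags fit together, the sketch below creates a cluster and later adjusts its resources. The cluster name hippo and the database, user, and resource values are assumptions for the example, not values taken from this release, and the commands require a running Operator:

```shell
# Illustrative only: create a cluster using several of the new flags
pgo create cluster hippo \
  --cpu=1.0 --memory=2Gi \
  --database=appdb --user=appuser \
  --password-length=24 \
  --pgbouncer-replicas=2

# Resources can be adjusted after deployment with pgo update cluster
pgo update cluster hippo \
  --memory=4Gi --pgbackrest-memory=256Mi
```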

Changes

  • POSIX shared memory is now used for the PostgreSQL Deployments.
  • Increase the number of namespaces that can be watched by the PostgreSQL Operator.
  • The number of unsupported pgBackRest flags on the deny list has been reduced.
  • The liveness and readiness probes for a PostgreSQL cluster now reference the /opt/cpm/bin/health script
  • wal_level now defaults to logical to enable logical replication
  • archive_timeout is now a default setting in the crunchy-postgres-ha and crunchy-postgres-ha-gis containers and is set to 60 seconds
  • ArchiveTimeout, LogStatement, LogMinDurationStatement are removed from pgo.yaml, as these can be customized either via a custom postgresql.conf file or postgres-ha.yaml file
  • Quoted identifiers for the database name and user name in bootstrap scripts for the PostgreSQL containers
  • Password generation now leverages cryptographically secure random number generation and uses the full set of typeable ASCII characters
  • The node ClusterRole is no longer used
  • The names of the scheduled backups are shortened to use the pattern <clusterName>-<backupType>-sch-backup
  • The PostgreSQL Operator now logs its timestamps using RFC3339 formatting as implemented by Go
  • SSH key pairs are no longer created as part of the Operator installation process. This was a legacy behavior that had not been removed
  • The pv/create-pv-nfs.sh script has been modified to create persistent volumes with their own directories on the NFS filesystems. This better mimics production environments. The older version of the script still exists as pv/create-pv-nfs-legacy.sh
  • Load pgBackRest S3 credentials into environment variables from Kubernetes Secrets, to avoid revealing their contents in Kubernetes commands or in logs
  • Update how the pgBackRest and pgMonitor parameters are loaded into Deployment templates to no longer use JSON fragments
  • The pgo-rmdata Job no longer calls the rm command on any data within the PVC, but rather leaves this task to the storage provisioner
  • Remove the use of expenv in the add-targeted-namespace.sh script
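The password-generation change above can be sketched in shell: read bytes from a cryptographically secure source and keep only printable ASCII characters. This is an analogy under stated assumptions, not the Operator's actual implementation (which is written in Go):

```shell
# Illustrative 24-character password drawn from the printable ASCII range,
# sourced from /dev/urandom (a CSPRNG). Not PGO's actual code.
RAW="$(head -c 512 /dev/urandom | tr -dc '!-~')"   # keep printable, non-space ASCII
PASSWORD="$(printf '%s' "${RAW}" | cut -c1-24)"
echo "${PASSWORD}"
```

Reading 512 random bytes first and trimming afterwards avoids relying on a pipe being cut short mid-stream, while still leaving far more than 24 printable characters to choose from.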

Fixes

  • Ensure PostgreSQL clusters can be successfully restored via pgo restore after pgo scaledown is executed
  • Allow the original primary to be removed with pgo scaledown after it is failed over
  • The replica Service is now properly managed based on the existence of replicas in a PostgreSQL cluster, i.e. if there are replicas the Service exists; if not, it is removed
  • Report errors in a SQL policy at the time pgo apply is executed, which was the previous behavior. Reported by José Joye (@jose-joye)
  • Ensure all replicas are listed out via the --query flag in pgo scaledown and pgo failover. This now follows the pattern outlined by the Kubernetes safe random string generator
  • Default the recovery action to "promote" when performing a "point-in-time-recovery" (PITR), which will ensure that a PITR process completes
  • The stanza-create Job now waits for both the PostgreSQL cluster and the pgBackRest repository to be ready before executing
  • Remove backoffLimit from Jobs that can be retried, which is most of them. Reported by Leo Khomenko (@lkhomenk)
  • The pgo-rmdata Job will not fail if a PostgreSQL cluster has not been properly initialized
  • Fixed a separate pgo-rmdata crash related to an improper SecurityContext
  • The failover ConfigMap for a PostgreSQL cluster is now removed when the cluster is deleted
  • Allow the standard PostgreSQL user created with the Operator to be able to create and manage objects within its own user schema. Reported by Nicolas HAHN (@hahnn)
  • Honor the value of "PasswordLength" when it is set in the pgo.yaml file for password generation. The default is now set at 24
  • Do not log pgBackRest environment variables to the Kubernetes logs
  • By default, the Windows pgo client no longer uses the trusted OS certificate authority store.
  • Update the pgo-client imagePullPolicy to be IfNotPresent, which is the default for all of the managed containers across the project
  • Set UsePAM yes in the sshd_config file to fix an issue with using SSHD in newer versions of Docker
  • Only add Operator labels to a managed namespace if the namespace already exists when executing the add-targeted-namespace.sh script