PostgreSQL

PostgreSQL Server Backup

When the Amanda Enterprise Edition is configured and licensed for PostgreSQL backup, the ZMC allows you to select a PostgreSQL server to back up. When a PostgreSQL server is backed up, all databases are included in the backup, which can be either full (a copy of the data directory) or incremental (based on the PostgreSQL write-ahead logs, referred to as WALs).

Note that backing up PostgreSQL tablespaces is not supported.

Requirements for PostgreSQL Server Backup and Restore

These instructions assume you have already installed and licensed the Amanda Enterprise server and the PostgreSQL server being backed up. There are a number of additional requirements:

  • The amanda-client.conf file on the PostgreSQL server (the Amanda client) must be manually edited to include the following lines; an example with illustrative values appears after the property descriptions below:

property "PG-DATADIR" "Path_to_PSQL_Data_Dir"
property "PG-ARCHIVEDIR" "Path_to_PSQL_Archive_Dir"
property "PG-HOST" "hostname_or_directory_of_socket_file"
property "PG-TMP" "Path_to_temp_dir"
property "PSQL-PATH" "Path_to_PSQL_Binary"
property "PG-USER"
"PostgreSQL_username"
property "PG-PASSFILE" "Path_to_PSQL_Password_File"
property "PG-PASSWORD" "PSQL_Password"

Path_to_PSQL_Data_Dir

The path to the PostgreSQL data directory.

Path_to_PSQL_Archive_Dir

The path to the PostgreSQL archive directory, that is, the directory where the archive command copies WAL files and stores them between full backup runs. This directory must be writable by the user that runs the PostgreSQL processes.

Specify the location that has been configured in PostgreSQL for continuous write-ahead log (WAL) archiving (i.e., the archive_command in the PostgreSQL configuration file). Note that continuous WAL archiving must be enabled; it is not enabled by default in either PostgreSQL or PostgreSQL Plus.

hostname_or_directory_of_socket_file

Specify the hostname (localhost if that is appropriate) or the directory where a socket file is located. Entries beginning with / are interpreted as a socket file directory (just the directory, for example, /tmp, not /tmp/.s.PGSQL.5432). If a directory is used, the PostgreSQL server and Amanda backup server must reside on the same machine.

Path_to_temp_dir

Temporary directory. Must have enough capacity to temporarily store the backup data for a full backup of the PostgreSQL server. Defaults to /tmp.

Path_to_PSQL_Binary

The path to the PostgreSQL psql binary executable file.

PostgreSQL_username

The PostgreSQL database user to connect as, which must have superuser privileges.

Path_to_PSQL_Password_File

The password file (.pgpass format) to use for PostgreSQL 8.1 or greater. See http://www.postgresql.org/docs/8.1/static/libpq-pgpass.html

PSQL_Password

The PostgreSQL password. This property is deprecated as of PostgreSQL 8.1; use PG-PASSFILE instead.

For further details on application properties, see amanda-client.conf(5). For specific details on PostgreSQL agent properties, see ampgsql(8).
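
For example, the property lines in amanda-client.conf might look like the following. All of the paths, the socket directory, and the user name are illustrative values only; substitute the values that match your PostgreSQL installation. Typically only one of PG-PASSFILE or PG-PASSWORD is needed.

  property "PG-DATADIR" "/var/lib/pgsql/data"
  property "PG-ARCHIVEDIR" "/var/lib/pgsql/archive"
  property "PG-HOST" "/tmp"
  property "PG-TMP" "/tmp"
  property "PSQL-PATH" "/usr/bin/psql"
  property "PG-USER" "postgres"
  property "PG-PASSFILE" "/etc/amanda/pgpass"

The directory given for PG-ARCHIVEDIR must match the destination used by the archive_command configured in postgresql.conf. As an illustration only (the archive path is an assumption, not a requirement), WAL archiving could be set up with lines such as:

  archive_mode = on
  archive_command = 'cp %p /var/lib/pgsql/archive/%f'

Note that archive_mode exists in PostgreSQL 8.3 and later; in 8.1 and 8.2, setting archive_command by itself enables archiving.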

Configuring PostgreSQL Server Backups from the ZMC Backup What Page

Create a dedicated backup set for each PostgreSQL server you intend to back up. On the Backup What page you are prompted to select what type of object you want to back up. Choose PostgreSQL, and the following options are displayed:

backup_where_postgreSQL.png

Host Name
The name of the machine running the PostgreSQL server you intend to back up.
Data Directory
The path to the PostgreSQL data directory.
Encryption and Compression
Set these options as desired; they are described in more detail elsewhere in this documentation.
Advanced Options - Estimate
If estimates are taking too long and the databases being backed up do not change much in size from backup to backup, use the Historical Average calculated from previous backups. In most cases, the default of Reliably Accurate is appropriate.

After you have set the options, click the Add button to add the PostgreSQL server to the backup set. Click Apply Plan to Server to commit the changes; you can then configure the backup set just as you would any other by setting the options on Backup Where, Backup How, and Backup When, etc.

Configuring PostgreSQL Server Restores from the ZMC Restore What Page

Make sure that PostgreSQL is installed in the same location as when the backup was run. The database and log file locations should also match the original configuration.

Either select the desired PostgreSQL backup from one of the Reports, or go directly to the Restore What page and select a PostgreSQL backup for restore. The Explore button lets you select from the most recent backups.

When you have selected the backup object that includes the PostgreSQL server for restore, the Restore What page displays the following options: 

restore-what-postgresql (1).png

Select the databases you wish to restore. Choose All for a full restore up to the last backup.  Click Next Step when you are done, and the Restore Where options are displayed:

restore-where-postgreSQL.png

Set the restore options as desired. Note that the Destination Directory and Temporary Directory must each have enough space to hold the selected backup data. If you choose the same directory for both, make sure that the selected directory has enough space to hold two copies of the backup image. Do not specify the PostgreSQL data/cluster directory as a destination, especially if PostgreSQL is running.

After reviewing the entries, click Restore to start the restore process.

When the ZMC restore process is complete, the restored files reside in the specified destination directory on the specified host. Completing the restore is accomplished outside the ZMC, using the host operating system and PostgreSQL, as described below.

Completing the PostgreSQL Database Point-in-time Recovery

  1. If the PostgreSQL server is running, stop it using the following command:

        # /etc/init.d/pgplus_83 stop
     
  2. As a safeguard, copy the entire cluster data directory and any tablespaces from the stopped production server to a temporary location. This will require enough disk space on the system to hold two copies of the existing database. If sufficient disk space is unavailable, you should at the very least make a copy of everything in the pg_xlog subdirectory of the cluster data directory. pg_xlog may contain logs that were not archived before the system was stopped.  For example:

        # cp -rp /opt/PostgresPlus/8.3/data/ /opt/postgres-restore/

                    -- or --

        # cp -rp /opt/PostgresPlus/8.3/data/pg_xlog/ /opt/postgres-restore/
     
  3. Remove all files and subdirectories under the cluster data directory, and under the root directories of any relevant tablespaces.

        # rm -rf /opt/PostgresPlus/8.3/data/*
     
  4. Restore the database files from the backup images that the ZMC restore placed in the destination directory.

- Unpack the earliest (the base, or level 0) PostgreSQL backup image:

  # cd /opt/postgres-restore/
  # ls
  data  zmc_restore_20090327141718  zmc_restore_20090329142314 
  # tar xfv zmc_restore_20090327141718
  archive_dir.tar
  data_dir.tar

- The earliest backup image (zmc_restore_20090327141718 in this case) can be removed if space is needed.

- unpack "data_dir.tar" file into the database data directory

  # cd /opt/PostgresPlus/8.3/data/
  # tar xf /opt/postgres-restore/data_dir.tar

- Be careful that all files and directories are restored with the right ownership (the database system user, not root!) and with the right permissions. If you are using tablespaces, you should verify that the symbolic links in pg_tblspc/ were correctly restored.
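
For example, if the database system user is postgres (an assumption; substitute the actual database system account on your host), ownership and permissions on the data directory might be reset as follows:

  # chown -R postgres:postgres /opt/PostgresPlus/8.3/data/
  # chmod 700 /opt/PostgresPlus/8.3/data/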

- unpack "archive_dir.tar" file and all remaining, incremental, PostgreSQL backup images into a temporary "archive" directory owned by the database system user

  # mkdir /opt/postgres-restore/archive/
  # chown postgres:postgres /opt/postgres-restore/archive/
  # cd /opt/postgres-restore/archive/
  # tar xf /opt/postgres-restore/archive_dir.tar
  # tar xf /opt/postgres-restore/zmc_restore_20090329142314
  # ls
  00000002000000000000004A  00000002000000000000004B

- The data_dir.tar and archive_dir.tar files and all PostgreSQL backup images can be deleted if desired.

   5. Remove any files present in pg_xlog/; these came from the backup dump and are therefore probably obsolete rather than current. If you didn't archive pg_xlog/ at all, then recreate it, being careful to ensure that you re-establish it as a symbolic link if you had it set up that way before. Be sure to recreate the subdirectory pg_xlog/archive_status/ as well in this case.

# rm /opt/PostgresPlus/8.3/data/pg_xlog/*
# rm /opt/PostgresPlus/8.3/data/pg_xlog/archive_status/*
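
If pg_xlog/ was not part of the backup and must be recreated, the commands might look like the following (a sketch assuming the default layout and the postgres system user):

  # mkdir -p /opt/PostgresPlus/8.3/data/pg_xlog/archive_status
  # chown -R postgres:postgres /opt/PostgresPlus/8.3/data/pg_xlog
  # chmod 700 /opt/PostgresPlus/8.3/data/pg_xlog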

   6. If you had unarchived WAL segment files that you saved in step 2, copy them into pg_xlog/. (It is best to copy them, not move them, so that you still have the unmodified files if a problem occurs and you have to start over.)

# rm -rf /opt/PostgresPlus/8.3/data/pg_xlog
# cp -rp /opt/postgres-restore/data/pg_xlog/ /opt/PostgresPlus/8.3/data/

   7. Create a recovery command file recovery.conf in the cluster data directory (see Recovery Settings, http://www.postgresql.org/docs/8.3/s...ONFIG-SETTINGS). You might also want to temporarily modify pg_hba.conf to prevent ordinary users from connecting until you are sure the recovery has worked; a sketch of such a temporary configuration follows the commands below.

- Edit /opt/PostgresPlus/8.3/data/recovery.conf and include at least the following line, with the path to your temporary "archive" directory:

  restore_command = 'cp /opt/postgres-restore/archive/%f "%p"'
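
If you want to recover to a specific point in time rather than to the end of the archived WAL, recovery.conf can also include a recovery target; for example (the timestamp is an illustration only):

  recovery_target_time = '2009-03-29 14:23:14'
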
- Change the ownership and permissions on this file so that it is owned by the database system user and is readable and writable only by that user:

  # chown postgres:postgres /opt/PostgresPlus/8.3/data/recovery.conf
  # chmod 0600 /opt/PostgresPlus/8.3/data/recovery.conf
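
A temporary pg_hba.conf that allows only the database superuser to connect might look like the following (a sketch only; adjust the user name and authentication method to your installation, and keep a copy of the original file so it can be restored in step 9):

  # TYPE  DATABASE  USER      CIDR-ADDRESS    METHOD
  local   all       postgres                  md5
  host    all       postgres  127.0.0.1/32    md5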

   8. Start the server. The server will go into recovery mode and proceed to read through the archived WAL files it needs. Should the recovery be terminated because of an external error, the server can simply be restarted and it will continue recovery. Upon completion of the recovery process, the server will rename recovery.conf to recovery.done (to prevent accidentally re-entering recovery mode in case of a crash later) and then commence normal database operations.

# /etc/init.d/pgplus_83 start

   9. Inspect the contents of the database to ensure you have recovered to where you want to be. If not, return to step 1. If all is well, let your users back in by restoring pg_hba.conf to its normal contents.

For more background on this procedure, see the section "Recovering using a Continuous Archive Backup" in the PostgreSQL documentation.

Troubleshooting


Troubleshooting Restores