Setting up PostgreSQL warm standby / replication at Joyent

Posted by Jen Tong on June 3, 2009

As I mentioned in an earlier entry, I’m using PostgreSQL 8.3.3 at Joyent. It works great, but its documentation is not as good as that of the ubiquitous MySQL.

I required high availability but not real-time failover, so I opted for warm standby, a pseudo-built-in feature of Postgres as of version 8.3 that provides easy failover and can be automated if necessary. It’s pseudo-built-in because no coding is required, but you need to get your hands pretty dirty to get it working.

To get it working, you’ll need to do some work on the primary server so that it spits out incremental write-ahead log (WAL) files, as well as a bit more work on the warm standby server(s) to consume those logs.

You can follow these steps for both a brand new Postgres installation and an existing heavy-traffic installation. Furthermore, no downtime is required to set this up.

Assumptions

  • You're running PostgreSQL 8.3.3 (the version that came with my Joyent node)
  • You have at least two identical nodes for this purpose at Joyent
  • You have NFS space for your write ahead log files
  • You've at least scanned the official docs on this subject

Primary Setup

All we need to do on the primary server is set it up to push archived WAL files to our NFS mount. It will push WAL files out in 16 MB chunks. Since 16 MB is a lot of data to lose on a low-traffic DB like mine, I’ll be forcing it to flush every hour. This makes it a space hog, but disk space is cheap (at least at Joyent).

  1. Modify /var/pgsql/data/postgresql.conf and set the following parameters:
    archive_mode = on
    archive_command = 'cp -i %p /shared/psql_wal/%f </dev/null'
    archive_timeout = 3600
  2. Restart postgres to apply the changes
    sudo -u postgres pg_ctl restart -D /var/pgsql/data
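
Before moving on, it's worth confirming that segments actually land on the NFS mount. A minimal sketch, assuming the paths used above; `pg_switch_xlog()` is the 8.3-era function that closes out the current WAL segment so `archive_command` fires immediately instead of waiting out `archive_timeout` (the `newest_wal` helper is just an illustrative name, not part of Postgres):

```shell
# On the primary, force the current 16 MB WAL segment to be archived
# now rather than waiting for archive_timeout (8.3 spelling shown):
#   echo "SELECT pg_switch_xlog();" | psql -U postgres

# Report the most recently archived segment in a directory, so you can
# confirm fresh files are landing on the NFS mount.
newest_wal() {
    ls -t "$1" 2>/dev/null | head -n 1
}

newest_wal /shared/psql_wal/
```

If nothing shows up within a few seconds of the switch, check `archive_command` permissions on the mount before touching the standby.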

Standby Setup

I only set up one standby node, but you can have as many as you like as long as they all have access to the same NFS mount.

This is where the configuration gets a bit hairy. We’ll need to build a small, but very useful, C utility from source called pg_standby. We’ll also be doing quite a bit of configuration.

  1. Before we configure anything we'll need to build pg_standby from source. It's not provided with the default Joyent Postgres installation.
    1. Download the PostgreSQL source code to your standby server. I could not find the source for v8.3.3, so I grabbed v8.3.4. wget works great for this.
    2. Decompress the tarball, then run configure and gmake:
      cd postgresql
      ./configure
      ... lots of output from configure ...
      gmake all
      ... lots of output from gmake ...
    3. Next, build the contents of contrib. This is the folder where all the cool semi-supported utilities (including pg_standby) live:
      cd contrib
      gmake all
      ... lots of output from gmake ...
    4. It's built. Time to install it:
      cd pg_standby
      sudo cp ./pg_standby /opt/local/bin/pg_standby
      sudo chmod 755 /opt/local/bin/pg_standby
  2. We now have pg_standby. You can verify that it's on the path by running which pg_standby. Next we'll stop the standby server to prepare for a checkpoint backup from our primary system. Run this command on the standby server:
    sudo -u postgres pg_ctl stop -D /var/pgsql/data
  3. With the standby server down, log into the primary server and start the hot backup to our NFS mount:
    echo "SELECT pg_start_backup('mybackup');" | psql -U postgres
    sudo tar -cvf /shared/mybackup.tar /var/pgsql/data
  4. While the backup flag is still set, log into the standby system and restore this backup:
    sudo rm -rf /var/pgsql/data
    sudo cp /shared/mybackup.tar /var/pgsql
    cd /var/pgsql
    sudo tar -xvf mybackup.tar
    sudo chown -R postgres:postgres data
  5. With the backup completed, log back into the primary system and clear the backup flag:
    echo "SELECT pg_stop_backup();" | psql -U postgres
  6. Time to configure. On the standby server, create a recovery.conf file in /var/pgsql/data/ with the following contents. This tells Postgres to slurp up the log files via pg_standby:
    restore_command = 'pg_standby -l -d -s 2 -t /tmp/pgsql.trigger.5432 /shared/psql_wal/ %f %p %r 2>>standby.log'
  7. Start up Postgres on the standby server. It will start restoring data right away (since our live DB has already produced some data for it to consume):
    sudo -u postgres pg_ctl start -D /var/pgsql/data
  8. Take a peek at the restore to make sure there aren't too many errors. It may complain about missing files; ignore those warnings:
    sudo -u postgres tail -f /var/pgsql/data/standby.log
  9. Verify that it's still in standby mode. It should complain with this message: 'psql: FATAL: the database system is starting up'
    psql
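
If you want that last check in scripted form (handy for monitoring several standbys), here's a minimal sketch. The function name and its parameter are my own invention; the check command is parameterized so the logic can be exercised without a live cluster:

```shell
# Returns 0 while the standby is still replaying WAL (connections are
# refused with "the database system is starting up"), 1 once the
# server accepts connections.  Note a refused connection could also
# mean the server is down entirely, so pair this with a process check.
standby_in_recovery() {
    psql_cmd="$1"    # e.g. "psql -U postgres -l"
    if $psql_cmd >/dev/null 2>&1; then
        return 1     # connection accepted: server is fully up
    fi
    return 0         # connection refused: still in standby/recovery
}
```

Usage would look like standby_in_recovery "psql -U postgres -l" && echo "still replaying WAL".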

Doing the Failover

Your primary server is busy serving up requests and your warm standbys are slurping up logs every hour. This is great and all, but what do we do when the primary fails? Here’s my process for flipping over to a standby.

  1. Log into the primary server and make sure that postgresql is all the way down.
  2. If the primary server is not totally shut down, take it down the rest of the way; kill -9 the primary postgres process if necessary:
    sudo kill -9 `sudo head -n 1 /var/pgsql/data/postmaster.pid`
  3. Log into the warm standby.
  4. Switch to the postgres user:
    sudo -u postgres bash
  5. Verify that the server is still in standby mode. It should report 'psql: FATAL: the database system is starting up':
    psql
  6. Assuming that it is not up, create the trigger file. This tells our standby server that it's time to become a primary node. It's very important that only one server is primary at a time, so I hope you followed steps 1 and 2:
    touch /tmp/pgsql.trigger.5432
  7. Verify that the standby server is running:
    psql
  8. Get your clients pointing to the new server (change DNS, change IPs in configuration files, etc.)
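
Steps 5 and 6 above can be wrapped in a small helper so you don't promote a server that is already live. This is a hypothetical sketch, not part of Postgres: the function name is mine, the trigger path must match the -t flag given to pg_standby in recovery.conf, and the check command is parameterized so it can be tried without a cluster:

```shell
# Create the pg_standby trigger file, but only if the server is not
# already accepting connections -- two live primaries at once is the
# disaster steps 1 and 2 exist to prevent.
promote_standby() {
    trigger="$1"     # e.g. /tmp/pgsql.trigger.5432
    check_cmd="$2"   # command that succeeds once the DB answers queries

    if $check_cmd >/dev/null 2>&1; then
        echo "server already accepting connections; not promoting" >&2
        return 1
    fi
    touch "$trigger"   # pg_standby sees this file and finishes recovery
    echo "created trigger file $trigger"
}
```

For example, promote_standby /tmp/pgsql.trigger.5432 "psql -U postgres -l" will refuse to act if the standby already answers queries.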


This tutorial was largely adapted from Ichsan’s Using pg_standby for high availability of Postgresql